Columns: doi (string, 17-24 chars), transcript (string, 305-148k chars), abstract (string, 5-6.38k chars)
10.5446/59320 (DOI)
So what I'll talk about is a subject I have been working on for the past two years, slowly improving various results. I spoke about basically the same results in Oxford, so I apologize to the people who were there. Let me start right away by stating the problem; I hope people in the back can see the board. This is the problem in its full generality. Let G ≤ GL_n(R) be real algebraic, and let Γ ≤ G be a lattice. I will not need the general definition, but this means a discrete subgroup such that the Haar measure induced on G/Γ is finite; eventually we will specialize to a simpler case. Let π : G → G/Γ be the usual map, π(g) = gΓ. Now fix, for the rest of the talk, R̄ to be an o-minimal expansion of the real field, and take X ⊆ G to be R̄-definable. [Can you see in the back? Not so much.] Let's use the narrow boards. So we have X ⊆ G, R̄-definable, and we look at the image π(X) inside G/Γ. [Is Γ definable?] No, no, very much not definable; it is a discrete group. And the question is: what is the topological closure of π(X) in G/Γ? As we will see in a second, this is too general, but I should say that in some cases we already know things, even from recent work in model theory. For example, consider the uniformization map from C^n onto an abelian variety A. If you start with an algebraic variety inside C^n and, instead of the topological closure, ask about the Zariski closure of its image, then this is the Ax-Lindemann-Weierstrass theorem: if the variety is irreducible, the Zariski closure of the image is a coset of an abelian subvariety of A. And about a year or two ago, Ullmo and Yafaev looked at this problem and asked what one can say about the topological closure of the image of an algebraic variety. [And the lattice here?] Here the lattice is Z^{2n} inside R^{2n}, identified with C^n. Now, first of all, the problem as I stated it is too general, in the sense that we cannot give good answers. The usual example: take G = SL_2(R) and Γ = SL_2(Z). Already here there are very simple definable subsets of G for which the closure of the image is something like a fractal, like a Cantor set. For example, take D to be the diagonal group and look at the definable set which is just a coset Dg; then you can choose g so that the closure of π(Dg) is very complicated, a fractal. So X is definable, a very simple set, but the closure of the image is very complicated. [What about D itself?] For D itself, I think Γ ∩ D gives a lattice in D, so the image of D will probably just be a circle; I would think it is already closed. Now, this type of problem comes from ergodic theory; it is a dynamical-systems problem. The classical theorem of Ratner, from the early 1990s (the theorem is really formulated in terms of measures, and I will come back to that, but I am stating the topological version) says that if H ≤ G is a unipotent subgroup, then the closure of the image of every orbit is nice.
And then there exists another algebraic group: for every g in G there exists an algebraic subgroup F ≤ G, containing H, such that the closure of π(Hg) is exactly π(Fg). So the closure of an orbit is itself the image of another orbit, an orbit under F inside G/Γ. [Does this need H unipotent?] It is in fact enough that H is generated by unipotent subgroups, but for us unipotent is what we will have. F depends on H and on g, but the dependence on g is only up to conjugation. [You tend to be looking at cosets?] Yes, indeed; we will see why. This is the starting point of what we need. Now, what I want is to work inside G rather than in the image, so let me start with that. If we take g to be the identity, we get that the closure of π(H) is just π(F); and if we pull this back into G, which will be more convenient for us, it says that the closure of HΓ is exactly FΓ. Instead of talking about the closure of π(H), I can talk about the closure of HΓ, the closure now taken inside G, and then apply π; it is the same thing. So we are in a situation where the closure of HΓ is nice: a group times Γ. Notice that if G were abelian, like C^n or R^n, this would be very easy. Why? Because HΓ would itself be a group, so its closure is a Lie group, and you just take the connected component of the identity; that is your F. But once we move out of the abelian case, HΓ is not a group anymore, and there is no a priori reason why you should be able to describe its closure as a group times Γ. [In this case all connected subgroups...] Right: in the case of a unipotent group, which is what we will eventually have, all connected subgroups are real algebraic. Now I want to give this F a name. One way to read F: it is the smallest real algebraic subgroup containing H such that Γ is a lattice in F, which just means that the measure restricted to F modulo Γ ∩ F is finite. I will not go into that; F is uniquely determined, and I want to write F = H^Γ from now on. So given H we find an F containing it which is nice in this sense: the closure of HΓ is H^Γ Γ. This, the existence of such an F and a name for it, is what I want to take out of Ratner's theorem. [F contains H?] Containing H, yes. [And H is unipotent?] Yes, this is for unipotent H; once you work with cosets it is not necessarily true anymore, you have to bring conjugates in.
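For reference, here is the orbit-closure statement and the notation H^Γ as I read them off the board; this is my LaTeX transcription of what was just said, not a claim about the most general form of Ratner's theorem.

Theorem (Ratner, orbit-closure form, as used in the talk). Let $G$ be a real algebraic group, $\Gamma \le G$ a lattice, $\pi : G \to G/\Gamma$ the quotient map, and $H \le G$ unipotent. Then for every $g \in G$ there is a real algebraic subgroup $F$ with $H \le F \le G$, depending on $g$ only up to conjugation, such that
\[
  \overline{\pi(Hg)} \;=\; \pi(Fg).
\]
In particular, for $g = e$ and pulling back to $G$: $\overline{H\Gamma} = F\Gamma$.

Notation. For unipotent $H \le G$ write $H^{\Gamma}$ for this $F$: the smallest real algebraic subgroup of $G$ containing $H$ in which $\Gamma \cap H^{\Gamma}$ is a lattice. Thus $\overline{H\Gamma} = H^{\Gamma}\,\Gamma$.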
Okay, now I am going to simplify the situation considerably. We leave the general Lie-group setting and assume from now on that G itself is unipotent; in particular, all real algebraic subgroups of G will be unipotent. So from now on, and this is the setting of our theorem, G is unipotent, which for us will mean that G is a real algebraic subgroup of the group of upper triangular matrices with 1's on the diagonal and 0's below; up to conjugation this is always the case. In fact all connected subgroups of such a group are real algebraic. Another way to characterize these groups, if you do not want to work with matrix groups: they are the connected, simply connected nilpotent Lie groups. Now, maybe I will put it here because I want to leave the theorem on the left board: it is a fact that for unipotent groups, lattices are what we think of as lattices in the abelian case. Fact: for G unipotent, a discrete subgroup Γ ≤ G is a lattice, meaning the induced measure on G/Γ is finite, if and only if G/Γ is compact. So from now on, whenever I say lattice, we may as well assume that G/Γ is compact. And now I will state the theorem whose proof we will be talking about; let me try to fit everything on this board. So R̄ is again o-minimal and we have X ⊆ G definable. To make it easier, let us take X to be closed; we are going to take closures anyway, so we may as well assume it. I do not have a lattice yet, but already we can extract information from X. Then: there exist a number r in N, finitely many real algebraic subgroups H_1, ..., H_r of G of positive dimension, and definable sets C_1, ..., C_r, which we might as well take closed, all of this before Γ is chosen, such that for every lattice Γ in G we can describe the closure of XΓ. I will describe it inside G, which is more convenient; I will say something about the projection afterwards. The closure of XΓ is the following finite union. First of all you take X, of course you need X, and to X you add finitely many sets: for i from 1 to r, the sets C_i H_i^Γ, where I am using the notation from before: for each H_i, depending on Γ, I take the smallest Γ-rational group containing it. I am really using the definable families given to me here; it does not make sense otherwise. And, I guess, we have to multiply everything by Γ. So in some sense what we are doing is reducing the closure problem for arbitrary definable sets to groups; as we will see, these will be exactly the groups that sit on X at infinity, that are in some sense affiliated with X at infinity. If I wanted to present it in terms of the projection, we would just take π of all of this: the closure of π(X) is just π of this set. Notice one thing, which I think is not obvious: even though Γ appears along the way, what we have is a definable set times Γ. There are finitely many groups and finitely many definable families of cosets of these groups, so the set itself is R̄-definable. Of course the closure is not R̄-definable, we cannot avoid Γ, but it is an R̄-definable set times Γ. And moreover, and this is important, for each i from 1 to r the dimension of C_i is strictly less than the dimension of X.
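Here is the statement just written on the board, assembled in one place; the notation $C_i$, $H_i$, $H_i^{\Gamma}$ is the one introduced above, and the LaTeX rendering is mine.

Theorem. Let $G$ be a unipotent real algebraic group, $\bar R$ an o-minimal expansion of the real field, and $X \subseteq G$ a closed $\bar R$-definable set. Then there are $r \in \mathbb{N}$, real algebraic subgroups $H_1, \dots, H_r \le G$ of positive dimension, and closed definable sets $C_1, \dots, C_r \subseteq G$, none of which depends on a lattice, such that for every lattice $\Gamma \le G$,
\[
  \overline{X\Gamma} \;=\; X\Gamma \;\cup\; \bigcup_{i=1}^{r} C_i\, H_i^{\Gamma}\, \Gamma ,
\]
and hence $\overline{\pi(X)} = \pi\bigl(X \cup \bigcup_{i=1}^{r} C_i H_i^{\Gamma}\bigr)$ in $G/\Gamma$. Moreover $\dim C_i < \dim X$ for every $i$.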
[Here you multiplied everything by Γ?] Ah, right, that extra one is not necessary; thank you, it is good that someone is counting parentheses. The second point is important, and I am not sure we will get to it: some of the H_i are contained in other H_i's above them, but for the H_i which are maximal with respect to inclusion, the corresponding C_i are actually bounded, and then the set C_i H_i^Γ Γ is already closed. So some of these sets are not closed individually; they only become closed when we take the union, as if we were adding boundary components. But for the maximal ones C_i is bounded, and then it is easy to see: when you multiply a compact set by an algebraic group and by Γ, you still get a closed set. Okay, let me make some remarks about this. [Maximal in the collection?] Yes, maximal in the collection. [Why do you take the H_i real algebraic? H_i^Γ is also real algebraic, so why not take H_i^Γ to start with?] I could, but then it would be a weaker theorem, because here the same H_i work for all Γ: you just take the Γ-closure of the H_i. You could formulate the result for each Γ separately, but the strength is that you have an a priori family, extracted from X, and you just apply Ratner's result to each of the groups to get the closure. Okay, some comments. Let us take the case where X is a curve and see what the theorem says when the dimension of X is 1; I will give an example in a second. In this case, by the dimension inequality, the C_i are finite. So all we are doing is taking the curve and adding to it finitely many cosets; the closure is obtained by adding finitely many cosets and that is it, you do not need more. Because the dimension of each C_i is 0, the closure of XΓ is just XΓ union finitely many cosets g_j H_j^Γ Γ, say for j from 1 to k, maybe not the same r. So let us do the example; if you have heard Sergei and me talk about this, it is the one I usually give. Take G = (R^2, +); as written this is not unipotent, but it is algebraically isomorphic to a unipotent group, so we can embed it in our setting. Take X to be the hyperbola, all (x, y) with xy = 1, and to simplify let us just look at the first quadrant. Take Γ to be just Z^2. It is not hard to see that when you take the closure of X + Z^2 you are basically moving along the two asymptotic directions, and what you will be adding is two groups: H_1, the x-axis (there is no extra point here, it really is a group), and H_2, the y-axis. The closure, let me write it additively since we are now inside R^2, is exactly (X ∪ H_1 ∪ H_2) + Z^2. I will stick with this example; maybe we will do another one later. I should say that the example is a little misleading, because H_1 and H_1^Γ are the same: H_1 ∩ Z^2 is a lattice in H_1, and likewise H_2 ∩ Z^2 is a lattice in H_2, so there was no need to take H_1^{Z^2} or H_2^{Z^2}. If I change the lattice and make it an irrational lattice, then H_1^Γ and H_2^Γ both become the whole of R^2, so the closure of X + Γ is everything, the whole of R^2.
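The example just drawn, written out compactly in the additive notation of the talk:

Example. $G = (\mathbb{R}^2, +)$, $X = \{(x,y) : xy = 1,\ x > 0\}$, $\Gamma = \mathbb{Z}^2$, and $H_1, H_2$ the two coordinate axes. Then
\[
  \overline{X + \mathbb{Z}^2} \;=\; (X \cup H_1 \cup H_2) + \mathbb{Z}^2 ,
\]
and here $H_i^{\mathbb{Z}^2} = H_i$ because $H_i \cap \mathbb{Z}^2$ is a lattice in $H_i$. For an irrational lattice $\Gamma$ one gets $H_1^{\Gamma} = H_2^{\Gamma} = \mathbb{R}^2$, so $\overline{X + \Gamma} = \mathbb{R}^2$.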
So again, notice that the operation H ↦ H^Γ is invisible in this example, because H_1^{Z^2} is the same as H_1 and likewise for H_2; both are rational with respect to the lattice Z^2. Okay, second remark. It turns out that a theorem of this kind already exists in the ergodic-theory literature: there is a theorem of Shah from 1994 (so Ratner's theorem is in fact a bit earlier) which is more general than what I will state, but this version will be enough for us; he did not even need G to be unipotent. Assume you have a real polynomial map p = (p_ij) from R^d to G, in several variables, thank you, it is good that someone is keeping count, whose image lands inside G, and take X to be exactly the image of this polynomial map. Then there is a very strong version of the result. Let gH be the smallest coset of a real algebraic subgroup H of G containing X; just take the intersection of all cosets of real algebraic groups which contain X. Then for every lattice Γ (from now on, whenever I write Γ I mean a lattice in G), the closure of XΓ is exactly the closure of gHΓ, which is just g H^Γ Γ. So when you take the image of a polynomial map as your definable set, which is obviously definable, all you need is a single coset, and it captures the whole closure. If I have time we will come back and see how to deduce this result from our work. I should also say that Shah's theorem does not assume unipotence, but then one has to be slightly more careful about the notion of a polynomial map.
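Shah's theorem as quoted, in the form used here; this is my LaTeX transcription, specialized to the unipotent setting of the talk (the general statement does not require $G$ unipotent).

Theorem (Shah, 1994). Let $p = (p_{ij}) : \mathbb{R}^d \to G$ be a polynomial map whose image lies in $G$, and set $X = p(\mathbb{R}^d)$. Let $gH$ be the smallest coset of a real algebraic subgroup $H \le G$ with $X \subseteq gH$, i.e. the intersection of all such cosets. Then for every lattice $\Gamma \le G$,
\[
  \overline{X\Gamma} \;=\; \overline{gH\Gamma} \;=\; g\, H^{\Gamma}\, \Gamma .
\]
So for the image of a polynomial map a single coset captures the whole closure.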
Third remark; that was remark two, I guess, so this will be three, and I will first say it in words. As I said, the theorems from ergodic theory, both Shah's and Ratner's, are not really formulated in terms of closures (in Shah's case you have to extract the closure) but in terms of ergodic theory, in terms of convergence of measures: what are called equidistribution results. Shah makes sense of what it means for X to be equidistributed, and shows that X is in fact equidistributed inside gH^Γ Γ. I do not want to define equidistribution, but I still want to talk about it; I know, we do not like that. It turns out that for definable sets in o-minimal structures (I will give an example even without defining it, and I am sorry for being vague) closure and equidistribution in the closure are not the same. [They differ, but if you do not say what it is, why should we care that they differ?] Fair enough; still, let me give an example without defining anything. Look at R^2 with Γ = Z^2 and X the curve of all (t, ln t) with t > 0. Geometrically this is a very simple curve, definable in the real exponential field, and it follows from what we are doing that the closure of X + Z^2 is all of R^2; but X is not equidistributed in R^2, in whatever language, whatever that means. Let me put that on record. Actually this was amusing: without knowing any of this, at the meeting in Oxford, Alex Wilkie spent the first part of his talk on equidistribution and immediately pointed out that this curve is not equidistributed, so these are not complicated statements. What we prove (at this point it is funny to call it a theorem when I have not defined equidistribution, so let me call it an observation) is that in the polynomially bounded case, at least for R^n, the two notions coincide: as long as the set, or the curve, is definable in a polynomially bounded structure, then if the closure of X + Z^2 is R^2, X is also equidistributed there. So, very vaguely: closure and equidistribution are the same in the polynomially bounded case, meaning the o-minimal structures in which every definable function is eventually bounded by a polynomial. Okay, I want to spend the last fifteen minutes or so on the proof, so I will leave the result on the board and at least give one idea that helps describe the closure. Let me set up some notation; so far there has not been much model theory. Take R* to be an elementary extension of R, elementary with respect to everything, so if you like a big ultrapower of the full structure, not only of the o-minimal structure; I need this in order to talk about the lattice in the elementary extension, because I will be moving from R to R*. Notation: for X ⊆ R^n, let X^# denote the realization of X in the big structure. As usual we have the valuation ring O, all α in R* bounded in absolute value by some n in N, and we let μ be its maximal ideal of infinitesimals: all α with |α| < 1/n for every n. But I am really interested in O and μ on the group G, not so much on R. We could have managed without this, but what helps is that when G sits inside the unipotent upper triangular matrices, G is closed in R^{n^2}: the diagonal entries are 1, the determinant is 1, so we cannot approach matrices of determinant zero. Then let O_G be O^{n^2} ∩ G^# (thank you, O to the n^2, and μ to the n^2) and let μ_G be (μ^{n^2} + I) ∩ G^#, with I the identity matrix. Now we have the standard part map from O_G onto G, where G means the real points; I am calling the extension G^#. Its kernel is exactly μ_G, and in fact O_G is a group, μ_G is normal in O_G, and O_G is the semidirect product of μ_G and the real points G(R). [Above, it should be G^#?] Yes, you are right, that should be G^#; I prefer to keep writing G for the real points, but you are right, thanks. And to fix notation: for any set Y ⊆ G^# I will write, abusing notation, st(Y) for the standard part of Y ∩ O_G. The standard part is only a partial map, so strictly speaking I should not write st(Y), but I will. The simple observation, one of those simple facts we use all the time when we teach this material, is that for any X ⊆ G, one way to get the closure of X is to go to the elementary extension and take the standard part.
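The nonstandard setup, collected in one place; LaTeX rendering is mine, with $n$ the matrix size and $I$ the identity matrix.

\[
  \mathcal{O} = \{\alpha \in R^{*} : |\alpha| \le n \text{ for some } n \in \mathbb{N}\}, \qquad
  \mu = \{\alpha \in R^{*} : |\alpha| < 1/n \text{ for all } n \in \mathbb{N}\},
\]
\[
  \mathcal{O}_G = \mathcal{O}^{\,n^{2}} \cap G^{\#}, \qquad
  \mu_G = (\mu^{\,n^{2}} + I) \cap G^{\#}, \qquad
  \operatorname{st} : \mathcal{O}_G \to G(\mathbb{R}), \quad \ker(\operatorname{st}) = \mu_G ,
\]
with $\mathcal{O}_G = \mu_G \rtimes G(\mathbb{R})$. For $Y \subseteq G^{\#}$ we abbreviate $\operatorname{st}(Y) := \operatorname{st}(Y \cap \mathcal{O}_G)$, and the basic fact used below is
\[
  \overline{X} = \operatorname{st}(X^{\#}) \quad\text{for any } X \subseteq G .
\]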
This is very easy to see. So all of this was in order to go back to our problem: we are trying to understand the closure of XΓ. Assume again that X ⊆ G is R̄-definable. The closure of XΓ can be written as the standard part of (XΓ)^#, and this is the same as the standard part of X^# Γ^#. What we want to do, and it turns out to have a really nice geometric meaning, is the following: we now have the closure as the image of a map, and we want to divide the domain of the map according to the complete types on X. So I will write it as a union, over all complete types p on X in the o-minimal language, of st(p(R*) Γ^#). I am going to take the standard part of X^# Γ^# type by type, and for each type we will try to understand what it is; notice that if we understand the standard part for each type, we get what we want. So the heart of the problem is to understand the standard part of one type times Γ^#: what is st(p(R*) Γ^#)? Let me first do a very simple example, from which of course we will not get anything new. Assume p is the type of an element α which is bounded, i.e. lies inside O_G, so it is infinitesimally close to some element of G. It is very easy to see that what we get is essentially the monad of the standard part of α, or in fact only part of the monad, but nothing more: when we take the standard part, st(p(R*) Γ^#) is exactly st(α) Γ, and st(α) is an element of the closure of X; but we said X is closed, so it is an element of XΓ. This is obvious, because when we take the closure of XΓ we in particular get XΓ itself. So the bounded types, the types which live inside the standard part, contribute exactly the X part of the closure, which of course we have to have. The interesting part is the unbounded types, the types which live at infinity, and here we introduce the following notion: the nearest coset to a type. Definition: for any α in G^#, so in the elementary extension, and for g in G and H ≤ G real algebraic, both in the real world, we say that gH is near α if, up to an infinitesimal on the left, you get there: if α lies in μ_G (gH)^#, equivalently if μ_G α intersects (gH)^#. For example, if we take the curve of all (x, 1/x) and a nonstandard unbounded element α on it, then of course the x-axis is near this point. [You need to multiply by the star?] Yes, thank you, (gH)^#; α may be at infinity. And the first result we prove is that there is indeed a nearest one: for any α in G^# there exists a smallest coset gH near α. Notice that the whole group G is near α, so every element has some coset near it; the only question is when you can do better than the whole group. So there is one coset gH which is contained in all other cosets near α, and we denote it g_α H_α; of course g_α is not unique, any representative can be chosen, but H_α is unique. And it is easy to see that if β is equivalent to α over R in the o-minimal language, actually even in the semialgebraic language, then g_β H_β
equals g_α H_α. So g_α H_α is really a property of the type of α, and we will denote it, for the next two minutes, by g_p H_p, where p is the type of α over R; it is the nearest coset to the type. I should say that this is not true in SL_2(R) if you allow arbitrary algebraic subgroups: you can have two cosets of algebraic groups which are both near an element but whose intersection is empty. [But not here?] Not here; in a second, in a second. So the theorem we prove, and I will finish with this, is that for every complete o-minimal type p concentrated on X, the standard part of p(R*) Γ^# is exactly, let me do it in two steps, the closure of g_p H_p Γ, which is the same as g_p H_p^Γ Γ. So at infinity what matters is the nearest coset, and the nearest coset is what determines the closure of p(R*) Γ. As a result, going back to where we left off, the closure of XΓ is just the union, an infinite union at this point, over all types p on X, that is, over all complete types p with p ⊢ x ∈ X, of these sets. I will stop here: at the end we of course have to get from here to the theorem, and right now it is a union over types, which is not something you can obviously handle; so there is more work to do, and more model theory, to get the finite statement.
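The results stated at the end of this part, for reference; $g_p H_p$ is the nearest coset to the type $p$ as defined above, and the LaTeX rendering (in particular the shape of the final union) is my reading of the board.

Theorem. For every $\alpha \in G^{\#}$ there is a smallest coset of a real algebraic subgroup of $G$ near $\alpha$, denoted $g_\alpha H_\alpha$ (here $H_\alpha$ is unique); it depends only on $p = \operatorname{tp}(\alpha/\mathbb{R})$, so we write $g_p H_p$.

Theorem. For every complete o-minimal type $p$ concentrated on $X$,
\[
  \operatorname{st}\bigl(p(R^{*})\,\Gamma^{\#}\bigr) \;=\; \overline{g_p H_p\,\Gamma} \;=\; g_p\, H_p^{\Gamma}\,\Gamma ,
\]
and consequently
\[
  \overline{X\Gamma} \;=\; \bigcup_{p \,\vdash\, x \in X} g_p\, H_p^{\Gamma}\,\Gamma ,
\]
an a priori infinite union over types, which further work turns into the finite union of the main theorem.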
Let G be a real algebraic unipotent group and let Lambda be a lattice in G, with p:G->G/Lambda the quotient map. Given a definable subset X of G, in some o-minimal expansion of the reals, we describe the closure of p(X) in G/Lambda in terms of definable families of cosets of real algebraic subgroups of G of positive dimension. The family is extracted from X independently of Lambda.
10.5446/59321 (DOI)
This is joint work with Pedro Andrés Estevan; I think he has more names, but these are the ones I remember. I made the slides last night, so I do not remember them very well. So: we start with a complete theory and some type, we assume that it does not fork over some subset of its domain, and we want to know what we can say about its restriction to that subset. More precisely, suppose P is one of the properties stable, simple, or NIP (it is supposed to be a capital letter): is the same true for the restriction? Let me give the definitions for those who do not know them; they all make sense for partial types. A partial type π is stable if every complete extension of it is definable over the domain over which it is defined; equivalently, there are no sequences (a_i), (b_j) such that the a_i realize the type and, together with some formula, they witness the order property. NIP is the same kind of definition: a partial type is NIP if there are no sequences (a_i) and (b_S), for S ranging over subsets of omega, and a formula φ, such that the a_i realize the partial type and φ witnesses the independence property with respect to them, that is, φ(a_i, b_S) holds if and only if i is in S. The roles of the a_i and the b_S may be reversed: I can require instead that the b's realize the type, and it gives the same definition; the same is true for the stable definition, where I can swap the a_i and the b_j. For simple, however, this fails. The definition: π is simple if there is no k < omega, no tree of tuples, and no formula witnessing the tree property, meaning that the formulas along any branch are consistent with the type while the formulas coming from the same node are k-inconsistent. Unlike the NIP and stable cases, here you cannot reverse the roles of x and y, and the triangle-free random graph is an example where reversing the roles gives one definition but not the other; that is due to Artem. But I am not actually going to talk about simple types; I just put the definition here.
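The three definitions written out for a partial type $\pi(x)$ over a set $A$; this is my LaTeX rendering of the slides, and the precise indexing conventions are mine.

(1) $\pi$ is stable if there are no formula $\varphi(x,y)$ and sequences $(a_i)_{i<\omega}$, $(b_j)_{j<\omega}$ with $a_i \models \pi$ and $\models \varphi(a_i,b_j) \iff i < j$. Equivalently, every complete extension of $\pi$ is definable over its domain.
(2) $\pi$ is NIP if there are no formula $\varphi(x,y)$ and sequences $(a_i)_{i<\omega}$, $(b_S)_{S \subseteq \omega}$ with $a_i \models \pi$ and $\models \varphi(a_i,b_S) \iff i \in S$. In (1) and (2) the roles of the $a$'s and the $b$'s can be exchanged.
(3) $\pi$ is simple if there are no formula $\varphi(x,y)$, $k<\omega$, and tree $(b_\eta)_{\eta \in \omega^{<\omega}}$ such that $\{\varphi(x,b_{\eta\restriction n}) : n<\omega\} \cup \pi$ is consistent for every $\eta \in \omega^{\omega}$, while $\{\varphi(x,b_{\eta^\frown i}) : i<\omega\}$ is $k$-inconsistent for every node $\eta$. Here the roles of $x$ and $y$ cannot be exchanged.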
Now, I started the talk with forking, but somehow everything is easier when you look at co-forking. So let us say that tp(a/B) does not co-fork over A if tp(B/Aa) does not fork over A. And here is an exercise, which I am going to solve, don't worry: if p is a type over B which does not co-fork over A, and p is stable or NIP, then so is its restriction to A. Let us do the NIP case. What I will use is that, for a partial type, having IP is equivalent to the following: there is a sequence of realizations of the type, indiscernible over the domain of the partial type, and some b, such that φ(a_i, b) holds if and only if i is even; you can take this as the definition. Then: if B is independent from a over A, which is what not co-forking says, and the restriction of p to A has IP, take such an indiscernible sequence witnessing IP for the restriction; by non-dividing you can make it indiscernible over B, and you get a contradiction with p being NIP. So let us look at the following corollary: if p is a type over a set B, M is a model contained in B, and p does not fork over M, then, if p is stable, the restriction of p to M is stable. This is a corollary of the previous exercise; here is the proof. First extend p to a global non-forking extension; then p is M-invariant, since one can show that a stable type which does not fork over a model is invariant over that model, and the same actually holds for NIP types, I will get to that in a minute, it is very similar. By stability (stable types are exactly the definable types) it is then M-definable, and a definable type does not co-fork, because it is an heir of its restriction to M. [Why are you proving that p is stable?] I am assuming it; the corollary follows from the exercise: if the global type is stable and does not co-fork over M, then the restriction is also stable. [And it does not co-fork?] Yes, in this case it does not co-fork. [Is M-invariance not sufficient?] M-invariance together with stability implies definability, and definability implies that it does not co-fork; that part is completely general: once you have a definable type, it does not co-fork. [inaudible exchange] Okay, so here is the theorem. I do not actually know the history, but I guess you can see how it is motivated by this corollary: a theorem of Adler, Casanovas and Pillay, generalizing a result of Hasson and Onshuus. They proved that if you have a stable type over a set B, without any assumption on the theory, and a subset A of B, not necessarily a model, such that p does not fork over A, then the restriction of p to A is stable. Their proof used generically stable types. And I wondered why it is so complicated for general sets when for models it is so easy; I just did it. [Stability theory is easy over models.] Yes, I guess that is the reason. So let us talk about NIP. As with stable types: if p is a global NIP type and it does not fork over a set A, then it is Lascar-invariant over A; that is, every Lascar-strong automorphism over A, one which fixes all Lascar strong types over A, fixes the type.
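For reference, the statements just discussed, in my transcription:

Corollary (of the exercise). If $p \in S(B)$ is stable, $M$ is a model with $M \subseteq B$, and $p$ does not fork over $M$, then $p\restriction M$ is stable.

Theorem (Adler, Casanovas, Pillay; generalizing Hasson, Onshuus). If $p \in S(B)$ is stable, $A \subseteq B$, and $p$ does not fork over $A$, then $p\restriction A$ is stable.

Fact. If $p$ is a global NIP type which does not fork over $A$, then $p$ is Lascar-invariant over $A$: every automorphism fixing all Lascar strong types over $A$ fixes $p$.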
So the theorem here is the following. If you have a global NIP type which does not fork over A (we already know that if it does not co-fork over A the restriction is NIP, but here we do not have that) and you generate a Morley sequence I of p over A, then the restriction of p to A together with I is NIP. So to get NIP it is not enough to restrict to A, but it is enough if you add a Morley sequence to it. And without the Morley sequence the statement is not true in general: there is an NTP2 theory, in fact quite a tame one, with a global NIP type which does not fork over a model; in fact it is a coheir over that model, which is the best kind of non-forking apart from definability, and for definable types we know the result holds. The type is NIP, in fact distal and dp-minimal, but its restriction to the model has IP. So this seems like a very strong negation of any hope that the theorem could hold without the extra sequence: it fails even over models, and even if you assume the theory is very nice. NTP2 is, by the way, the simplest case you could hope for, because for simple theories the result is true: if the theory is simple, then forking and co-forking coincide, so you can just use the previous exercise. [Are dependent types stable in simple theories?] Also that; but then you use something stronger than an exercise. Okay. So I thought that for the remainder of the talk I can give an idea of the proof, some applications, and describe the counterexample.
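The main theorem and the counterexample just described, stated in one place; this is my rendering of the slides, and the extra tameness adjectives in the counterexample are as I heard them in the recording.

Theorem. Let $p$ be a global NIP type which does not fork over $A$, and let $I$ be a Morley sequence of $p$ over $A$ (an indiscernible sequence generated by $p$, which makes sense since $p$ is Lascar-invariant over $A$). Then $p\restriction(A \cup I)$ is NIP. The same holds with NIP replaced by stable, and if $p$ is generically stable the sequence $I$ is not needed.

Counterexample. There is an NTP$_2$ theory with a global NIP type $p$ (in fact distal and dp-minimal) and a model $M$ such that $p$ is finitely satisfiable in $M$, so in particular does not fork over $M$, while $p\restriction M$ has IP.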
So let us first deduce from this an easy proof of the stable case, one which does not use generically stable types. It is a bit of cheating, because I did not tell you the proof of the main theorem, but I can guarantee that it does not use generically stable types. First of all, in the previous theorem I can replace NIP by stable and the same proof works: if you have a global stable type that does not fork over A, then its restriction to A together with a Morley sequence is stable. So let us see how to get the theorem of Adler, Casanovas and Pillay. Suppose p is stable and does not fork over A, and suppose towards a contradiction that the restriction to A is unstable. Let M be some model and let c_0 realize the restriction of p to it. By the assumed instability there is an indiscernible sequence J = (b_i), indexed by the integers, such that φ(c_0, b_i) holds if and only if i > 0; that is what the order property gives you. Now let I be a Morley sequence of p over everything we have so far. By stability of the type it is not very hard to see that c_0 still realizes the restriction of p to A, to I, and to the b_i with i > 0. Also, p is Lascar-invariant, as I said before; any NIP type is, and in particular any stable type. So J is indiscernible over A together with I, and this is a contradiction: now we have an element realizing p restricted to A and the Morley sequence, and a sequence J, indiscernible over that domain, which witnesses instability, which is impossible. Okay. So that is the idea; now let me sketch the proof of the main theorem on the blackboard. We assume that p is a global type, that it is NIP, and that p does not fork over some set A; here the proof is not really simpler for models than for sets, maybe a bit, but not much. I is a Morley sequence over A, generated by p. By a Morley sequence I just mean an indiscernible sequence generated by p, and this makes sense because p, being NIP and non-forking over A, is Lascar-invariant over A; not invariant, but Lascar-invariant is enough. And we want to show that p restricted to A together with I is NIP. So what does the proof give? Suppose not. Then there is some formula φ(x, y), some a realizing p restricted to A and I, and some sequence (b_i) such that φ(a, b_i) holds if and only if i is even. So this formula has IP. However, there is some σ, another formula in p, which implies that φ is NIP; this is by compactness. What I mean by that (this quick phrase is not very precise) is that whenever you have a sequence indiscernible over the parameters of σ, you cannot have an element realizing σ whose φ-truth-values alternate along the sequence; by compactness such a formula exists. And further, there is another formula, also in p, which implies that σ is NIP. We could go on like this forever, but we stop here. Now what the proof gives is that there exists some parameter; there is already a c around, the parameter of σ, so let me call the new one c*, of the same sort as c. Actually, the first step of the proof is to assume that I is a Morley sequence over A together with d, where d denotes the extra parameters around; you can always arrange that, and if you prefer not to, just assume d is empty. [Which symbol is the d? It looked like an alpha.] Ah, yes, sorry, it is a d; there is no alpha at all.
Okay, so we assume that I is a Morley sequence over A and d; we can assume this without any loss. And then what we get is that we can find some c*, which will incidentally have the same type as c over A, even the same Lascar strong type, such that σ(a_i, c*) holds if and only if i is even. That is a contradiction, because the sequence I is indiscernible over d, all of its elements realize the relevant part of p, and the choice of the next formula, the one in p that implies σ is NIP, tells you that such alternation is impossible. To get this you have to do some work, but it is not very hard, I can promise you; the proof is quite local. If you only want to show that the restriction of p to φ is NIP, you need one more formula of p to be NIP. It would be nice if this were completely local, but at least in this proof it seems we need a couple more formulas, or at least one more formula, to be NIP. [By the way, where do you use M being a model?] Only when generating the sequence: otherwise, why would the generated sequence be indiscernible? You need some invariance, and it is not really invariant over a set, only Lascar-invariant; I think that is the only place it is used. I would also like to mention, though I did not write it in the slides, that if the type is generically stable, then the same proof as in the stable case gives the result without the I: for generically stable types you can drop the Morley sequence. Okay, so I have used the blackboard; now let me describe the example. The example is this: you take the theory of meet-trees and you put a random graph structure on the open cones starting at each point. This sounds like something Pierre would say, but I will go into a little more detail. Let us call this theory DTRR, unless you have a better name. Here are the axioms. First, the language: you have a meet-tree, so you have the order, the meet, and a ternary relation R. The theory DTRR is the model completion of the following universal axioms. First of all, the reduct to the order and the meet is a meet-tree. Then the part about the random graph: we have R(x, y, z), and you should think of x as the base and of y and z as being connected at x, so to each x there is an associated graph. One axiom says that it really is a graph, so you can switch y and z: R(x, y, z) implies R(x, z, y). Another axiom says that in the tree, x lies below y and z and is in fact equal to their meet: R(x, y, z) implies x = y ∧ z; that is the level at which the graph lives. And the next axiom: if y and z are connected with respect to x, and x is strictly below the meet of z and z', then y and z' are also connected with respect to x.
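The axioms just listed, written out; the symbols are mine, with $\wedge$ denoting the meet.

Language $\{\le, \wedge, R\}$ with $R$ ternary. DTRR is the model completion of:
(1) the reduct to $\{\le, \wedge\}$ is a meet-tree;
(2) $R(x,y,z) \rightarrow R(x,z,y)$;
(3) $R(x,y,z) \rightarrow x = y \wedge z$ (so in particular $x \le y$ and $x \le z$);
(4) if $R(x,y,z)$ and $x < z \wedge z'$, then $R(x,y,z')$.
The relation $z \mathrel{E_x} z' :\iff x < z \wedge z'$ is an equivalence relation on the elements strictly above $x$, and by (4) the relation $R(x,\cdot,\cdot)$ is really a graph on its classes, the open cones at $x$.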
This means that the graph really lives on the cones at x, not on the points: the relation "x is strictly below the meet of z and z'" is an equivalence relation on the elements above x, and the graph is on its classes. [But the vertices are not named points, right?] If you like; and if you do not like the interpretation I gave, you can just look at the axioms and check everything directly. Okay, so these are the axioms. Now let me tell you what the type is; I promised you a type. Maybe I should also say that the theory is omega-categorical, and even better than that, in addition to all the other properties I mentioned. I want to give you a type which is NIP and does not fork over a model, but whose restriction has IP. So take some model M; remember, it is a tree with this extra relation. Take a branch B in it, that is, a maximal chain, and look at the type, over the branch, which says "I am bigger than everything in B". Notice that this actually determines a complete type over the whole model: knowing that you are bigger than everything in B gives a complete type, even in the language with R, because of the cones business, if you think about it. So this gives a complete type p over M. Now take any realization C of it, and let π be the partial type which just says x < C. Then of course π is finitely satisfiable in M, even in B. And now we have to show that p has IP and that π is NIP. Let us show the first one; this is the other place where I planned to use the blackboard. We have C realizing p; let us find some d realizing p and some b_i realizing p such that b_i ∧ b_j = d for i ≠ j. So you have b_0, b_1, b_2 sitting above d, and below that the branch B. This we can definitely do, because the type only says "bigger than B". Now, for any set S of indices, I claim that the formulas saying that b_i is connected to x with respect to x ∧ b_i, that is, R(x ∧ b_i, b_i, x), for i in S, together with their negations for i not in S, are consistent with p. Why is this consistent with p? Because I can find a realization c which starts a new cone at d, so c ∧ b_i = d for every i, and which is connected, in the graph at d, to exactly the b_i I want and not connected to the ones I do not want. Good; so p has IP. Okay, now let us prove that π is NIP. Remember, π says that x is below C. The idea is that once you are below C, things are supposed to look roughly like a linear order. So suppose we have some a realizing π, so a is above B and below C, and some indiscernible sequence witnessing IP with some formula. The point is that the sequence is indiscernible over M together with C, because assuming π has IP means, by the earlier characterization, that there is such a sequence indiscernible over the domain of π. By quantifier elimination, and since dense meet-trees themselves are NIP, we can assume the formula looks as follows.
The formula is R(t_1(x, y), t_2(x, y), t_3(x, y)), where t_1, t_2, t_3 are terms in the language of trees. I can also assume that the sequence I is indiscernible over M and a (and there should be a C there too, I think) in the language of trees, again because the pure theory of dense trees is NIP. Now let us see what the axioms imply. First, t_1 is the meet of t_2 and t_3. Why? Because R holds for some members of the sequence, and one of the axioms (maybe I should have written them on the board) is that R(x, y, z) implies x = y ∧ z; since we may assume the sequence is indiscernible over a in the tree language, if the equality holds sometimes, it holds always. We also get that a appears in t_2 but not in t_3, or vice versa, and that t_3 is not comparable with t_2; this also follows from the axioms. So t_1 has to be below t_2: maybe I should have said it, but if a appears in a term then that term is less than or equal to a, so t_2 is at most a, and t_1 is at most that. So, even if you did not follow all of this, the picture is: t_1 is the meet of t_2 and t_3; above t_1 on one side you have t_2, then a, then C; on the other side you have t_3. By the drawing you can see that t_1 is also the meet of t_3 with C. But then t_1 is strictly smaller than the meet of t_2 and C, which is just t_2. So by the axioms t_2 and C are equivalent modulo t_1 (I told you this is an equivalence relation), which means that in the formula I can replace t_2 by C. But that is now impossible, because the sequence I is indiscernible over M and C, so after the replacement the truth value cannot alternate. Okay, so this is the proof, and that is the example. To prove that it has all the nice properties I claimed you have to work a little harder, but not much harder, surprisingly. Okay, so let me end the talk with some questions. First of all, Artem asked in his thesis what happens for simple types. I had hoped that this new proof for the stable case might help solve that question, but so far I could not; I did not think about it too much, but it would still be nice to know what exactly happens for simple types. [What do you expect to happen?] I guess I expect the same as in the stable case. But the problem is that for simple types you also have inconsistencies, the tree property, so it is not like the stable case; the same proof cannot work, somehow. [Is that true in (inaudible) theories?] I don't know; good question. I do not think we know anything about forking there, really, not even over models. The next question, which I thought about during the flight, is this. In the example, we needed one element of the Morley sequence to get NIP: we had this global type p, and once we realize one element of the Morley sequence, namely C, the restriction is NIP. So the question is: is it always the case that you only need one element from the sequence? I guess not.
But can you find actual examples where you need more than one, where you need two or three? Yeah, okay, that's it. Thank you very much. [Questions for Itay? Another question: so your Morley sequence, maybe one element of the Morley sequence... Go ahead. One element.]
Adler, Casanovas and Pillay proved that if p is a complete stable type over a set B which does not fork over a set A, then the restriction of p to A is also stable. I will address the analogous question, replacing stable with NIP. In addition I will present a new proof for the stable case which uses elementary techniques.
10.5446/59322 (DOI)
So this is joint work with Krzysztof Krupiński, at least most of it; pretty much all of the concrete statements I will make are joint with Krzysztof. First, maybe I should say that we have a blanket assumption that the theories we work with are countable; sometimes other things will also be countable and we will not say it out loud, but I do not think you should worry too much. The main goal of this project is to understand strong type spaces; if you do not know what they are, I will explain in a minute. The idea is that in my previous work with Krzysztof and with Anand, we studied these spaces and somehow it seemed like they behave a lot like quotients of compact Polish groups. But back then we did not quite manage to express them that way, so we had ad hoc arguments for various things that would follow from such a presentation. Last year, Krzysztof and I managed to show, in a very strong sense, especially under an NIP hypothesis, that these strong type spaces, as well as the Galois groups and quotients of type-definable groups, all behave like quotients of compact Polish groups. This observation, and the theory that led to it, can be used to recover essentially all known theorems about the cardinality and the so-called Borel cardinality of strong type spaces and of quotients of type-definable groups. So we start with a type-definable set X (C is my monster model), and we say that an equivalence relation on this set is invariant if it is invariant under automorphisms of the monster model. We say that it is bounded if it has a small number of classes: smaller than the cardinality of the monster model if the monster is saturated in its own cardinality, and otherwise smaller than the degree of saturation. A strong type is simply a bounded invariant equivalence relation which in addition refines the relation of having the same type over the empty set, which I denote by three bars. A strong type space is simply the quotient of a type-definable set by a strong type defined on that set. Particular examples of strong types are the classical ones: the Shelah strong type, the Kim-Pillay strong type, the Lascar strong type. But I will not focus on particular ones in this talk. A related notion, which I will use a little, is that of the connected component of a group. If we have a type-definable group over the empty set (the set X above was also type-definable over the empty set; whenever I do not say what something is type-definable over, it is over the empty set), then the connected component is simply the smallest subgroup which is type-definable over the empty set and has small index in G. Given a strong type space, we have a canonical topology on it: if X is type-definable and E is a bounded invariant equivalence relation on X, then a subset of the quotient X/E is closed in the logic topology if its preimage in X is type-definable; type-definable with parameters, but equivalently type-definable over any fixed model. It is well known that this topology is compact, essentially because the set X is type-definable, but it is Hausdorff only if the equivalence relation is type-definable. In addition to the topology, these quotients also have a well-defined Borel cardinality. I will not define that notion; I will try to work around it and not get too technical.
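The notions from this part, collected for reference; this is my LaTeX summary, with $\mathfrak{C}$ the monster model.

For a type-definable set $X$ (over $\emptyset$) and an equivalence relation $E$ on $X$:
$E$ is invariant if it is $\operatorname{Aut}(\mathfrak{C})$-invariant; bounded if it has a small number of classes; a strong type if it is bounded, invariant, and refines $\equiv$ (equality of types over $\emptyset$).
For a type-definable group $G$ over $\emptyset$, $G^{00}_{\emptyset}$ is the smallest subgroup of $G$ of bounded index which is type-definable over $\emptyset$.
Logic topology on $X/E$: a set $Z \subseteq X/E$ is closed iff its preimage in $X$ is type-definable (with parameters, equivalently over any fixed model). This topology is always compact, and it is Hausdorff iff $E$ is type-definable.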
In particular, we also have the logic topology on the quotient of a type-definable group by its connected component, simply because the relation of lying in the same coset is a bounded invariant equivalence relation on the group itself. Maybe I should make a remark here: this topology is Hausdorff exactly when the relation is type-definable. So when the equivalence relation is type-definable, the topology somehow gives us the full information about the quotient, whereas when it is not, the topology can frequently be completely useless (it can even be indiscrete), and then maybe the Borel cardinality is more useful. That was just a remark. So before I go to the main theorem, I want to look at some toy examples. If you consider a type-definable group G and its connected component, then the quotient is a compact Hausdorff Polish group with the logic topology. That it is compact Hausdorff follows essentially from what I said before, because G is type-definable and the coset equivalence relation here is type-definable; but in fact it is also a topological group, that is, the group operations are continuous with respect to the logic topology. Now if we take any subgroup H of this group G which contains the connected component (and, I should also say, is invariant over the empty set), then the quotient G/H and the quotient of G/G^00 by H/G^00 (this should be over the empty set) are essentially the same, in a very strong way. The important point here is that the group G/G^00 is a compact Polish group: on one side we have a quotient of a type-definable group by a subgroup, and on the other a quotient of a compact Polish group by a subgroup. And, as I said, this is what we want in general. For strong types it is a bit more difficult. I do not want to say too much, but there is an object called the Kim-Pillay Galois group; if you do not know what it is, you do not really need to, it is just a canonical compact Polish group associated with a given first-order theory. Given a complete type p over the empty set and a strong type E which is coarser than the Kim-Pillay strong type, on the set of realizations of this single complete type, the Galois group acts transitively on the set of classes of E. Similarly to what happened with groups, one can see that this strong type space is essentially the same as the quotient of the Galois group by the stabilizer of any one point of the strong type space. But the problem with this approach is that it only works if we have something type-definable below, somehow: if H contains G^00 over the empty set, or if E is coarser than the Kim-Pillay strong type. If you know what the Galois group is, you could think of imitating this approach with the full Galois group instead of the Kim-Pillay group; but unfortunately that group is not Hausdorff, in particular it cannot be Polish. So we need to do something better. As I have said, we can recover a lot of information about these type spaces using this reduction to compact groups, because compact groups are much easier to understand, somehow. One other relatively simple observation is the following: if you have a compact Polish group and a subgroup which is analytic (this just means not too insane; you can think Borel if you prefer), then exactly one of the following holds.
Either the subgroup is open, and then the quotient is simply finite; or the subgroup is closed but not open, and the cardinality of the quotient is the continuum; or it's not closed, and then, because the subgroup is analytic, it still has the Baire property, which is enough: the quotient still has the cardinality of the continuum and, in addition, it is not smooth in the sense of Borel cardinality. If you don't know what that means, maybe you don't need to worry so much for now. Is Borel cardinality an important part of the talk? It's important for the conclusion, but for the ideas I don't think so; I won't get into those details. In particular, if you know what smoothness means, this says that the quotient G/H is smooth if and only if H is closed; but also, more concretely, the index of H is either finite, in which case H is open, or the continuum. It cannot be aleph-zero, for example. So we want to show essentially the same facts for strong type spaces and quotients of type-definable groups. Using these observations and the toy examples I gave before, we can take a single complete type over the empty set and a bounded invariant equivalence relation, or a strong type, equivalently, because it's just one type, on the set of realizations of this type, which is coarser than the Kim-Pillay strong type, so we can apply the things I said two slides before. Then we have a very similar trichotomy: namely, either this strong type is simply relatively definable, in which case the quotient is finite, or it is type-definable and the quotient has the cardinality of the continuum, or it is not type-definable, in which case the quotient still has the cardinality of the continuum and it is not smooth. The idea is that we use the previous trichotomy in this way: we have X, we have X/E, and then we have the group Gal_KP of the theory, and Gal_KP modulo something. By the way, when I said they are the same, we actually have a function here which is a homeomorphism; in fact we have a function which comes from a group action of the Kim-Pillay Galois group on X/E, as I said before. So Gal_KP acts on X/E, and the stabilizer has essentially the same properties as E: the stabilizer of a point is closed if and only if E is type-definable, and similarly the stabilizer is open if and only if E is relatively definable. Using this and the previous trichotomy, what do we do? We take this E, and we pull it upstairs to the Gal_KP group. Now we have the trichotomy from the previous slide: H is open and the index is finite; or H is closed and the index is the continuum; or H is not closed, the index is the continuum, and the quotient is not smooth. I didn't say precisely what I mean by saying that these quotients are the same, but I will make it precise in a minute, in the more concrete case, for the full theorem; it allows us basically to take this conclusion and push it downstairs: if H is open, then E is relatively definable; if H is closed, then E is type-definable; if the quotient upstairs is not smooth, then this quotient is also not smooth. Okay. So, as I said before, we don't want to just look at strong type spaces; we also want to look at type-definable groups, and in basically the same way we arrive at the analogous conclusion for quotients of type-definable groups by invariant analytic subgroups.
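The trichotomy for compact groups, written out; this is a sketch, with Ĝ and Ĥ as placeholder names for the compact Polish group and its analytic subgroup.

\[
\begin{aligned}
&(1)\ \hat H \text{ open, and } [\hat G:\hat H]<\aleph_{0};\\
&(2)\ \hat H \text{ closed but not open, and } [\hat G:\hat H]=2^{\aleph_{0}};\\
&(3)\ \hat H \text{ not closed, } [\hat G:\hat H]=2^{\aleph_{0}}, \text{ and } \hat G/\hat H \text{ not smooth},
\end{aligned}
\]

with exactly one of (1), (2), (3) holding.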
Invariant over the empty set, yes, just the empty set; and I don't really care about connectedness here. If E is the Shelah strong type, then there will be continuum many classes. Yes, continuum many classes, and that's consistent with this: in that case, if E is the Shelah strong type and you go upstairs, you find some subgroup which is an intersection of open subgroups; it's not open, but it is closed and has continuum many cosets. So in the second case E can be definable? No, not definable; it can be an intersection of definable ones, but not definable. Okay. So, as I've said, the theorem we actually proved was a lot more difficult than this: this approach is completely useless for arbitrary bounded invariant equivalence relations, or for quotients by arbitrary bounded invariant subgroups, without this assumption or that assumption. Okay. So, the statement of the main theorem; that's still not the full statement, but some approximation of it. We start, as before, with a single complete type over the empty set and look at its set X of realizations. Then for this type we can find a compact Polish group G hat such that, given any strong type E on this X, we can find a subgroup H hat of G hat such that we have those transfer principles which allow us to do this trick all over again, and maybe some other things as well. Namely: H hat is closed if and only if E is type-definable; H hat is open if and only if E is relatively definable; H hat is analytic provided E is analytic (we don't have an "if and only if" here); I forgot to erase the part about the Baire property, but that is also true. Moreover, we have this statement about Borel reducibility (if you don't know what it is, don't worry): the quotient G hat mod H hat is Borel reducible to X over E, and if the type p has NIP, then they are actually Borel equivalent. This is the main theorem for strong type spaces. But you also have an analogous theorem for a type-definable group G: if you have a type-definable group G over the empty set, we can find a compact Polish group G hat such that, given any subgroup H of G which has bounded index (I forgot to write that it should also be invariant and analytic), we can find a subgroup H hat which has basically all the same properties. So H hat is closed if and only if H is type-definable, H hat is open if and only if H is relatively definable in G, H hat is analytic provided H is analytic, and we have the same kind of reduction as well. So, as I've said, now we can just take the trichotomy I proved before, change a few steps, and basically we're done. Before, we had "coarser than the Kim-Pillay strong type"; now we just take "bounded". So we take a complete type over the empty set, as before, the set X is still the same, and we have an invariant equivalence relation on this X which is now not assumed coarser than the Kim-Pillay strong type, just bounded and analytic, and then we have exactly one of the following. When you say analytic, are you still requiring it to be coarser than the type equivalence? No, it's fine; it's invariant, and that's what makes this make sense. By analytic I mean that E is invariant, so it corresponds to a subset of, however you prefer to call it, S_{X^2}(emptyset): just look at the set of types of pairs which are E-related; it's a subset of this type space, which is a Polish space, so you can think of analytic subsets of it, or Borel if you prefer.
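To keep the statement in view while the corollary is derived, here is a compressed symbolic form of the main theorem; the notation is mine, with \(\le_B\) for Borel reducibility and \(\sim_B\) for Borel bi-reducibility.

\[
\begin{aligned}
&p\in S(\emptyset),\ X=p(\mathfrak C):\ \exists\,\hat G \text{ compact Polish such that for every strong type } E \text{ on } X \text{ there is } \hat H\le\hat G \text{ with}\\
&\hat H \text{ closed}\iff E \text{ type-definable},\qquad \hat H \text{ open}\iff E \text{ relatively definable},\qquad E \text{ analytic}\implies \hat H \text{ analytic},\\
&\hat G/\hat H \ \le_{B}\ X/E,\qquad \text{and } \hat G/\hat H \ \sim_{B}\ X/E \text{ when } p \text{ has NIP}.
\end{aligned}
\]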
So we have this kind of equivalence relation, and then again we have this trichotomy: either E is relatively definable and the quotient is finite, or E is type-definable and the quotient has the cardinality of the continuum, or E is not type-definable, in which case the quotient still has the cardinality of the continuum but it is not smooth. It is not smooth, not just "may not be". As I've said, the proof is essentially the same; the only difference is that here, instead of Gal_KP, we put G hat, and here we have G hat mod H hat. If you recall what I said on the previous slide, that allows us to take the trichotomy at that level and push it down here. For type-definable groups we also have a very similar statement: namely, we start with a type-definable group G and take an invariant subgroup H which has small index and is analytic; then exactly one of the following holds: H is relatively definable and the index is finite; H is type-definable and the index is the continuum; or H is not type-definable, the index is the continuum, and the quotient is not smooth. The idea is essentially the same: here instead of X we now have G, and here G mod H. In particular it implies that if we have a type-definable group and an analytic subgroup of it, then its index cannot be both infinite and smaller than the continuum; for example, it cannot be aleph-zero, as I said before. Maybe I should say that I actually stated this trichotomy three years ago here, but now it's a different proof of it: before, we had an ad hoc argument for each of those cases, and now we have just a reduction to compact groups. You mean the former is what we did in the joint paper? Yes. Actually, we didn't do it there for type-definable groups, but I think it could easily be extended to type-definable groups; the upper corollary was the main result. So now there's a different proof using some of this; it's kind of similar, but somehow more direct, I think. Okay. I should also say that this trichotomy is not true without any assumption on H: you can find Vitali-style subgroups of definable groups which have finite index but are not definable. So the assumption that H is well behaved is somehow essential. Okay. So I don't want to say too much about the proof, because it's quite complicated, but I will just show some of the ideas that appear in it. One of the more important tools, actually for the part I did not linger on too much, this Borel cardinality business, is the following dichotomy. Okay, it doesn't look like a dichotomy; there is something called Rosenthal's dichotomy, and it's not immediately clear how it appears here, but anyway. So now we're moving away from model theory for a minute, so forgive me. We take a compact Polish space X and a set A of continuous real-valued functions on X which is bounded in the supremum norm. Then the following are equivalent. (The closure appearing here should be the pointwise closure in R^X, the space of all real-valued functions on X.) So we consider the set A of functions: it has a supremum norm because it lies in the Banach space of continuous functions, but it is also contained in the space of all real-valued functions on X with the topology of pointwise convergence, and we consider the closure A bar there.
So A bar consists of Borel functions, in fact of Baire class 1 functions, if and only if A bar has the Fréchet-Urysohn property, which means that for any subset of A bar, the closure of that subset is just the set of limits of sequences from it. That's a rather strong property; it's kind of like being metrizable, but weaker. Another equivalent condition is that A does not contain an independent sequence. This should be a bit suggestive: I don't want to say exactly what an independent sequence is, but it's pretty much what you come up with if you try to think of a notion of NIP in Banach spaces, and that is not a coincidence; this condition is closely tied to NIP. Another equivalent condition is that A contains no l^1-sequence, which means, roughly speaking, that the closed subspace generated by A does not contain a copy of l^1 as a Banach space. And if we have such an A, then the closure A bar, equipped with the topology of pointwise convergence, is a compact space, because the functions are bounded in the sup norm. In this case the closure is called a Rosenthal compactum, and any topological space homeomorphic to such a set is also called a Rosenthal compactum. That's one thing from outside model theory that is useful in the proof. The other one comes from topological dynamics; again, I don't want to get into too much detail, I don't have so much time. Was this previous thing only used for NIP? Yes, but it also makes the general case slightly easier; it will appear. So if we have a group G of homeomorphisms of a compact Hausdorff space X, then the Ellis semigroup associated with this action is just the pointwise closure of G in the family of functions from X to X. So we have G inside X^X, just functions, and we take EL(G,X), the closure. It's not so hard to check that this closure is a semigroup if you give it function composition as the semigroup operation, and it's also not hard to check that it's compact Hausdorff and a left-topological semigroup, which means that composition is continuous when you vary the left argument; it is not continuous on the right. Okay. And we say that the action of G on X is tame if this compact semigroup turns out to be Rosenthal in the sense I've just given. What really matters for me, somehow, is that it consists of measurable functions and that it has this Fréchet-Urysohn property. These semigroups, in this context, also come with so-called Ellis groups, which I'm certainly not going to define here. They are certain subsemigroups of the Ellis semigroup which are groups, each with its own identity, and they come equipped with a compact semitopological group structure; it's not the inherited topology, it's a different topology, which is not necessarily Hausdorff. However, each of them has a canonical Hausdorff quotient, and in fact they are all isomorphic as semitopological groups, so we usually say "the" Ellis group, because there is just one isomorphism class. Yes, and we always have this canonical compact Hausdorff quotient. So how does this fit into the model-theoretic business? Recall that what we wanted to do is express the quotient X mod E as a quotient of G hat mod H hat, where X is the set of realizations of a single complete type. Okay, so that's what we started with.
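A compressed record of the two outside ingredients, as I understand them; here X is compact Polish, A is a subset of C(X) bounded in the sup norm, and cl_pw denotes pointwise closure in R^X.

\[
\operatorname{cl}_{\mathrm{pw}}(A)\subseteq\mathcal B_{1}(X)
\iff \operatorname{cl}_{\mathrm{pw}}(A) \text{ is Fr\'echet--Urysohn}
\iff A \text{ has no independent sequence}
\iff A \text{ has no } \ell^{1}\text{-sequence};
\]
\[
EL(G,X)=\operatorname{cl}_{\mathrm{pw}}(G)\subseteq X^{X},\qquad
\text{the action of } G \text{ on } X \text{ is tame} \iff EL(G,X) \text{ is a Rosenthal compactum}.
\]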
So what we do is we choose a countable ambitious model. I don't want to write the definition, but it's homogeneous in a weak sense. Why do you call them ambitious? Because something is an ambit. So we choose a countable ambitious model M which contains a realization of this type p. Oh, there's something missing here: yes, we consider the action of the automorphism group of this model on the space of types S_m(M), the space of types over M of tuples realizing the type of little m over the empty set, where little m is an enumeration of M. If you were here three years ago, I was talking about a similar thing then, but then it was the monster model; I think it's maybe easier to understand now. Okay. So we have this space of types. Now for simplicity I assume NIP, because the next step is a bit easier. Since we assume NIP, which, as I said, is very strongly related to this Rosenthal business, it implies that this action is tame, so the Ellis semigroup of this action is actually a Rosenthal compactum. Okay, this gives us several things, but among others it implies that the Ellis group quotient uM/H(uM) is actually itself already compact Polish, because the Fréchet-Urysohn property can somehow be transferred to it. Okay, not exactly, but it implies that this group has something called countable tightness, which means that the closure of any set is the union of the closures of its countable subsets; and for compact Hausdorff topological groups, countable tightness is equivalent to metrizability. So in this case, yes, to answer your question, NIP is here to make this group easier to construct, because without NIP you have to do an additional step to get a Polish group. So without NIP, what is the Polish group; it's not this one? Yes, it's a quotient of this group; I don't know that this one would be Polish without NIP, I don't think it would be. Polish just means separable? Metrizable; compact and metrizable. Separable is not equivalent, no: 2 to the omega-one, for example, is separable but not metrizable. Okay. Yeah, so as I've said, without NIP we have to work a bit harder to get this group G hat. Okay. That's the construction of G hat, which is actually kind of the easy part, and then we have to do quite a bit more to get all the properties I listed. So, just to give some broad idea: we can show that we have a commutative diagram like this. Here we have the Ellis semigroup of the space S_m(M), here we have the space S_m(M) itself, and we have a continuous surjection from this space onto X_M; here X_M is just the set of types tp(a/M) where a realizes p, or equivalently the set of types over M of elements of X. (Did I write that? Oh, actually I did; I forgot that I wrote it.) Okay, and most of the maps here are not so difficult to understand. EL was a family of functions from S_m(M) to itself, if you go back to the definition, so the map from EL to S_m(M) is just evaluation at the type of the model over itself. For the horizontal arrow upstairs, you can give an explicit formula, which you won't understand if you don't know the theory, but it is a certain natural epimorphism, a semigroup epimorphism, from the Ellis group to the group G hat.
And the other arrows are kind of what you expect. Here you just take the type of an element and go to the E-class of that element. And here, since this x is realized in M, we basically take the sub-tuple of m which is the realization of this type, a realizing p, and just restrict to that sub-tuple of variables. So this part is not that difficult. But it so happens that this map from G hat to X mod E actually factors; actually, the way we constructed it is that we obtained it as a factor of another map: we go through the orbit map of the Galois group, which I don't want to define here, but it's another group associated with the theory, and it acts on X mod E. So we have an epimorphism from G hat to the Galois group such that the function from G hat to X mod E is the composition of a group epimorphism and a group action. It follows that the group G hat actually acts on X mod E, and H hat is simply the stabilizer of the class of the element that we chose here to construct these functions. And then, after we've done all this, we have to work a lot, actually, to show that this G hat and H hat have all the properties that we've seen before. Okay, I think that's it when it comes to what I want to say about the proof; I know it's kind of vague. So, some concluding remarks; one is missing here. There is a weak variant of the trichotomy which applies in the case when the domain is not a single complete type. The theorem I had for strong type spaces was always for strong types defined on the set of realizations of a single complete type; there is a variant of this which applies when the domain is a bit larger, but it's kind of weaker: we still have the equivalence of smoothness and type-definability, whatever that means exactly. We could also consider smaller sets: we can take a subset of the domain and see what happens to E restricted to this subset, and under reasonable assumptions we still have essentially the same conclusion, so a compact group, a subgroup, and so on and so on. Furthermore, this group G hat, the way I described it, depended on the type p that we chose in the beginning, but actually we can choose it in a way that does not depend on p; there is, in a way, a natural way to choose it. The downside is that it still depends on some choice of a model: we have to choose this ambitious model at some point, and there is no obvious choice of that. Can you show that if you change the model you get different groups? I don't know, but I think it should be pretty easy if you look at something trivial: if you have a theory in which the Galois group is trivial, you can take a rigid model and a non-rigid model, or something like that. And in my thesis I've given a more abstract treatment of this, which allows us to prove all these results and some similar ones as corollaries of something more general, in the sense that it's not model-theoretic. Something that I forgot to write here, and you may have seen some hint of it in what I presented, is that the construction of this group G hat is actually quite concrete, relatively speaking at least, so it can actually be used to compute these objects: we can take a particular theory T and try to see what the Galois group is, exactly, as a group, using this whole construction.
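For orientation, here is the commutative diagram as I reconstruct it from the spoken description, so the exact shape of the arrows should be taken as an assumption rather than a quotation.

\[
\begin{array}{ccccc}
EL\bigl(S_{\bar m}(M)\bigr) & \longrightarrow & & & \hat G\\
\big\downarrow & & & & \big\downarrow\\
S_{\bar m}(M) & \longrightarrow & X_{M} & \longrightarrow & X/E
\end{array}
\]

Here the left vertical arrow is evaluation at \(\operatorname{tp}(\bar m/M)\), the bottom arrows are restriction to the sub-tuple realizing \(p\) followed by \(\operatorname{tp}(a/M)\mapsto[a]_{E}\), the top arrow is the semigroup epimorphism onto \(\hat G\), the right vertical arrow factors through the Galois group acting on \(X/E\), and \(\hat H=\operatorname{Stab}_{\hat G}([a]_{E})\).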
And conceivably it can also be used to somehow understand various aspects of this group, or of strong type spaces, that I have not considered yet. Okay, so that's the end of my talk. Thank you.
In recent work with Krupiński, we showed that strong type spaces can be seen (in a strong sense) as quotients of compact Polish groups, with a number of consequences. I will give a brief account of the argument, as well as describe some applications, such as showing that a non-definable analytic subgroup of a type-definable group has index continuum (and in particular, that an analytic subgroup cannot have countably infinite index).
10.5446/59323 (DOI)
But to be fair, as far as I know there is just one paper in the world that deduces things from metastability, which is the one I'm going to talk about. So anyway, I'm doing metastability in full generality, I guess. Well, there is more than one paper that mentions the word, because there are also papers of mine where I prove that certain things are examples of metastable theories. There is a definition; you're going to see one in five minutes. So this is joint work with Udi, but like many of my papers with Udi, this one has a complicated history, and some of you might have heard Udi talk about such things twelve years ago: there has been a paper of Udi's around since the early 2000s. Yes, that's the same paper that's now finished. What needed finishing? Well, there were mistakes and holes in the proofs, and now this is all corrected. Hopefully. So to be fair, it's joint with Udi, but it's really a lot of Udi's work, and then I came along and tried to clean things up. I also apologize to people from Paris and Berkeley who have probably already heard me talk about such things, but I think there aren't that many around, except also people from LA, I guess. Anyway, I'm going to try to speak about things in a slightly different way from what I usually do: usually I really focus on algebraically closed valued fields, and today I'm going to talk more generally about metastability, but the theorems are essentially going to be the same. Okay, so first let's answer Anand's question and define. Well, I'll define; you will still not know what it is. All right. So let's start with a few definitions. The first one is a stably dominated type. I take p, the type of some a over some C. For the whole talk I fix a theory T which eliminates imaginaries, and I'm going to work in a monster model of that theory. So I take an a and a small C, I look at the type of a over C, and I also take an f which is C-definable, well, pro-C-definable, and defined on p. What does pro-C-definable mean? It means that it might have infinitely many components rather than finitely many. Is f a function? Yes, it's a pro-C-definable function; I forgot the word function. So it has infinitely many outputs; the inputs are realizations of the type, and if a is an infinite tuple there are also infinitely many inputs, and each component is a definable function. Equivalently, it's a function whose graph is pro-definable; it's all the same thing. In all the cases that I'm actually going to look at you can take f definable and forget the pro. So I say that p is stably dominated via f if, first of all, f(a) lives in what's called the stable part St_C, which is the union of all stable, stably embedded sets over C. Stably embedded, I mean; sorry, too many stables around. And, for all sets B of parameters such that B is independent from f(a) over C. So what do I mean here? The independence here is really non-forking in the stable part, so it really takes place in St_C, and what I should really put here is not B but St_C intersected with dcl(B), the trace of B inside the stable part, so that this makes sense. I'll write B anyway.
So yes, this is non-forking. But it's in a stable theory, so I mean there aren't that many options. Yes, the stable part is stable because it's a union of stable sets. Okay, I'll finish the definition and maybe then I can explain a few things. Then the type of B over C, f(a) implies the type of B over C, a. So a type is stably dominated by a function if, whenever you take something independent from the image in the stable part, the type is completely determined by that image in the stable part. Okay, is it too difficult to see things at the bottom of the board? Should I rewrite things at the top of the board, or try to avoid writing at the bottom from now on? What you are saying is that the independence is computed inside the stable part? No, no, what I'm saying is that what I should really have written there is that; that's my definition. What I'm defining here is actually this symbol: what does this symbol mean when B is not in the stable part? What it means is that the stable part of B is independent from f(a). I thought you were working in dependent theories. Yes, but this thing is stable, so it depends what you mean by this symbol: it's non-forking. It's non-forking, yes, and it's also just non-forking because this set here is a bunch of stable, stably embedded sets, so it really is just non-forking. Yes, so maybe I should just have said non-forking. Okay, whatever. Okay, so that's a stably dominated type. The definition is a bit terrible and ugly, but there are good reasons why we consider it, because it really says that the type comes from a stable set, whereas other definitions, for example generic stability, don't really tell you that. Generic stability gives you a lot of properties that are common to stably dominated types and generically stable types, and we'll see that in the theories I consider they're actually equivalent; but the important thing here is that you really know you have a map to the stable part that dominates your type. That is the important part of the definition. Yes, but I'm not in stability theory here; I'm in an unstable theory. Well, it's not metastable yet. Anyway, so now I can define metastability. So T is metastable if two things happen; the first thing is really the important one, and the second is there for technical reasons. The first thing is that for every set of parameters C there exists a B containing C such that, sorry, sorry: a theory is never metastable just like this, it's metastable over gamma, which is a 0-definable set. A set, yes, a 0-definable set; or it could be type-definable, I don't think it changes much in the definition, but here I'm going to take gamma to be a 0-definable set. And so whenever I have a C, I find a B containing C such that, for all a, the type of a over B together with dcl(Ba) intersect gamma is stably dominated. The base is B, yes; I should have said that. Stably dominated by the full description of the stable part, so by any f enumerating St_B intersect dcl(Ba). It's easy to check that if you're stably dominated by some function, you're actually stably dominated by a function that enumerates everything, and this is what stably dominated without reference to a function means. Is the intersection here just with dcl(Ba), or does it include B too? Well, dcl(Ba) contains B; I put B in there.
So yes, this essentially says that you have a set gamma that you want to ignore, and once you add everything that comes from gamma, over that you are stably dominated. Okay. So this notion was invented by, well, the first example was proven by Haskell, Hrushovski and Macpherson, although they never, oh, I forgot number two; that's what you're pointing at. But as I said, number two is there just for technical reasons, so it's much less important: it's just that every type tp(a/C), where C equals acl(C) (remember I have imaginaries in), has a global C-invariant extension. Yes? And in that first condition, the type is stably dominated as a type over the larger set? Okay, larger, yes, the type is now over a larger set. Oh yeah, but you're right, you're right; I was just being silly. So let me also add that gamma is orthogonal to the stable part, because that's really the case you want, which means in particular that the stable part over this bigger set is the same as the stable part over B, because everything you add actually adds nothing to the stable part. This is just for ease of notation; if you don't assume that, you have to put the bigger set down there. Okay. The things you add just come from gamma? Yes, they're all in gamma: it's B union stuff intersected with gamma. Do you have to take the definable closure inside gamma? No, I'm in gamma; well, here I'm really considering gamma to be gamma^eq. You could say that I'm defining T is metastable over gamma^eq. Gamma itself is just a definable set, if you want, but I'm only going to consider cases where gamma equals gamma^eq anyway, so the definition is not going to change. If you prefer, I can define it this way. Here gamma^eq means the eq of the full induced structure on gamma, which is not the same thing, since gamma is not assumed to be stably embedded. Okay. You're happy now? Okay. Just because I'm going to use the term later: such a B is called a metastability basis. No, no: any B such that every type modulo gamma over it is stably dominated is called a metastability basis. What exactly does it mean that f enumerates this intersection? Well, every element in there is a B-definable function of a, so you just take all the functions that enumerate this set: look at the definable closure of Ba and intersect it with the stable part. Is it a tuple? Yes, it's a tuple, you can enumerate it. And that's for all a? Yes, for every tuple a in a model of T. Okay. So this notion was invented for ACVF, and the good thing is that ACVF is indeed metastable; actually the terminology also comes essentially from ACVF, because in ACVF the stable part is everything that's internal to the residue field, gamma is going to be the value group, and the metastability bases are going to be the maximally complete models. We now have a few other examples. For example, the theory of existentially closed valued differential fields where the derivation is monotone (maybe I shouldn't have started with this one), so I should put monotone somewhere: VDF, where the valuation only goes up under the derivation.
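Before going on to the examples, here are the two definitions so far recorded in symbols; this is my own transcription, with St_C the union of all C-definable stable, stably embedded sets.

\[
\operatorname{tp}(a/C)\ \text{stably dominated via } f:\quad
f(a)\in\operatorname{St}_{C},\ \text{and for every } B,\ \
\operatorname{St}_{C}\cap\operatorname{dcl}(B)\ \text{indep.}\ f(a)\ \text{over } C
\;\Longrightarrow\;
\operatorname{tp}(B/Cf(a))\vdash\operatorname{tp}(B/Ca);
\]
\[
T\ \text{metastable over }\Gamma:\quad
\forall C\ \exists B\supseteq C\ \forall a:\ \operatorname{tp}\bigl(a\,/\,B\cup(\operatorname{dcl}(Ba)\cap\Gamma)\bigr)\ \text{is stably dominated},
\]

together with the technical condition that every type over an algebraically closed set has a global invariant extension.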
This theory, VDF, is usually referred to under a different name, but it is also metastable, and you can check that, once again, the stable part is going to be what's internal to the residue field, which is now a model of DCF_0, so characteristic zero, sorry, and gamma is again going to be the value group. Another example is separably closed valued fields of finite imperfection degree; I don't know what happens in infinite imperfection degree. So all the examples are essentially valued fields, as you can see. You can also fabricate other examples: morally, any henselian valued field whose residue field is stable should be metastable, and this requires a proof that I don't think has ever been written. Okay. So now we know what a metastable theory is, and we know a few examples, so now I'm going to start talking about groups. The goal, what I want to explain, is that if you have a metastable theory, this decomposition of objects into stable things over gamma things reappears also for groups. Wait, is there some reason coming from metastability that this should happen? No, there's no reason for that to be true. Well, you could choose gamma to be everything. Okay, yes, for example you could, but then everything is metastable over the full theory, so it's kind of empty. Also over the value group? Yes, I said that already. Also, I think I wanted to mention, but I really want to get going and actually say things about groups rather than just define metastability, that in an NIP metastable theory (we don't know that metastable over something NIP implies NIP; that's not entirely clear), stable domination is actually exactly the same as generic stability, which is also exactly the same as orthogonality to gamma. So stable domination can actually be defined in a much nicer way; but we still can't define metastability without stable domination, so I had to define it anyway. And as I said, it does give you more tools than just generic stability, because you actually get actual maps to the stable part. Okay, so now let's define stably dominated groups. If G is a definable group; everything I'm going to say actually works for pro-definable groups, and actually, if you want to prove the theorem I'm going to state, you need pro-definable groups, but I'm not going to put the word pro anywhere, which is going to be better for everyone, I think. So if G is a definable group and p is a global definable type concentrating on G (it's not definable just yet, but in a second it will be), we say that p is a definable generic if for all g in G of the monster model, g.p, which is just the type of the elements of the form g.a where a is a realization of p, is C-definable for some small C that does not depend on g (or I might have forgotten something important). Equivalently, the orbit of p under the action of this group is small, and p is definable. You prefer g star p? Yes. Well, that's why I said definable generic, so that it's a word. Yes, I agree, but for a type you want strongly definable generic, maybe? Definable f-generic, if you want. Definable f-generic, if you want. Yeah, I know, I am aware.
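The notion just defined, in symbols; a sketch in my notation.

\[
p\in S_{G}(\mathfrak C)\ \text{is a definable (f-)generic}
\iff
p \text{ is definable and } \exists\, C \text{ small such that } g\cdot p \text{ is } C\text{-definable for all } g\in G(\mathfrak C)
\]
\[
\iff
p \text{ is definable and } \{\,g\cdot p : g\in G(\mathfrak C)\,\} \text{ is small}.
\]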
But I find definable f-generic a weird name, because you don't really need forking to define these, so it's strange to have the f around; at some point I called them d-generics. But anyway, it's not important. And secondly, G is a stably dominated group if it has a stably dominated definable generic. Definable f-generic, if you want. So I should have said, and I forgot, that stably dominated types are always definable; I mean, they have a unique (okay, finitely many, depending on stationarity issues over C) acl(C)-definable extension. So the definable here is kind of free. So anyway, a stably dominated group is a group that has a stably dominated type with a small orbit. Okay. The first result I want to mention, which is not the hard result but which starts to give an idea that these notions work well with groups, is that in a stably dominated group the maps that dominate the generics can actually be taken to be a group homomorphism: if G is stably dominated, there exists a stable definable group H and a definable group homomorphism rho from G to H such that any generic of G is stably dominated via rho. Yes, any definable f-generic; but actually you can show that in that case any f-generic is definable and there is a unique orbit of f-generics. And they are all generic, yes. So that's the first sign that things are nice, which is that out of a stably dominated group we do find a stable group that dominates what's happening. But we would like more than that, and in particular we would like, given any group, to be able to decompose it into stuff that comes from gamma and stable stuff. If we look at what's happening in the definition of metastability, whenever we have an element, we have a function to gamma such that in the fibers things are stably dominated. So what we would like, and I'm going to write "wish" because it's very false, is the following: when we have a group G, we would like to be able to find an H, sorry, maybe I should write it this way: G is any definable group, and we would like to find a subgroup H of G such that H is stably dominated and the quotient G/H is gamma-internal. That would be a good group version of the definition of metastability. But it's not true. Gamma is the thing over which I'm metastable: so from now on I suppose T is metastable over gamma. So for all G, I want to find H stably dominated such that, blah, blah, blah. No, that's just to say that it's stably dominated; but I rewrote it anyway. And this isn't completely true, sadly. For example, in ACVF there is an easy counterexample showing that nothing like that could ever happen: take the additive group of the field. In ACVF, the additive group has no gamma-internal quotient, and it's not stably dominated. What is happening is something a bit more subtle, which is that the additive group G_a can be covered by translates of the valuation ring: for every gamma in the value group, you have what I call gamma.O, the set of x such that v(x) is greater than or equal to gamma, and these are stably dominated. Not just translates; are they actually subgroups? It's a multiplicative translate, yes. These form a chain, an increasing union of subgroups of G_a that covers it, such that each of them is stably dominated.
No, I'm defining gamma.O; this is gamma.O: everything with valuation greater than or equal to gamma. It's an additive subgroup, and these sets cover G_a. So now, if we look at our wish, we can't really hope for H to be stably dominated, but we can hope for H to be an object like that, covered by stably dominated things. But this is still false, because if you look at SL_2(K), well, nothing works the way you want, and essentially the best, yes. In SL_2(K), you can find, so in particular the wish would imply that you have a maximal subgroup which is covered by stably dominated subgroups; but in SL_2(K) you can find stably dominated subgroups that are not included in any largest one. For example, SL_2(O) is a stably dominated subgroup, and all its conjugates also are, but they are not included in anything bigger which is stably dominated. So for SL_2(K) there is no hope of anything like this happening, and that's why I am restricting further to abelian groups, because there things are going to happen the way we want. And so for abelian groups we get the result we want, but first I need to define the class of groups that look like this. Yes, I believe that; yes, indeed, if you look at what's happening for SL_2, you can make the following conjecture, which is that instead of having such a nice picture, you find an... I need to define something first, sorry. So first I need to define groups that look like that. G is limit stably dominated if there exists an infinitely definable family (H_gamma), where gamma realizes some type q, a type on Gamma^n over some small C, such that: first, each H_gamma is stably dominated and connected. I never defined what connected means, but here it means that the orbit of the generic is not just small, it's actually a singleton. Two, every stably dominated subgroup H of G is contained in some H_gamma, so the family covers all possible stably dominated subgroups. And also, for technical reasons, we require that the family (H_gamma) is filtered: whenever you have a small collection of the H_gamma, you find a larger one which contains them all. This is just so that the union of the H_gamma, well, okay. Wait, in that case, that's not what I defined: I did not define limit stably dominated, I defined something else, I'm sorry, I was thinking. What I just defined is a limit stably dominated family. Of G. So if G is a definable group, a limit stably dominated family of G is something like that. Actually, yes, and to continue the remark: H, the union over gamma of the H_gamma, which is a subgroup of G, is unique. I defined a stably dominated family, which is this thing, and now I'm saying that whenever I have such a family, if I look at the union of everything, I get a group, because the family is filtered, and this group is unique and does not depend on the family, because of number two.
The group does not depend on the family because of number two, because the family covers every possible stably dominated subgroup. And you can check that it's also infinitely definable. No, that comes out of the fact that it's a union like this. So the theorem I'm going to write now is that if G is abelian, H exists. So H here is infinitely definable; well, it's an infinitely definable family of infinitely definable subgroups, and that's what I mean by an infinitely definable family. The theorem is the following: if G is a definable abelian group, then there exists a limit stably dominated family. What? I mean, since I've assumed that T is metastable over gamma, I'm now working in a theory which is metastable over gamma. Yes, I know; I'm finishing my theorem and then I'm done. So there exists a limit stably dominated family, and in fact you get more, you get what we really wanted from the beginning, which is that the quotient G/H is gamma-internal. Does this depend on H being the limit, and not just on the existence of a family? No, I think this is true as soon as there exists a family: the quotient is internal. And if you add finite-dimensionality hypotheses, like the fact that every definable set in the stable part has finite Morley rank and such things, you can actually get H to be definable, so you can see that H is actually definable and not just type-definable. Is the quotient itself definable? No. So this is a hyperimaginary: it says that you have a map from G into some powers of gamma, and the fibers, well, yes, these are hyperimaginaries; it's a set of hyperimaginaries. But when H is definable, then it's much more reasonable. Okay, so I wanted to say a word about the proof, but clearly there is no more time for that, so I'll stop here. Thank you.
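The theorem just stated, compressed into symbols; a sketch in my notation, with T metastable over Gamma.

\[
G \text{ definable abelian}
\;\Longrightarrow\;
\exists\ \text{a limit stably dominated family } (H_{\gamma})_{\gamma\models q} \text{ in } G,
\quad H=\bigcup_{\gamma}H_{\gamma},
\quad G/H \ \text{is } \Gamma\text{-internal},
\]

and under suitable finite-dimensionality assumptions H can be taken to be definable.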
In their work on the model theory of algebraically closed valued fields, Haskell, Hrushovski and Macpherson developed a notion of stable domination and metastability which tries to capture the idea that in an algebraically closed valued field, numerous behaviors are (generically) controlled by the value group and/or the residue field. In this talk I will explain how (finite rank) metastability can be used to decompose commutative definable groups in terms of stable groups and value-group-internal groups. Time permitting, I will quickly describe the applications of these results to the study of algebraically closed valued fields, in particular the classification of interpretable fields.
10.5446/59325 (DOI)
that says something a little bit weaker than saying that any unstable NIP theory defines, or rather interprets, an infinite linear order, because the order will not actually be definable. But let me just start with a little bit of background. So there's a theorem of Shelah that says that if you have a structure which is NIP and not stable, then some formula has the strict order property: T has SOP. So let me recall what this means: a formula phi(x,y) has the strict order property if there is a sequence of parameters (b_i) such that the sets defined by phi(x,b_i), as you vary the parameter along the sequence, are strictly increasing. Having a formula with the strict order property is equivalent to there being a definable partial order with an infinite chain, because from this you can define an order. (I think pre-order and quasi-order mean the same thing, no? So a pre-order, a quasi-order.) Say that b is less than or equal to b' if the set defined by phi(x,b) is included in the set defined by phi(x,b'). This is of course a transitive relation, and the assumption that phi has SOP exactly tells you that this order has an infinite chain. And conversely, you can figure out what to do. So this gives us a definable partial quasi-order, but if you allow quotients, then you have an interpretable partial order that has an infinite chain. And then there is a question, which I'm not sure to whom it's attributed; I think Shelah certainly raised it, I heard it first from Hrushovski, and I'm sure other people came up with it too: with the same assumption, so if T is NIP and unstable, does it interpret an infinite linear order? Note that you cannot ask for a definable infinite linear order, because if you just take a structure which has an equivalence relation with all classes of size 2, and an order on the quotient, so you have an order but a two-to-one cover of it, then there is no definable order on that structure; but of course there is an interpretable one. Okay, so this is still open. However, what I can prove is the following. The assumption is still the same: T is NIP and unstable. Then there is a finite set A of parameters, a type p over A, and some type-definable relation R such that R defines a strict linear pre-order, or quasi-order, on the realizations of p. Yeah, and here I have "strict", so there's a choice to make: if you take the one that's not strict, it is vee-definable; if you take the strict one, it is type-definable. So some days I mean one, some days the other; here we have a strict, type-definable one. Is that the best possible thing, to have a type-definable one? Yeah, that's why I stated it like this, so that it looks better. Well, we always thought that it's progress. And why is this an interpretable order? Yeah, exactly, so let me explain what I mean by a pre-order, a linear pre-order; it means that it's transitive, and, okay, where is the interpretable order? If we have this, then from this we can define the relation E by saying that x E y if you have neither of the two: you don't have that x is strictly smaller than y, and you don't have that y is strictly smaller than x. Now this is a vee-definable equivalence relation.
And the quotient of p by this equivalence relation is linearly ordered by R; that's what it means to be a linear pre-order. So the quotient is linearly ordered by R. And no, it's more than that: the quotient is infinite. So the quotient is infinite and linearly ordered by R. Is this still... and everything is over A, sorry; this is all over A. So is the statement clear? For me it's somewhat easier to think that you have a vee-definable equivalence relation and then a linear order on the quotient, and if you think of what that means, then either the strict order is type-definable or the non-strict order is vee-definable, as you want. And an immediate corollary is that if T is omega-categorical, then the conjecture is true, because the point is that here A is finite, so if the theory is omega-categorical, everything is definable. Right, so that's also a good question: so p can be any, yes, p can be a 1-type; in fact, it can extend any unstable type that you start with. What I should say is that this looks like a strengthening of Shelah's theorem, but it's a theorem of a very different nature, because that one is a non-structure theorem: it tells you that you have some complexity. With this one, you should really think of it as a structure theorem. It gives you a linear order, which is a very rigid object, and one could hope to use those linear orders to actually build some structure theory of models, some classification theory of models of an NIP theory T. So, some partly speculative applications; well, almost all of them are speculative, only the first one is just partially so. The nicest case is when the theory is omega-categorical, and it's natural to start there. There, we can hope for a classification of NIP finitely homogeneous structures, extending the stable case. The stable case is known by work of a number of people, Lachlan, Cherlin, Hrushovski, Harrington, well, quasi-finiteness and all that; anyway. When I started doing this I gave a talk about it in Paris, and it's in progress, joint work, and I don't really want to talk about it today. It seems that one could hope to really have some coordinatization (I have no idea how to write this word; what's the next letter? I'm very tired), a coordinatization of models by linear orders, with the idea that, if you compare with, let's say, the omega-stable case, strictly minimal sets would be replaced by linear orders. Are these two things connected? Yeah, yeah: that one is the omega-categorical version of this, but here you can hope for much more, because here I don't yet know exactly what it means. Oh, wait, here you're not assuming omega-categorical? No, I'm not assuming omega-categoricity anymore. Well, okay, in the omega-stable case, say, you coordinatize models by looking at dimensions of regular types. Here those dimensions of regular types would be replaced by isomorphism types of certain linear orders: it's no longer going to be a number, it's going to be the isomorphism type of a certain order. And then in the stable case, in very nice cases, you know the model up to isomorphism by knowing those dimensions.
Here you could hope for something similar to what happens in real closed fields; or let's take maybe divisible ordered abelian groups. In divisible ordered abelian groups, if you know... actually, let's take real closed fields, it's a little simpler to state. So what classification could you hope for? If you have a real closed field, you can look at the valuation given by the convex hull of the rationals, and then it becomes a valued field; the residue field is a subfield of the reals, and then you have the value group. If you know the value group and you know the residue field (the residue field lives in some bounded thing anyway), you still don't know the whole field, but there is a maximal one with the given value group and residue field. But actually, if you just fix the value group, then what I said is also true, because the residue field is in any case a subfield of the reals: for a fixed value group, there is a maximal model of RCF that has this value group. By maximal you mean? Maximum under inclusion. And this is the idea: one could hope to have something similar, where here the value group is playing the role of my ordered set. One could hope that you somehow have ordered sets, and they don't give you the isomorphism type, but, say, at least there is a maximal structure with those orders. But I haven't really thought about this, so if this looks very vague, that's normal; this is kind of the idea. One could also look at algebraic structures. This is also very vague, but for example there is this conjecture that if you have an NIP unstable field, maybe it has a definable valuation. Well, if you take such a field, it's going to have a generic type. If the generic type is generically stable, then I don't know what to say. But if the generic type is not generically stable, then some kind of upgraded version of this theorem is going to give you an order such that the generic type goes to minus infinity along that order, and you expect the type to be the generic type at minus infinity along the valuation. So this is going to give you an order such that the generic type goes to minus infinity along it, and then, using the fact that the type is generic, you could hope to somehow create a valuation by looking at stabilizers of convex subsets of the order, or something like this. What is K here? K is an NIP field, and I'm assuming this generic type is not generically stable. It has a generic type? An f-generic type, say. Generic for which group? Well, there are two groups; for example, you can have one type that does for both. But okay, this is speculative; I haven't really had time. It's just to give an idea of why this is likely to be useful for giving us strong structure theorems, hopefully. But there are some big things that need to be figured out first. Anyway. So now what I want to do is give you some idea of how this is proved. And I usually don't like giving proofs, but the reason I'm doing this is that the proof is probably much easier to tell than to read: it's the kind of thing that is not so hard to sketch, but it can be very hard to read, so I think it's worth actually doing it. I'm not going to do the general case; I'm just going to do a special case that will give you the idea of what's going on, and you'll see there's just one idea, it's not so hard.
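Before the sketch of the proof, here is the statement being proved, compressed into symbols; the notation is mine.

\[
T \text{ NIP, unstable}
\;\Longrightarrow\;
\exists\ A \text{ finite},\ p\in S(A),\ R \text{ type-definable over } A,\ \text{a strict linear quasi-order on } p(\mathfrak C),
\]
\[
E(x,y):\iff \neg R(x,y)\wedge\neg R(y,x)\ \text{ is a } \vee\text{-definable equivalence relation, and } p(\mathfrak C)/E \text{ is infinite and linearly ordered by } R.
\]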
But we have to play a little bit with indiscernible sequences, and so I need to recall certain things about indiscernible sequences in NIP theories. So, from now on, assume that my theory is NIP. One of the first things one usually learns is that if you have an indiscernible sequence I, indiscernible over the empty set, and a formula phi(x,b) with some parameter b (the sequence is not necessarily indiscernible over the parameter b), then when you evaluate that formula along the sequence, it cuts out a finite union of convex sets: as x varies along the sequence, there are finitely many convex sets where, say, phi(x,b) is true, then it's false, then it's true, then maybe there is just one point where it becomes false, and then it's true outside, something like this. The point is that there are only finitely many changes of truth value along the sequence. This is usually one of the first things one learns about NIP, and it is equivalent to NIP. And it is strengthened by the so-called shrinking of indiscernibles, which explains what happens when you evaluate not a formula with one variable, but a formula with several variables, each variable having the same size as a tuple of the sequence. And the conclusion is essentially the same: there is an equivalence relation with finitely many convex classes such that the truth value of the formula on a tuple from the sequence, taken in increasing order, only depends on which classes the elements lie in. So let's write it down. I is indiscernible, phi(x_1,...,x_n;b) is any formula; then there is a convex finite equivalence relation E on I, meaning the classes are convex and there are finitely many of them (maybe some classes have just one element, or finitely many), such that if a_1,...,a_n are in I in increasing order, a'_1,...,a'_n are in I in increasing order (okay, I shouldn't have called them b), and a_i is E-equivalent to a'_i for all i (yes, I'm allowed to take two in the same class), then phi(a_1,...,a_n,b) holds if and only if phi(a'_1,...,a'_n,b) holds. And each of those classes is going to be definable, I mean relatively definable, by an instance of phi. And there is a unique coarsest equivalence relation that has this property. So, if you do this: now if you have an indiscernible sequence I and you take, say, a finite tuple b, you can do this for every formula, and every formula is going to give you a finite convex equivalence relation. Now, you should think of I... usually indiscernible sequences, especially in stability theory, are indexed by omega; that's not what I want to think about. I want to think of indiscernible sequences as being indexed by a dense linear order, which is usually very big, but definitely dense.
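The shrinking statement, recorded in symbols; my transcription.

\[
\text{For } I \text{ indiscernible and } \varphi(x_{1},\dots,x_{n};b),\ \text{there is a convex equivalence relation } E \text{ on } I \text{ with finitely many classes such that}
\]
\[
a_{1}<\dots<a_{n},\ a'_{1}<\dots<a'_{n}\ \text{in } I,\ a_{i}\mathrel{E}a'_{i}\ (1\le i\le n)
\;\Longrightarrow\;
\bigl(\models\varphi(a_{1},\dots,a_{n};b)\iff\ \models\varphi(a'_{1},\dots,a'_{n};b)\bigr).
\]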
So you should think of all indiscernible sequences as being indexed by a dense order, so that I don't have problems of consecutive elements, things like that. Then if I have a finite tuple b, I can take every formula phi. For every formula phi, I get an equivalence relation. And then I take the intersection of all those equivalence relations. What I get is an equivalence relation with convex classes on i, which now has maybe two to the parameter. The parameter is fixed. i is fixed, b is fixed. Yeah, thank you. Yeah, b is fixed. But phi varies. So now what I get is I get an equivalence relation. So we get convex equivalence relation eb. So it depends on b on i with at most maybe two to the t classes. Such that what? Well, such that the classes, there might be some finite classes. But the infinite classes are mutually indiscernible. And actually also over the finite classes. So maybe older classes, but the finite ones. So the infinite classes are mutually indiscernible. What does it mean? Over b. Yeah, thank you. They're mutually indiscernible over b. What does it mean to be mutually indiscernible? It means if you take two of them, each one is indiscernible over b and the other one. Or well, if you take any number of them, each one is indiscernible over b and all the other ones. OK, so maybe you have to think a moment why this gives you that. And this is really what I prefer to think about. So the way I think about this, you have this sequence. Ah, it's indiscernible, so you don't see any. It's the same everywhere. If you add a finite parameter b, then the sequence is cut into a bounded number of pieces. And those pieces are again mutually indiscernible sequences. OK. So now there's a very nice situation is when you only have finitely many such classes. And in particular, if you only have two such classes. And so there's something that measures that, which is called the DP rank. And I guess I'm only going to define it for finitely many. So in the over categorical case, you have to define it. In the omega categorical case, in the finitely homogeneous case, it will be finite. No, otherwise, no. No. Because they can still be infinitely many formulas with more and more variables. OK. So the DP rank is going to bound the number of classes that you can have. So say that the DP rank of p is at most n. Let's do it only for an integer. If for any indiscernible i and any b realizing p, well, if you do this, you only need to cut into at most n classes so as to get this. So it's not completely true. Well, first I guess it should be n plus 1. And yeah, it's not completely true. OK. What? Yeah. OK. OK. OK. So I'll get back to this. I'll finish this. I want to look a bit more closely at what happens here. So this, I mean, some of the whole idea of the proof rests in understanding the minimal situation. You have an indiscernible sequence, and you have a topol b. The topol b breaks the sequence into many pieces. Let's look at the simplest thing it could do. Well, there are actually two simplest things that it could do. One thing is that i breaks into two infinite pieces over b. So maybe over b, i breaks at some point into two infinite pieces. And imagine that there's no limit point on either side. I mean, there's no end point on either side. And there's some formula with parameter in b that's true here and false here. And that's it. The two pieces are mutually indiscernible over b. OK. So this is the first simplest situation. But there's another one that could happen. 
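To pin down the dp-rank bound mentioned just above before continuing with the two minimal situations, here is one standard formulation (not the speaker's exact one): $\operatorname{dp\text{-}rk}(p) \le n$ if for every $b \models p$ and every $n+1$ sequences $I_1, \dots, I_{n+1}$ that are mutually indiscernible over the base, at least one $I_t$ remains indiscernible over the base together with $b$. In the picture above, this says exactly that a single realization of $p$ can "break" at most $n$ of the sequences.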
It could be that there's just one point that behaves badly. And that if you take off that point, the sequence that remains is indiscernible over b. And the fact that there was those two minimal situations is the main observation behind the definition of distality, actually. So you should think of this situation as being stable like. Because what happens in a stable situation? If the theory is stable, the sequence is totally indiscernible. And you have this stronger property that if you get a finite set, you can just remove a bounded set from the sequence. And whatever is left is indiscernible. So when it happens that you can take a point and you remove it, and the sequence i is indiscernible over b, let's think of this as a stable like situation. And when the sequence is cut in two and there's some formula that's true, below, and false above, then this is like an order like situation. So distal theory, it's not important for this talk, but the distal theory is a theory where this never happens. When you never have this situation, you only have this one. So now if you want to, so why was I in trouble here? Because the number of classes is not the correct number you want to look at. If you have your sequence and you have a parameter b, and let's say it cuts it into finitely many classes. But there could be cuts that are order like cuts, where some formula changes truth value, and then there could be isolated points that you need to take off. And what you want to count are really the number of those isolated points and cuts. This is the relevant number. So you see a point gives you really three classes, but I want to think of it as just two. I want to think of this as one thing that happens in the sequence. And if you count that, then the Dp rank is good. And any b, they are at most n cuts in i over b. And what I mean by a cut is either one of those two situations. So either there's a formula that changes truth value, or there's an element I need to take away. Well, this doesn't happen to consecutive because my sequences are dense. So then the Dp rank is at least, oh, sorry. It's n minus 1. So if it's 1, no, it's good, actually. So the Dp rank would be 2. So it's known, so this is proved by Tai of an Alex, that the Dp rank is additive. Sub-additive, yeah. Thank you. I hope I'll write it correctly. Looks correct. So the Dp rank of a topol, AB of the type of a topol, is bounded by the sum of the Dp rank, we should say rank, the Dp rank of a over a, and the Dp rank of b over a. So something you would expect. It's not so obvious to prove, but it's true. And of course, there's one natural thing to do with the picture that I drew here is to split the Dp rank into two parts, just counting the number of, you could just count how many of those things you can have, and you can count how many of those you can have. And this does work. It does give you two notions of rank, which also sub-additive, each separately. I'm not going to, in this talk, I'm not going to use them because what I'm going to do right now is restrict to the case where the Dp rank is 1. So I was willingly being vague about that. They're cuts given by a formula with one variable and parameters in b plus maybe other parameters from the sequence itself. That accounts for the formulas with more than one variable. So the correct way would be to define it like that. But then, if you'd have to use the sequence, you could get arbitrary many funny things going on. 
Yeah, yeah, but you're only allowed to cut, yeah, but still, it's going to be bounded. As long as you don't cut on one of the parameters themselves. But let me tell you what the Dp rank 1 case is, and that will be. So from now on, we restrict to the case where the Dp rank of any one type, so the Dp rank of the structure is 1, which means the Dp rank of any one type is 1. And then what does that mean? It means if I take an indiscernible sequence i, and I take now a single ton a, then either the sequence stays indiscernible over a, or there's a cut, and I get two and the two infinite pieces on the left and on the right are mutually indiscernible over a, or there's one point that I need to take off, and when I take it off, the remaining thing is indiscernible over a. OK, and that's a precise statement. This is equivalent to being Dp rank 1. So from now on, I'm going to explain the proof in the Dp rank 1 case, which still shows the idea, but simplifies the number of technical things. And then one has to understand how to restrict to that case. If you could just find the type of Dp rank 1, you'd be good, but that's not always possible, so you have to do more complicated things, but this I won't explain. OK, so the first thing is we want to be in this situation. This is the situation that's order-like. But we can, because if this situation always happens when you take any indiscernible sequence and any single tone, then the theory is stable. So if the theory is unstable, you have this situation happens at least once. If this always happens for every single tone and every indiscernible sequence, then the theory is stable. Because stability is restricted to formula with one variable, and you can test stability by evaluating. So if T is unstable, it is some I and some single tone, A, such that and a formula. So let's write it all out. And there's a formula. OK, let's call it B. There's a formula for your FxB. And here I'm going to assume there's no other parameter in phi. So to answer that, maybe I would need to add things to the base to ensure that. But everything I'm doing is invariant on the adding finitely many constants. So I can certainly get down to this situation. There is I, B, and a formula phi of xB, such that phi holds on the left on an infinite initial segment, and the negation of phi holds on the final segment of the sequence. So I'm in this situation. Just to emphasize that it could be tuples, that the sequence is not B is a single tone, but the sequence might not be. But then I'm going to forget about the bar from now on. So is this OK? You can forget about almost everything and just so that we are done to this situation. We have this indiscernible sequence, and we have one formula that's true, that's not a formula. And we have one formula that's true, then false. And also we have this DP minimal that will play an important role later on. We want to get an order from that. So there are two natural things to do. One natural thing leads to Scheller's theorem that there's a definable partial order. And what is Scheller's argument in this case? It's saying, look at the type. So this is an indiscernible sequence. I can look at the type, let q be the type of an element that fits here in the sequence. So that if you add it here, you get an indiscernible sequence. Let's look at that type q over the sequence. So q over the sequence, which is the type of an extra element in this cut here that fits in here. And you can define naturally a partial order. 
Well, you have a formula that has the strict order property. Yeah, I didn't need to define q actually at this stage. So if you take a1 and a2 here, it is not possible. OK, let's define p to be the type of b over the sequence. This maybe will be the type more important for me. So it's not possible to have a b prime that realizes p, such that negation of phi of a1 b prime holds and phi of a2 b prime holds. If a1, a2 are two elements such that when you add them both, you get an indiscernible sequence. Why is this not possible? Because look at what the formula phi does. It would be true here. Then it would be false on a1. Then it would be true on a2. And then it would be false again. But that means it changes truth value. That would contradict the p minimality. It changes truth value too much. So this is not possible. So you assume that an a1, a2 preserves this element. Yeah, yeah, I'm assuming that both. So maybe I shouldn't write that. I'm not assuming that not only that they both satisfy q, but that together they fit in the sequence. And then it's inconsistent to have this, which is exactly saying that phi of a1 y and there's a phi of a2 y and there's an inclusion between them, which you probably know what it is. So as a2 goes there, this is bigger. So this should be bigger. And the strict comes from the existence of b. I can find a point that changes truth value between them by indiscernible. So this is how I get the strict inclusion here. And the inclusion is just this statement. The y steps compete, right? Yeah, I think it's correct. OK, so this is the proof of Scherlach's theorem in this easier dp minimal case. Now to get the linear order, we're actually going to look at it a little bit differently. So Scherlach's theorem gives us the partial order on realizations of q on elements that fit in the sequence. I'm going to construct an order in realizations of p. So p is the type of the parameter b. So almost done. There's a little bit more. Just a little bit. Yeah, I know. I approach us for that. OK, so what is the natural way to try to define an order on realizations of p? So if I have b and b prime, if I look at phi of xb, it's true here, false here. Phi of xb prime is true here, false here. That's because they satisfy the same type of the sequence. But now what I can do is I can add another piece of the sequence in the middle. And then phi of xb, let's see what happens. Phi of xb is true here. So let's look at bb prime. So phi of xb is true here. It's false here. And well, it's the same argument I had over there. By dp minimality, there's one point where it changes truth value, where it's from true to false. Be a cut that is sort of indexed by b. Now there's also one point where phi of xb prime changes truth value. A cut, a cut. Sorry, a cut. There's a cut in the sequence where the formula phi of xb prime changes truth value. And now if this happens, I want to think of b as being smaller than b prime. OK, now this doesn't quite work. But let's see what does work, what we can get. So first we can define, well, there's a natural equivalence relation that I won't actually need, but that will turn out to be this equivalence relation. Which is, I can say b and b prime are equivalent. If no matter what I put here, the cut is the same. But you're assuming b is the same type of an i. I'm assuming they have the same type of an i, yeah. P is the type of an i. P is the type of an i. P is the type of an i. It's on the right place. P is the type of b over i. 
So P knows that the formula is true on the left and false on the right. So there's a natural equivalence relation if the cut, so let's call this the cut defined by b, and this is the cut defined by b prime. So the natural equivalence relation is if cut b equals cut b prime for all, say j, let's call j, whenever I put a piece j in the sequence, the cuts are the same. That's an equivalence relation. Inj, the cut in whatever. Inj, yeah, in this thing. OK, now my strict order, what I want to be a strict order relation, is going to be r of a b prime. And here you have to be careful if you cannot have the cut of b prime less than the cut of b. So if we never have cut of b prime less than cut of b. So no matter how you put j, maybe the cut of b and b prime coincide. That happens in particular if you take j equals is empty. So that's going to happen for sure sometimes. And it could be that b prime can be separated, say, but you cannot have the opposite. So wait, is that what I want to write? No, this is the large one. Yeah, this is the large. Sorry, so this is what is going to be the large relation, the non-strict. And note that this is v definable because it's saying you do not have something. So the complement is there is something. So this is v definable. It's obvious that it's infinite. I mean, that's probably the typical problem. Wait, that's the question. No, no, the fact that the quotient is infinite is obvious. Because it's indiscernible, so whenever I put a j, there is some b that has a cut here, there's some b that has a cut here, there's some b that has a cut here. And those all give different classes. So yeah, so what is obvious? That the quotient by a is infinite. That's obvious. The other thing that's obvious, a little bit less so, but you just have to write 1, 9, is that this is a transitive. And this is just one line. If you write it, maybe I don't have time to write the line. Yeah? Yeah, yeah. What? Yeah, yeah. What? Yeah, yeah. What? What do you want to write? Oh, I have other things to write than the line. Yeah, yeah, what is the question? The formula is always the same phi. Formula is always the same phi. We just vary the. Yes. The formula phi is fixed once and for all. OK, so I claim that this is very easy to check just from the definition. So what is not clear is linearity. So what does linearity mean? It means our two points comparable. So what does it mean that two points are not comparable? It would mean that you have b and b prime. And there is some j such that the cut of b lands below the cut of b prime. But then there's also some other j prime such that it's the opposite, the cut of b prime is below the cut of b. And if you have that, well, you can't compare them. But now, so this is sort of the crux of the proof, is to observe that if this happens too much, we're going to contradict an IP. And what does too much mean? So this might happen. At this stage, it could very well be that this is not yet linear. So if it's linear, we're done. We have all the properties. If it's not linear, it means there are two realizations of p that can be switched by just changing the thing in the middle. So then what we're going to do is we're going to pick one of those realizations, add it to the base, increase the sequence, and then iterate and work here. So you don't need to worry about this. Eventually, if this process keeps going, you will get a sequence like this of points which are all realizations of p. 
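To fix notation for what follows, the two relations just introduced can be written as (writing $\operatorname{cut}_J(b)$ for the cut that $\varphi(x; b)$ induces on the sequence once a piece $J$ has been inserted in the middle):
$$E(b, b') \iff \operatorname{cut}_J(b) = \operatorname{cut}_J(b') \text{ for every } J, \qquad b \le b' \iff \text{there is no } J \text{ with } \operatorname{cut}_J(b') < \operatorname{cut}_J(b),$$
where $b, b'$ range over realizations of $p$ and $J$ over the sequences that can be inserted at the cut. Reflexivity and transitivity of $\le$ follow directly from the definition; linearity is what remains to be checked.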
You will get an indiscernible sequence with the property that, well, if you look at phi x bi, it's true up to this cut and then false. And with also the extra property that any two consecutive ones, you can change what's here, put another sequence, so as to switch the two. And now, what we want to do is just show that then we can rearrange this middle piece so that to get any given order on the base. And why can we do it? So this is sort of the main technical thing, is to understand why this works. And the reason is because of this DP rank. So I'll explain it. What do you need to do? What I want to do now is I want to, OK, so maybe I should say the conclusion to the end. So now what I claim is that by changing the sequence in this middle piece, I can arrange to get any permutation of the cuts that I want. Why does this contradict an IP? Because then the formula phi of xy would have IP. Because now if I just take a point here, it's related by phi to everything to the left and to nothing to the right. But everything to the left can be just any subset of the bi's that I want. So if I can do that, I'm done. Now we have to understand why we can do that. And the reason is, so this additivity of DP rank tells us that for the DP rank of, I think I'm only going to need it for 2, the DP rank of any two points is at most 2. Because DP rank of each is 1. So it's actually exactly 2. Because if I take any 2 of, sorry, those are b's. If I take any bi and bj, I do have two cuts that they induce on this sequence. So the main, OK, so now I'm going to do this permutation inductively by switching consecutive points. So I'm going to explain one switch. So say you are at a different situation. You have here some bi. So let's do it just at the start. I want to exchange b3 and b4. What I know by construction is that if I forget about all the other points, I can erase this, put another sequence that exchanges them. But I have no idea a priori what happens over the other points. Maybe the other points now, the sequence changes. Maybe the type has changed. But DP rank will tell me that this cannot actually happen. Because now if I take, so now let's take b3 and here I have b5. So in the new sequence, if I look at what happens over b3, b5, if I look at the new sequence, it's here. I know two places where the top of b3, b5 cuts the sequence, here and here. So there cannot be any other. So what this means is that this whole left piece and this whole right piece have to be mutually indiscernible over b3, b5. And what this means is that this piece that I've added, b3, b5, they don't see anything here. They don't notice that anything has changed. And therefore I can just do the same thing here and replace them and move this here and iterate. And it contradicts an IP. And that's it. Thank you. Thank you. Thank you.
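To summarize the last step in standard terms: $\varphi(x; y)$ has IP if there are $(b_i)_{i<\omega}$ and $(c_S)_{S \subseteq \omega}$ with $\models \varphi(c_S; b_i) \iff i \in S$. In the configuration above, for a point $c$ inserted in the middle piece, $\models \varphi(c; b_i)$ holds exactly when the cut of $b_i$ lies to the right of $c$; since the cuts of the $b_i$ can be rearranged into any prescribed order, every subset of the $b_i$'s is realized as "the ones whose cut lies to the right of $c$", so $\varphi$ would have IP, contradicting NIP.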
A longstanding open question asks whether an unstable NIP theory interprets an infinite linear order. I will present a construction giving a type-definable linear (quasi-)order, thus partially answering this question.
10.5446/59327 (DOI)
Byunghan Kim, Alexei Kolesnikov, and Junguk Lee. I will start from some motivations. So this is about a generalization, or a variant, of the notion of the Lascar Galois group, for a type: we are only looking at what goes on inside the set of realizations of a type. There are actually a couple of natural ways to define such localized Galois groups. We will focus mostly on one of them, which we denote by Gal 1L of p; I will also discuss briefly the other possible definitions and the advantages and disadvantages of each of them. As to motivations, there are two of them. First, the description of the so-called first homology group, H1 of p, which is a way to measure how far p is from having the free amalgamation property. So there is a group H1 of p which measures how far p is from having free amalgamation. I'm not going to speak about this, so don't worry if you don't really know what it is — free amalgamation is essentially the same as the independence theorem; most of you know what it is. The description here is for p which is a strong type, so p is a complete type over an algebraically closed set, and we also assume here that it is a type of an algebraically closed set, so a type of a tuple a where a is algebraically closed. Under these assumptions, the description tells us that H1 of p is the quotient of our localized Galois group of p by the commutator subgroup together with all the stabilizers of representatives of elements of this group. So this group is already a quotient, and then we look at representatives and take the classes of all automorphisms which have a fixed point: so the commutator subgroup times the group generated by all the stabilizers. Stabilizers of x, for x satisfying p? Yeah. But this is just motivation, so don't worry about it — this is the last time I mention H1, unless somebody has more questions; it is a kind of different subject. Secondly, in a paper by Krupiński, Newelski, and Simon — Krzysztof, Ludomir, and Pierre — they consider this group, and also the analogous group for KP types instead of Lascar types. They consider an epimorphism from the Ellis group of the flow, given by the automorphism group of a monster model acting on the space of global types in variables of some bounded length alpha, onto Gal 1L of p (or the KP version), in an attempt to understand this flow. This is one of the main objects of interest in that paper, and it may also be seen as a variant of some epimorphisms considered earlier for different flows — but again, this is just for motivation. So now let me start with some more systematic material about those groups. First, the original Lascar Galois group occurred already in a couple of talks, but let me just remind you of the definition. For any theory, we consider the Galois group of T, which is the quotient of the group of all automorphisms of a monster model by the strong automorphisms; these form, by definition, the group generated by the automorphisms fixing some small elementary submodel of C pointwise — so f in Aut(C) such that there is a model M with f restricted to M the identity. And we want to define something similar, but we don't want to look at automorphisms of the whole monster model, just automorphisms of the set of realizations of a type. So we only look at restrictions to the set of realizations of a type, and then we want to quotient out by something which we think of as the strong automorphisms of this set.
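For reference, the global Lascar Galois group recalled above is, in symbols (a standard formulation):
$$\operatorname{Gal}_L(T) \;=\; \operatorname{Aut}(\mathfrak{C})\,/\,\operatorname{Autf}_L(\mathfrak{C}), \qquad \operatorname{Autf}_L(\mathfrak{C}) \;=\; \big\langle\, \operatorname{Aut}(\mathfrak{C}/M) \;:\; M \preceq \mathfrak{C} \text{ small} \,\big\rangle,$$
and this quotient does not depend on the choice of the monster model $\mathfrak{C}$.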
And here you may actually have a couple of ideas how to define it. So we will use a couple of notations for different variants of this notion. Let me just remind you here that this is not visible immediately from the definition, but this does not depend on the monster model c. And definitely one of the properties that we want to have with our localized groups is also that it does not depend on the monster model. And actually not with all candidates for the definition, this will be clear. So consider for the moment just a partial type, but later in the talk it would be almost all the time a complete strong type. So consider a partial type p. Let's say over the empty set. It doesn't change anything, but for simplicity. And yeah, now, OK. So general question, what should Galois L of p be? So some kind of answer is already given by those motivations because, well, this particular notion Galois 1 of p fits the best probably to this context, at least to this one. But I guess also over there. Just look at the good information of having the same last strong type of electric restrict utilization. Yeah, this is one of the options. Yeah, of course, this is one of the options. Yeah, OK. So if you define this like this, one of the problems is that you don't know whether, well, at least we don't know whether it's independent from the choice of monster model. The last type? Sorry? Distinction to one single last type? No, one type. No, no, one. Why last type? No, no. No, no. OK. So at the moment we just restrict to any partial type, anything. It's OK. Well, OK. So did I say, yeah, a high p is over the empty set for the moment. OK, Galois 1 of p is automorphisms of p of c. Well, these are just restrictions to this set, restrictions of automorphism of the monster model c. Modulo the group of automorphisms of p of c such that for every realization of p, a is Laskar equivalent to f of a. So just the automorphism that preserves Laskar type of single realizations of p. OK. So this is actually, OK, this is a special case of more general definition where here you put lambda, but this seems to be most natural. Sorry. What is the point of time? Yeah, one is just that you take realization of p, not the tuple of realizations of p. OK, it should be clear in like two minutes. So second one, we don't want only realizations, but we want arbitrary long tuples, even if the tuple enumerating all that set. So Galois fix, I think, Galois fix of p is again automorphisms, quotient out by those automorphisms that let's call this tuple c. So c is Laskar equivalent to f of c, where c is enumeration fix. This is fix, where c is an enumeration of the set of realizations of p. And it turns out that actually it's enough to look at tuples of length omega, which is not. The theory is comfortable. No, the theory is arbitrary. You don't really need this. So if you look at tuples of length lambda by induction on lambda, you can actually see that for omega it's already implies for n lambda, the preservation of the Laskar type. So this is equal to, this is isomorphic, well, this is literally equal. It's not to say you can find out the subtree volume. Only for g compact, not for, here t is arbitrary. It is enough to say for countable for any theory. 
So this is the same as automorphisms of p of c, modulo f, OK, I will call it here out f l omega of p of c, where in general out f l lambda of p of c is the set of, is the question of the group of automorphisms of p of c by those who fix the type of, oh sorry, this is just a set, of course, yes, thank you. Just a set of those f which fix the type, Laskar type of tuples of length lambda. So for each a in p of c to lambda, a is Laskar equivalent to f of a. And yeah, interesting cases would be mostly lambda equal to one or lambda equal to omega because then we get this group. We're preserving the type of, the Laskar type of all the tuples. So this, this stabilizes in future very lambda, the stabilizer omega. That's right. Sorry? The, the, the, the lambda and out f l omega are the same with lambda, isn't it? Yeah, yeah, exactly. Yes, yes. So this is interesting only for like countable lambda and, well, it's probably not very natural to consider this for lambda equal to 35. So one and, and omega are the most interesting cases. Okay. And third one. So this is of course smaller group than this, but we can look at even smaller group. So Galois res from restriction L of p is the quotient again of automorphisms by those automorphisms that are restrictions of strong automorphisms. So this may actually seem at the first glance as most natural one, but, but there are some problems with it. Maybe these problems can be solved, but, but we're not able to. So f, there is f tilde, which is automorphism of the whole monster model such that f tilde restricted to the realizations of p is f. Okay. Oh yeah, of course, out f. Thank you. Well, like this is the, the quotient is the largest. Yes. So, so this projects onto Galois, omega or Galois fix of p and this projects onto Galois one of p and we know that this and this in general are not equal, but we don't know about this. So here do we have, do we have isomorphism in general here? No. For you, just every automorphism of PFC come from from automorphism. From automorphism, yes, it's just restriction of the restriction of global automorphism. Okay. So first notice, obvious thing that P is the last car strong type if and only if Gal 1 of P is trivial. So which is kind of desirable things to us. We don't really want to study Galois groups of, of last car strong types, which could be not trivial if we use other definition, the second or the third one. Okay. Second thing. So let's say proposition Galois lambda of P does not depend on the choice of master model. Okay. And we'll prove it. So here is the observation for any lambda for any small lambda. Yeah, there is one. Okay. Here is lambda. So the quotient Gal lambda. Yeah, this is a tuple. This is Cartesian product. So this is the restriction of lambda. Can lambda be larger than separation of the mass? Well, like in the definition, you can take anything you want, but here lambda, well, lambda is small, but like if you take huge lambda, it's still the same as Gal omega. So the point is for omega and finite lambda. Okay. Yeah, so here is this automorphism module of lambda. Okay. Yeah, no problem. Questions are welcome. So okay. So yeah, the proof is easy, but I will, yeah, I will present it to, to let you see where is the problem with, with the definition Gal L res, the one coming from restriction, which also may seem as a natural one, but I don't know how to show that for these definitions independent from the master model. Okay. So, so of course we can just, any two models we can embed in one bigger. 
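For reference, the three localized quotients introduced above can be written as follows (in the notation of the talk, as stated there). Writing $\operatorname{Aut}(p(\mathfrak{C}))$ for the group of restrictions to $p(\mathfrak{C})$ of automorphisms of $\mathfrak{C}$, and
$$\operatorname{Autf}^{\lambda}_{L}(p(\mathfrak{C})) = \{ f \in \operatorname{Aut}(p(\mathfrak{C})) : \bar a \equiv_{L} f(\bar a) \text{ for all } \bar a \in p(\mathfrak{C})^{\lambda} \},$$
one sets
$$\operatorname{Gal}^{1}_{L}(p) = \operatorname{Aut}(p(\mathfrak{C}))/\operatorname{Autf}^{1}_{L}(p(\mathfrak{C})), \qquad \operatorname{Gal}^{\mathrm{fix}}_{L}(p) = \operatorname{Aut}(p(\mathfrak{C}))/\operatorname{Autf}^{\omega}_{L}(p(\mathfrak{C})),$$
$$\operatorname{Gal}^{\mathrm{res}}_{L}(p) = \operatorname{Aut}(p(\mathfrak{C}))\,/\,\{\tilde f\restriction p(\mathfrak{C}) : \tilde f \in \operatorname{Autf}_{L}(\mathfrak{C})\},$$
with natural surjections $\operatorname{Gal}^{\mathrm{res}}_{L}(p) \twoheadrightarrow \operatorname{Gal}^{\mathrm{fix}}_{L}(p) \twoheadrightarrow \operatorname{Gal}^{1}_{L}(p)$.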
So it's enough to show for one model, one master model and another much bigger than it. So fix master model C and the bigger master model. So C prime will be C plus saturated and strongly homogenous. Okay and we just write the isomorphism. So okay, maybe picture C prime C and then our set of realizations of P. We start from something here, which comes, which comes as the restriction of automorphism of this model, smaller master model. So fix F, which is equal say, or maybe just, okay, fix F, automorphism of model C and we want to put the value on the restriction of this. So phi goes from Galois lambda of P to Galois, P, but here we've index C with respect to C and here we respect to C prime, which means of course that it's computed in C prime. And we give it in the following way, just extend F in any way to automorphism of C. So there is automorphism, now let's say F tilde of C prime such that F tilde restricted to C equals F. Well F is automorphism of C, okay, we want to put the value on P of C, so we need to check in the end, our definition is correct when we use this F, but the F is on C. Okay, so put phi of the class of F restricted to P of C to be the class of F tilde restricted to P of C prime. And now why phi is well defined? Okay, well defined, so take two functions, two automorphism of C, which have the same restriction here. So suppose F1 restricted to P of C is equal to F2, well the class, so this class is the same, F2 restricted to P of C. Okay so in other words F1, F2 inverse restricted to P of C is Laskar strong in this lambda sense, so F lambda L of P of C. Okay so it preserves the Laskar type of tuples of length lambda, and now we want to know that if we take extensions G1 extending F1, G2 extending F2, then G1, and these are automorphisms of the big monster model, so automorphisms of C prime, we want to know that the restriction to P preserves lambda, the Laskar type of lambda tuples. Okay we want that G1, G2 inverse is in out FL lambda of P of C, so C prime, yes C prime, and then, yeah yeah of course, of course, thanks. 
So take a tuple like this: take a, which is in P of C prime to the lambda. Now, lambda is small — well, formally it need not be small, but without loss of generality we can assume lambda is countable, because for big lambda it's the same as omega. So we take this, and then, since lambda is small, there is a prime in C, the small monster model, which is Lascar equivalent to a. And now everything is easy: G1 G2 inverse of a is Lascar equivalent to G1 G2 inverse of a prime, because automorphisms preserve Lascar equivalence — so we start from a and pass to a prime. Now a prime is in the small model, so G1 G2 inverse of a prime is the same as F1 F2 inverse of a prime; but F1 F2 inverse was a strong automorphism in our sense by assumption, so this is Lascar equivalent to a prime, which is Lascar equivalent to a. So this shows that phi is well defined. By standard arguments phi is onto — this is no problem to check — and for very obvious reasons it's one-to-one: if we get something trivial after extending, of course we were trivial before extending. For phi being onto, let me just draw a picture, because it's a very standard thing: we have C, we have C prime, and we have some automorphism F tilde of C prime; we take a small submodel M of C; it is sent somewhere by F tilde, but we can also find a copy of that image, say M prime, inside C, so we just compose with an automorphism sending the image to this copy, and then you are done: phi of the class of the restriction of the resulting automorphism of C is the class of F tilde restricted to P of C prime. So this step is exactly the same as for the usual Lascar Galois group; it is the previous step, that phi is well defined, which requires the argument above, and it's not so clear how to make it work with the definition Gal-res: there we don't work with a small tuple, and we cannot find a prime as we did here. In fact, you can easily see that if you just don't care and try to define this morphism in the same way for Gal-res, it doesn't work in general: you may have some other type which doesn't have anything to do with p, and then you don't have this canonical extension — there might be two automorphisms which are trivial here, but one of them does something on another type. So this is not so clear. So in the rest of this talk I will talk about the first Galois group, Gal 1 of p, and it will actually be dedicated to the question of how far Gal of the type of a can be from Gal of the type of acl of a. Why do we care about this? Here the types are over the empty set, where we assume dcl of the empty set equals acl of the empty set — we may name parameters — so all our types will be strong types. And then, in the description I gave — this is just a motivation, you don't need to focus on it — the first motivation was a description of the first homology group which worked only for algebraically closed tuples. So to understand the homology group for an arbitrary tuple, in addition to knowing that H1 of the algebraic closure is described by some quotient of the Galois group, we also need to know how far this is from that. And the situation is not as good as we would like: there will be some special cases, some positive observations, but there will also be examples which are negative.
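To record the well-definedness computation from the proof above in one line (a summary in symbols):
$$G_1G_2^{-1}(\bar a) \;\equiv_L\; G_1G_2^{-1}(\bar a') \;=\; F_1F_2^{-1}(\bar a') \;\equiv_L\; \bar a' \;\equiv_L\; \bar a,$$
where $\bar a \in p(\mathfrak{C}')^{\lambda}$ and $\bar a' \in p(\mathfrak{C})^{\lambda}$ is chosen Lascar equivalent to $\bar a$ (possible since $\lambda$ is small); the first step uses that automorphisms preserve Lascar equivalence, the equality uses that the $G_i$ extend the $F_i$ and that $\bar a'$ lies in $\mathfrak{C}$, and the second Lascar equivalence uses the assumption that $F_1F_2^{-1}$ preserves Lascar types of $\lambda$-tuples of realizations of $p$.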
Okay, so to start with, we have that in a G-compact theory — if T is G-compact — if the type of a is a Lascar strong type, then the type of acl of a is also a Lascar strong type. And this fails if T is not G-compact. Can you remind me what G-compact means? Yes: it means that KP equivalence is the same as Lascar equivalence — KP types are the same as Lascar types. I will just say that the KP type is given by the smallest bounded type-definable equivalence relation on the monster model, and Lascar equivalence is the smallest bounded invariant one. Yes — G-compactness can be over any set, but here we just work over the empty set. So this fails if T is not G-compact; there is an example — I think today I will not have time to show it, but it's not a super complicated one. And one can translate this; in other words, we may see this as: if Gal 1L of p is trivial, where p is the type of a, then Gal 1L of the type of acl of a is trivial. In other words, in the case where this group is trivial, the two are just the same, and you may ask whether this is true in general. So the question is: is Gal 1L of tp(a) isomorphic to Gal 1L of tp(acl(a)), for G-compact T? If we don't assume G-compactness, of course, this fails badly already under the assumption that the first group is trivial, so it doesn't make sense to ask about it without G-compactness. But in the G-compact case it's also not true in general, so the situation is quite complicated. Example: consider the structure where we have omega many circles, say S^1 with index k, each with its circular order — call it R_k — and rotation maps g_{1/n,k}, where g_{1/n,k} is the rotation of S^1_k by 1/n of a full turn, say 2 pi / n radians. Here k just indexes the circles — so we have the first circle, the second, and so on — and we add double covers between them, so there will be pi_1, pi_2, and so on: pi_k for all k. What are those pi? Let me just write it: pi_k is the natural double cover of S^1 by S^1 — in terms of complex numbers, just the squaring map. The covers are not quite compatible with the rotations, and that's why the algebraic closure keeps growing: we start with a here, then those two points, c_0 and c_1, are in acl of a, and then those four points as well. The important thing is that in each component we have something from acl of a, and thanks to this, the Galois group of the type of acl of a is actually the Galois group of the whole structure, and this is the inverse limit of the double covers of the circles (one-dimensional tori). Is that for all n? For every k we take all the n's, all the rotations — but pi_k is just a single function, we take just one cover. And then the inverse limit over k: this is R modulo 2^k Z with the natural projections, and this is not isomorphic to the circle, which is Gal 1L of tp(a). I will not go into the details here, because I want to state at least one positive result. Is it G-compact? Yes, it is G-compact.
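In symbols, the comparison the example is making is roughly the following (a reconstruction of the computation being described, in standard notation):
$$\operatorname{Gal}^1_L(\operatorname{tp}(a)) \;\cong\; \mathbb{R}/\mathbb{Z}, \qquad \operatorname{Gal}^1_L(\operatorname{tp}(\operatorname{acl}(a))) \;\cong\; \varprojlim_{k} \mathbb{R}/2^k\mathbb{Z},$$
with the natural projections $\mathbb{R}/2^{k+1}\mathbb{Z} \to \mathbb{R}/2^{k}\mathbb{Z}$; the inverse limit is a (dyadic) solenoid, which is not isomorphic to the circle.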
Being in the same Lascar type means being infinitesimally close on every component — on every circle. Yes: in each circle you have all the rotations. Yeah, exactly. So it is not this business — it is not exactly the typical example here, right? For each circle you have all the rotations. Yes, for each circle we have all the rotations, and that is why being Lascar equivalent is being infinitesimally close. In the last two minutes, some positive things. So we restrict to finite tuples. We don't get an isomorphism, but we still have: if T is G-compact, then Gal of tp(acl(a)) is the inverse limit, over finite subsets c of acl(a), of Gal 1L of tp(a,c). With c finite? Yes — just a and c, with c finite. So now a question could be: can these be non-isomorphic? Theorem: Gal 1L of tp(a,c) is a quotient of Gal 1L of tp(a) by a finite subgroup F. And it may actually happen that they are still not isomorphic: even though the group Gal 1L of tp(a) is connected, it may be non-isomorphic to this finite quotient, Gal 1L of tp(a) modulo F — that quotient being Gal 1L of tp(a,c). We have an example where they are not isomorphic; I don't have time for the example. The positive thing is: if this group is abelian, they are isomorphic. So it's mostly negative results — only in a very special case do we have isomorphism, and in most cases they are not exactly the same. Okay, thank you. Thank you.
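For reference, the statements just made can be summarized as follows (as stated in the talk, with types over $\emptyset = \operatorname{acl}(\emptyset)$ and $T$ G-compact):
$$\operatorname{Gal}_L(\operatorname{tp}(\operatorname{acl}(a))) \;\cong\; \varprojlim_{c \subseteq_{\mathrm{fin}} \operatorname{acl}(a)} \operatorname{Gal}^1_L(\operatorname{tp}(a,c)),$$
and for each finite $c \subseteq \operatorname{acl}(a)$ there is a finite subgroup $F \le \operatorname{Gal}^1_L(\operatorname{tp}(a))$ with $\operatorname{Gal}^1_L(\operatorname{tp}(a,c)) \cong \operatorname{Gal}^1_L(\operatorname{tp}(a))/F$. This quotient need not be isomorphic to $\operatorname{Gal}^1_L(\operatorname{tp}(a))$ itself, even when the latter is connected; it is isomorphic to it when $\operatorname{Gal}^1_L(\operatorname{tp}(a))$ is abelian.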
The notion of the localized Lascar-Galois group GalL(p) of a type p appeared recently in the context of model-theoretic homology groups, and was also used by Krupinski, Newelski, and Simon in the context of topological dynamics. After a brief introduction of the context, we will discuss some basic properties of localized Lascar-Galois groups. Then, we will focus on the question about how far GalL(tp(acl(a))) can be from GalL(tp(a)). This is a joint work with B. Kim, A. Kolesnikov and J. Lee.
10.5446/59329 (DOI)
for Alex's previous talk, and then also a preparatory talk for the next speaker. Believe it or not, I contain all the definitions. So if you want to look at a definition, just stop me and I'll show you and spend more time on the definition, even the basic ones. So, as you know, Shelah defined SOP_n theories a long time ago, in his 500th paper, and then nobody really knew what was going on with SOP_n theories. And then suddenly — what did you say? Sorry — I should have said SOP_1: he defined the notion of SOP_1. Very good, sorry. SOP_1 — and I'm the speaker. Okay. And then Zoé actually found a very nice and interesting example: unbounded, omega-free PAC fields. Even though this theory is not simple, it has a very nice notion of independence, with the independence theorem, symmetry, and things like that. So people suspected that there must be something going on. And actually, even before Zoé, a person named Granger — a student of Mike Prest, who left the subject right after his PhD thesis — had already studied the infinite-dimensional vector space over an algebraically closed field with a bilinear form. He didn't say it is NSOP_1, but it seems to be NSOP_1, and it also has a nice independence notion satisfying the independence theorem. And then there is Chernikov and Ramsey's work: they did a lot of nice things in that paper, and in particular they gave a criterion. As you will see from this criterion, an SOP_1 theory is one where the so-called Kim's lemma fails. So once you're an expert in simple theories, as soon as you see their criterion — their re-description of the SOP_1 property — you feel that something should be going on in NSOP_1 theories. And then the breakthrough was made by Kaplan and Ramsey. They really proved symmetry, and the 3-amalgamation — type amalgamation, the so-called independence theorem — and also the extension axiom, all in terms of Kim dividing. And then the question remained whether everything works over sets, and that's the thing I'm going to talk about. Okay, so we work in a saturated model, and you all know dividing, right? Dividing means that there is an indiscernible sequence such that the collection of the formulas along it is inconsistent — that's dividing — and then forking. These are all Shelah's notions. This notation means "does not fork". And why does Shelah use forking instead of dividing? Because extension is not clear for dividing, whereas for forking, by compactness, you have this nice extension property in any theory. Okay, then symmetry: a theory has symmetry over every set if and only if it has transitivity over every set, if and only if it has local character — these are equivalent, and each is one of the equivalent characterizations of simplicity. And you know: unstable means having the order property, and stable implies simple. Okay, now a Morley sequence in the type of a over A is just an indiscernible sequence of realizations that is independent — non-forking — over A. And the fact is that in a simple theory, by the definition of local character, any complete type has a Morley sequence; in a general theory, a complete type need not have a Morley sequence. And this is called Kim's lemma: in a simple theory, dividing is equivalent to inconsistency along some Morley sequence, or equivalently along any Morley sequence. Because the Morley sequence always exists, this "for any" is not a vacuous notion.
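For convenience, here are the notions just recalled in standard notation (a recap, not the speaker's exact wording). $\varphi(x, b)$ divides over $A$ if there is an $A$-indiscernible sequence $(b_i)_{i<\omega}$ with $b_0 = b$ such that $\{\varphi(x, b_i) : i<\omega\}$ is inconsistent; it forks over $A$ if it implies a finite disjunction of formulas, each dividing over $A$. A Morley sequence in $\operatorname{tp}(b/A)$ is an $A$-indiscernible sequence $(b_i)_{i<\omega}$ of realizations of $\operatorname{tp}(b/A)$ with each $b_i$ non-forking over $A$ from $b_{<i}$. Kim's lemma for simple $T$: $\varphi(x, b)$ divides over $A$ if and only if $\{\varphi(x, b_i) : i<\omega\}$ is inconsistent for some (equivalently, for every) Morley sequence $(b_i)$ in $\operatorname{tp}(b/A)$.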
So the "for any" really makes it a meaningful notion. And then, using this Kim's lemma, one can show that forking and dividing coincide in simple theories. And then with Pillay we showed the so-called independence theorem — though people tend to change the name, because there are so many "independences"; people tend to call it 3-amalgamation or type amalgamation — and characterized simplicity by it. Okay, so let me spend some time on examples. We know the standard examples: the infinite set, algebraically closed fields, vector spaces, the random graph, and so on. And the parametrized equivalence relations. Why parametrized equivalence relations? It is just a two-sorted structure, with a ternary relation such that for each g in the parameter sort, it defines an equivalence relation on the other sort, on P or on Q, right? And the random one is the one in which any finite configuration is present — the existentially closed one — that is the random parametrized equivalence relations. Okay, and then PAC fields — pseudo-algebraically closed, meaning every absolutely irreducible curve has a rational point. Omega-free, unbounded means that the absolute Galois group is the free profinite group having omega many generators. This is an interesting example. And the vector spaces: when I was first a student, a vector space in the model-theoretic setting is a structure where you don't have a field sort — you just have the scalars acting. It's funny — a little strange to me — that a line or a plane is not a definable set; it's just given by algebraic closure. But doing it that way, you get nice properties, like strong minimality, and then forking independence is actually linear independence. And I always wondered why we don't name the field sort. If you do name the field sort, Granger proved the following: with the field sort and a bilinear form, you capture the dimension. You get distinct theories T_1, T_2, and so on up to T_n — stable theories with the algebraically closed field sort, capturing the dimension. But now I realize why model theory didn't set up vector spaces in this manner. Why? Because in finite dimension there are only finitely many linearly independent elements, right? But in a general theory you have Morley sequences — infinitely many independent points. What does that mean? It means that forking independence does not capture linear independence in this theory. Now how about the infinite case: the infinite-dimensional vector space with a bilinear form and with the sort for the algebraically closed field. Then it's even worse, because it's a non-simple theory, so forking independence is not even symmetric. So for example, look at this formula in T_infinity, the infinite-dimensional vector space with a named sort for the algebraically closed field: a formula phi(x; b_0, b_1) — so it's just a plane; here are b_0, b_1 and x. This is a formula, and a hyperplane, or a one-dimensional line, or a plane, is definable like that. In the infinite-dimensional theory, this formula definitely divides over the empty set: if you move the parameters b_0, b_1 along this line — the plane being x minus b_0 in the appropriate space, let's say — the corresponding instances become inconsistent. So this plane is a definable set, and this formula divides. But actually it does not Kim-divide over the empty set. Why? The idea is that you move the parameters not along the line but along an independent Morley sequence. Then the plane looks two-dimensional, but the ambient space is infinite-dimensional, so whatever Morley sequence you move along, the planes still have an intersection.
There is still an intersection, so it does not Kim-divide. This is the typical nice example: it divides but does not Kim-divide. And Kim dividing captures linear independence — that's the point. Okay, good. So, you know the tree property, and simple means T doesn't have the tree property. Okay. So SOP_1 — now I can talk about SOP_1. Maybe on this side. SOP_1 is given by a binary tree of parameters, and the funny thing is that whenever you branch into 0 and 1, anything beyond the 0-side is inconsistent with this one parameter on the 1-side. That's it. NSOP_1 is just not having the SOP_1 property, but from this alone you don't really get anything. But there's a nice criterion given by Chernikov and Ramsey for when T has SOP_1: you have, say, a_1 c_1, a_2 c_2, and so on, such that each a_i and c_i have the same type over the earlier part of the sequence, and if you look at the resulting sequence of order type omega plus omega, you can easily see that Kim's lemma fails to hold: one path — one of the two sequences — is consistent, the other is inconsistent, while both continue the same omega-part, right? So as soon as you see this criterion, this re-description of the SOP_1 property, you immediately feel that something should be going on in NSOP_1 theories. And then Kaplan and Ramsey made it work. They introduced the notion of a global Morley sequence: given a model M, we say a global type — a complete type over the monster model — is M-invariant if it is invariant under automorphisms fixing M. Over a model, any type has a global extension which is M-invariant. The point is that over a set, even in a simple theory, you need not have a global invariant extension; but over a model it is a general theorem that any type has such a global extension. Then we say a sequence is a global Morley sequence if there is some M-invariant global type q such that a_i realizes q restricted to M together with a_{<i}. And the a_i can be found in the monster model — sometimes, when I gave this talk at other places, I got confused about this, but this sequence can actually be found inside the monster model. And then a formula Kim-divides over A — over the model M — if there is such a global Morley sequence for which the collection of the formulas along it is inconsistent; and then the type Kim-forks accordingly. And one may be curious why they work with global Morley sequences instead of the usual Morley sequences. My guess is: they want to prove Kim's lemma over a model. With usual Morley sequences, even if Kim's lemma fails, it is not immediate that you get this configuration. But with global Morley sequences, by the invariance, when Kim's lemma fails you immediately get it. So they get this Kim's lemma for Kim independence almost painlessly. And then extension is not so hard. (Still, Kim dividing — yes, Kim dividing over a model, a fixed model; and "for some" versus "for any".) Here "for any" means: all the parameters realize the same type over the model, but the global invariant extensions may be distinct completions; still, invariance ensures that a failure of this Kim's lemma gives you a Chernikov–Ramsey sequence. But the hard part is still symmetry. Still symmetry.
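For reference, here are standard formulations of the notions just described (as in the Kaplan–Ramsey setting; a recap rather than the speaker's exact wording). $\varphi(x; y)$ has SOP$_1$ if there is a tree of parameters $(a_\eta)_{\eta \in 2^{<\omega}}$ such that (i) for every branch $\eta \in 2^{\omega}$, the set $\{\varphi(x; a_{\eta\restriction n}) : n < \omega\}$ is consistent, and (ii) whenever $\nu^\frown\langle 0\rangle \trianglelefteq \eta$, the pair $\{\varphi(x; a_\eta), \varphi(x; a_{\nu^\frown\langle 1\rangle})\}$ is inconsistent; $T$ is NSOP$_1$ if no formula has SOP$_1$. For a model $M$: $\varphi(x; b)$ Kim-divides over $M$ if there is an $M$-invariant global type $q \supseteq \operatorname{tp}(b/M)$ and a sequence $(b_i)_{i<\omega}$ with $b_i \models q\restriction_{M b_{<i}}$ such that $\{\varphi(x; b_i) : i < \omega\}$ is inconsistent. Kim's lemma (Kaplan–Ramsey): if $T$ is NSOP$_1$, this happens for some $M$-invariant $q \supseteq \operatorname{tp}(b/M)$ if and only if it happens for every such $q$.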
Symmetry — for that they had to develop quite innovative notions, like tree Morley sequences. That's the hardest part. And they managed to prove it: symmetry with respect to this Kim independence, and type amalgamation over models. And then later on, using all these techniques, they could prove that Kim dividing can be witnessed not just by global Morley sequences but by any Morley sequence over the model. This comes later: if they had tried to work with that first, they wouldn't even have gotten Kim's lemma; but working with global Morley sequences they got it, and then proved this afterwards. Sir, would you mind giving the definition of Morley sequence, of global Morley sequence? I promised that I have all the definitions here, so even this notion is defined on the slide. A global Morley sequence is a Morley sequence — you mean in the SOP_1 sense? In this sense. A Morley sequence doesn't need to be a global Morley sequence, but a global Morley sequence is a Morley sequence. Not over your model? For my talk. What do you mean? My talk. Okay. Now, in simple theories we had type amalgamation with respect to Lascar types. Lascar type is basically being connected by indiscernible sequences: if there is an indiscernible sequence containing two realizations, you can move one to the other, and you iterate, as far as you can go — that's Lascar type. Okay. Now, from now on, for the rest of my talk, T is NSOP_1 and has non-forking existence. Non-forking existence means that any formula over a set does not fork over that set. A formula over the set certainly does not divide over the set, so if forking were the same as dividing this would be automatic; but it is possible for a formula over a set to fork over that set. Equivalently, any complete type has a Morley sequence in it. We worked reasonably hard to show that NSOP_1 theories actually have non-forking existence, but still could not manage it. Yan actually observed that in a particular case it is true, but in general we haven't figured it out. Without this, you can still say something about Kim dividing, but the definition is vacuous, right? So assume that T is NSOP_1 and has non-forking existence, so that the so-called Kim dividing over a set is not a vacuous notion — there is always at least one Morley sequence. Then "does not Kim-divide" makes sense, and this notion is compatible with the case where M is a model, because Kaplan and Ramsey showed that using global Morley sequences there is equivalent. Okay — any questions?
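In symbols, the assumption and the definition just introduced read roughly as follows (a paraphrase of the conventions stated in the talk). $T$ has non-forking existence over $A$ if no consistent formula over $A$ forks over $A$ — equivalently, as stated here, every complete type over $A$ has a Morley sequence: an $A$-indiscernible sequence of realizations, each non-forking over $A$ from its predecessors. Under this assumption one defines: $\varphi(x; b)$ Kim-divides over $A$ if $\{\varphi(x; b_i) : i<\omega\}$ is inconsistent for some Morley sequence $(b_i)_{i<\omega}$ in $\operatorname{tp}(b/A)$; when $A = M$ is a model this agrees with the definition via global $M$-invariant Morley sequences, by the Kaplan–Ramsey results.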
Right, so for the rest of the slides — oh, one thing I want to mention is an application: the result is that if T is countable and the number of countable models is finite and more than one, then you have the same situation as in stable and simple theories: there is a collection of finite tuples whose weight, so to speak, is strictly omega. This is for NSOP_1 — I told you, didn't I — NSOP_1 with non-forking existence; for the rest we assume T is NSOP_1 and has non-forking existence. Right, but I'm not going to talk more about that. So, for the rest of the time — I have about 15 minutes. Last slide, this one? Right. Yeah — finite weight: people have tried to work in this direction, with weight. There are supersimple theories, but "super NSOP_1" possibly doesn't make much sense, for some reason I don't know; on the other hand, you can talk about finite weight. And finite weight properly contains supersimple. If it were a smaller class, maybe people would not try to develop the theory, but it contains more; so maybe that's the right context if you are really interested in the "super" side of things. Anyway. So now I'm going to talk about the proof, as much as I can for the rest of the time, of this result: under NSOP_1 with existence, I'm going to sketch the proof of this, of Kim's lemma in particular. There are basically three steps. The first step is: under existence — and here is the point where the existence axiom is strongly used — given any Morley sequence I, you can find a model which is non-forking independent from I; the order matters, because non-forking usually doesn't satisfy symmetry, so I is on the left-hand side and the model on the right-hand side; and such that I is a coheir sequence over that model, so it's a global Morley sequence. This proof is not too hard, but it needs some idea; basically it uses the notion of the fundamental order, introduced a long time ago by Poizat, and then used by many people — Pillay, and Lascar used it in his paper, and I used it in another joint paper — and then you can get this. Then claim two: using claim one, you get claim two. Then claim three — I'm not going to say anything about claim three. So we proved claim one first, and then we spent some time and couldn't get it, and then: why not just assume Kim's lemma and prove claim three? Kim's lemma? This one — for some, but for every. Okay. And then, under the assumption of Kim's lemma, you can prove this, and this actually then let us prove claim two. It turns out the proof is not too hard, but you need some idea. I'll talk about the proof of claim two for the rest of my talk. So the point is: let's say the base A is the empty set, and you have two indiscernible sequences, I and J, starting with, say, a_0 equal to b_0, right? Assume that along the vertical Morley sequence the formulas are consistent. Then it's enough to show that along the horizontal Morley sequence they are consistent. That's what you want, right?
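In symbols, the first step and the target statement are roughly the following (a reconstruction of what the slides state, under the running assumptions of NSOP$_1$ and non-forking existence). Claim 1: for any Morley sequence $I$ in $\operatorname{tp}(b/A)$ there is a model $M \supseteq A$ such that $\operatorname{tp}(I/MA)$ does not fork over $A$ and $I$ is a coheir sequence over $M$ — hence a Morley sequence in a global $M$-invariant type. Target (Kim's lemma over sets): $\varphi(x; b)$ Kim-divides over $A$ if and only if $\{\varphi(x; b_i) : i < \omega\}$ is inconsistent for every Morley sequence $(b_i)_{i<\omega}$ in $\operatorname{tp}(b/A)$.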
So now, here is the point: one has to use both Morley sequences. That J is a Morley sequence is used, and that I is a Morley sequence is used as well. Since J is a Morley sequence, by a standard argument you can find these L-shaped copies such that every component has the same type over J; that is just using that J is Morley. Moreover, each such piece together with the rest of J has the same type as J. So you have this configuration, and then you move the L-shape a little to the right-hand side, so that all of these have the same type over that. Now you unfold this array and make a tree, an omega-branching tree of height omega. How do you do that? First, everything has the same type over this; take an automorphic image fixing the rest and moving this to this, and you get a fan shape; then move this to this, and again they realize the same type over this; you just keep expanding, unfolding the array, until you find the tree. So you get a tree indexed by omega^{<omega} such that any path has the same type as J, while each sequence of siblings, that is, the immediate successors of any node alpha, has the same type as I. My I was originally the vertical sequence, and J is now a path. Next we push this through the modeling theorem, from joint work with Hyeung-Joon Kim and Lynn Scow. I am not going to talk about the details of the modeling theorem, but what you get is a sequence of levels L_0, L_1, L_2, and so on, obtained by compactness and a Ramsey-type argument (good old Ramsey), which we may take to form another suitably indiscernible array. Moreover, the baseline is now basically J, because the picture has been rotated a little, so this is J; and thanks to the modeling theorem and this indiscernibility, everything in a given position has the same type. I am not saying that everything has the same type as J, but anything sitting at positive values has the same type as the one just above it. After this twist it is no longer clear that every path has the same type as the baseline, but at least the following holds: given any function from omega to the positive integers, the corresponding path has the same type as this one. And here you can use that these two sequences form, again by indiscernibility, a Chernikov-Ramsey configuration, so consistency and inconsistency must be preserved, because I am working in an NSOP1 theory; otherwise this would provide a witness to SOP1. What is a CR sequence? Right, maybe somebody wants the definition: the point is that if the a_i-part behaved otherwise, that would give SOP1, and in an NSOP1 theory this does not happen. So whenever you have this condition, consistency is preserved. Therefore it is enough to find some positive-valued function g such that the set of instances along that path is consistent; finding such a g, the instances along that path are consistent, hence, by NSOP1 and the preservation just described, so are the instances along the baseline, which is what we need. Okay, now I am going to use Claim 1.
Claim 1 comes in as follows. Originally, that J is a Morley sequence was used to get this tree, the omega-branching tree. Now, since I is Morley as well, you can apply Claim 1 and find a model M, nonforking independent in the appropriate direction, such that the sequence is M-indiscernible; in fact not just indiscernible: each vertical line is actually a coheir sequence over M, that is, a global invariant Morley sequence. You may assume the types are preserved, so by Claim 1 we may assume all of this is an M-indiscernible configuration with these vertical coheir sequences. So we start from that. Now, because the collection of instances along a vertical line is consistent, the formula at, say, a_1 does not Kim-divide over M. And now we use Kaplan-Ramsey type amalgamation over a model, because we have found a model here. By pigeonhole, some subsequence must have the same type over this first element while still being a global Morley sequence; that means you can find something here which is independent, and the formula is already known not to Kim-divide over M, so by Kaplan-Ramsey the relevant set is consistent. For the second line, again by pigeonhole something must have the same type over these two, so you can find the next step of the path; then a third, and so on: at each stage pigeonhole gives a subsequence with the same type over the elements chosen so far, it is independent, and the independence theorem over the model lets you continue. In this way you build the path. That is it. I still have five minutes; I actually prepared several versions of the ending, for ten minutes left, five minutes left, and so on. Okay, so, a recent observation, or rather a recent result, mainly by Ramsey, or Kaplan and Ramsey, or Kaplan, Ramsey and Shelah: under the same assumptions, local character holds. Roughly, for any finite tuple d and any set A there is a subset A_0 of A such that d is Kim-independent from A over A_0. This comes from the fact that there does not exist a finite d together with a continuous increasing sequence of sets A_i, each of size at most the size of T, of length the successor of the size of T, such that the type of d over A_{i+1} Kim-divides over A_i for every i. Here the continuity is important, the size bound is important, and the length bound is important; everything else can happen. For instance, in the random parametrized equivalence relations you cannot find a continuous increasing sequence of countable sets, of that uncountable length, such that each step Kim-divides. So I think that "superstable", or "super-NSOP1" in the precise sense of supersimplicity, probably does not make much sense; and also, if you do not have the freedom of... what did you say just now, what does not make sense?
I will talk about that. If you allow freedom in the choice of the sizes of the sets, then you can have an increasing chain of arbitrary length with Kim-dividing at each step; and you can also have non-continuous such chains even with the size restriction. So the continuity, the size bound, and the length bound are all essential; everything else can happen. And, actually, transitivity holds as well; believe it or not, all the usual axioms of an independence relation are satisfied, and the only thing that fails is one direction of full transitivity, namely base monotonicity. One more result of theirs: phi(x, a_0) Kim-divides over a set A if and only if, for any sequence (a_i) which is Kim-independent in the sense that each a_i is Kim-independent over A from the earlier ones, with each a_i realizing the type of a_0 over A, the set of instances phi(x, a_i) is inconsistent. Believe it or not, this is actually a very important theorem; the question was asked originally in the Kaplan-Ramsey paper, and it turns out to be true under a much weaker condition. So this is very nice. I stop my talk here. So, now there is time for questions.
Let T be an NSOP1 theory. Recently I. Kaplan and N. Ramsey proved that in T, the so-called Kim-independence (ϕ(x,a0) Kim-divides over A if there is a Morley sequence ai such that {ϕ(x,ai)}i is inconsistent) satisfies nice properties over models such as extension, symmetry, and type-amalgamation. In joint work with J. Dobrowolski and N. Ramsey we show further that in T with nonforking existence, Kim-independence also satisfies these properties over arbitrary sets; in particular, Kim's lemma and 3-amalgamation for Lascar types hold. The modeling theorem for trees from a joint paper with H. Kim and L. Scow plays a key role in proving Kim's lemma. If time permits I will talk about a result extending the non-finiteness (except 1) of the number of countable models from supersimple theories to the NSOP1 context.
10.5446/59330 (DOI)
the general setup of what machine learning looks like in general, and what it looks like in the specific case of equivalence query learning. The general setup: we start with a set X, and a concept class is just a collection of functions from X to {0,1}, which we think of as coding subsets of X; we freely identify subsets with the corresponding functions. For our purposes this can usually be thought of as generated by the uniformly definable sets of some formula in some model. What typically happens in machine learning is that the learner knows the concept class, and there is some target concept they are trying to learn. They receive some data about it in some shape or form, and then they have to learn; what exactly it means to learn, and how the learner receives data, depends on which notion of machine learning you are looking at. If the learner has complete knowledge, what are you trying to learn? The idea is that I know what the entire concept class is, and someone else is hiding a particular concept that I am trying to identify: you know what all the possibilities could be, but you need to identify which one it is. There are several connections between notions of machine learning and notions of complexity in model theory. Perhaps the most well known is the connection between probably approximately correct learning, or PAC learning, and NIP formulas, and the connection between online learning and stable formulas. What I would like to talk about is a connection between equivalence query learning and formulas which are stable and NFCP. Equivalence query learning is a little different from some other notions of learning. In other notions the learner receives samples; for example, in PAC learning you get a sample drawn randomly with respect to some distribution, and how well you learn reflects the quality of the data you get. In equivalence query learning we ask to learn everything, and the way we do this is by guessing what the entire concept is. We can make guesses from a hypothesis class, which may be the concept class or may extend it. In an equivalence query we submit a hypothesis; if we have got it exactly right, we are told so and we are done; otherwise we receive a counterexample from the symmetric difference, so we are told why we are wrong. The concept class is equivalence query learnable, or EQ-learnable, with queries from a hypothesis class H if there is some fixed natural number N such that any concept in the concept class can be identified with at most N equivalence queries. Why would you want to select from a hypothesis class larger than the concept class C? In some cases being able to select from a larger hypothesis class can tell you more; you will see in the example. Here is a very simple example: let X be an infinite set and let C consist of all the singleton sets. Essentially, I am hiding a number behind my back and you have to guess which one it is; if you guess wrong, I can either tell you that the number you guessed is wrong or tell you the number I am hiding. This is not equivalence query learnable if all you have is the concept class itself.
That is because I can always tell you, no, that is not the right number, try again. But it is equivalence query learnable if your hypothesis class includes the empty set as well: you can query the empty set, I have to give you a counterexample, and the only possible counterexample is the number you need to find. This is a simple example, but it is illustrative of what your hypothesis class needs to contain. So the main question we are asking is: what are the conditions on C and H that make equivalence query learning possible? The first condition, which is completely unavoidable, is a stability condition on C. Here is what I will call a binary element tree of height 3: three levels, labeled with elements (possibly distinct, possibly not), and sets labeling the leaves. What I want is that, for any given leaf, membership of its predecessors in the labeling set is reflected by the path we take to get to that leaf, identifying going left with non-membership and going right with membership. For example, c_{101}: I do want it to contain a_∅, I do not want it to contain a_1, and I do want it to contain a_{10}; there are no conditions on any other memberships. The Littlestone dimension of a concept class is just the largest n such that we can properly label a binary element tree of height n. This is just Shelah's 2-rank for set systems, and when we generate our set system (our concept class) from a formula, it has finite Littlestone dimension if and only if the formula is stable. It is a basic fact of equivalence query learning that we can learn with equivalence queries when the hypothesis class is the entire power set if and only if the Littlestone dimension is finite, if and only if our formula is stable, and we can do so in at most Littlestone dimension plus one many queries. In the case where the hypothesis class is the full power set, this is essentially equivalent to online learning. What is online learning? Online learning is a notion where a teacher feeds you examples one by one, you have to guess yes or no for each, and you are allowed to make only finitely many mistakes. So we need the Littlestone dimension of our concept class to be finite, but we can ask: can we use a smaller hypothesis class H? To answer this we need some definitions. A partially specified subset is a partial function from X to {0,1}: it decides, for some elements, whether I want them in A or not, and is agnostic about the rest. The domain of A is everything A has an opinion about, and the size of A is the size of its domain. Given two partially specified subsets A and B, we say B extends A, and A restricts B, if A and B agree on the domain of A; B may have opinions about things A does not. In general you work with these partially specified subsets? Yes. It is worth saying that some work on equivalence query learning has been done in the finite case, which is what most machine learning theorists focus on; generalizing all of this to the case where X and C are infinite is new.
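To make the protocol concrete, here is a minimal sketch in Python of the equivalence-query game for the singleton example above. The class and function names are mine, not from the talk, and the teacher is simplified to one that returns an arbitrary element of the symmetric difference; the point is only that adding the empty set to the hypothesis class forces the teacher to reveal the hidden element.

class Teacher:
    """Holds a hidden target subset of X and answers equivalence queries."""
    def __init__(self, target):
        self.target = set(target)

    def equivalence_query(self, hypothesis):
        """Return (True, None) if hypothesis equals the target, else (False, counterexample)."""
        hypothesis = set(hypothesis)
        if hypothesis == self.target:
            return True, None
        # Any element of the symmetric difference is a legal counterexample.
        return False, next(iter(self.target ^ hypothesis))

def learn_singleton(teacher):
    """Identify a hidden singleton in at most 2 queries by first querying the empty set."""
    correct, counterexample = teacher.equivalence_query(set())
    if correct:  # cannot happen when the target really is a singleton
        return set()
    # The only counterexample to the empty hypothesis is the hidden element itself.
    guess = {counterexample}
    correct, _ = teacher.equivalence_query(guess)
    assert correct
    return guess

print(learn_singleton(Teacher({42})))  # prints {42}

Without the empty set available as a hypothesis, a teacher hiding {b} can answer any guess {a} with the counterexample a, which carries no information about b, so no fixed bound on the number of queries is possible; this matches the discussion above.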
We say that a partially specified subset is n-consistent with our concept class C if every restriction of it of size n has an extension in C, and that it is finitely consistent if it is n-consistent for all n. The consistency dimension of a hypothesis class H with respect to C is the least integer n such that for every subset D of X (thought of as a totally specified subset), if D is n-consistent with C then D belongs to H. You can think of this as witnessing a sort of strong compactness, where the compactness is realized inside H, and it gives a condition allowing us to learn when the concept class is infinite. Going back to the example where C consists of the singleton sets, we have finite consistency dimension if and only if the empty set is in the hypothesis class. The first main theorem is that if C has Littlestone dimension d and the consistency dimension is c, then C is equivalence query learnable with at most c^d many queries from H. I will not say much about the proof, but the prevailing strategy in everything we do is to reduce the Littlestone dimension of the class of remaining candidates as we learn information; when it reaches 0 we have narrowed things down to a single concept and we are done. Can you repeat the n-consistency point? n-consistency says that every restriction of size n is consistent with C. For example, in the singleton example the empty set is 1-consistent, in fact finitely consistent: any finite restriction of it just says that finitely many elements are excluded, and I can choose my singleton from C on a different element. Did you say before that if H is the full power set and the Littlestone dimension is d, then it is EQ-learnable with at most...? Littlestone dimension plus 1, yes, d plus 1. So it seems like c^d is going to be worse. It is worse; there are fewer possible guesses. Having the full power set may be seen as having a little too much freedom in what we can guess, so can we get away with less? Yes, but the penalty is that it takes longer. Now, what is the connection to model theory? First definition: we say phi does not have the finite cover property, NFCP, if there is some finite n such that for every partial phi-type, if every restriction of size n is consistent, then the type is consistent. It is worth pointing out that this is a variant of what I think was the original definition: I am quantifying over all partial phi-types, whereas the original definition had only positive instances of phi. And a partial phi-type here can contain both instances and negations of phi? Yes, both positive and negative instances. This is not equivalent to the original definition at the level of formulas, but it is equivalent at the level of theories. So this is related to not having the finite cover property: if the opposite formula does not have the finite cover property (opposite only because of the way we have chosen the parameters here), then taking H to be all externally definable sets, the consistency dimension will be finite.
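For reference, the quantities in the theorem just stated can be written as follows; this is my paraphrase of the definitions given above, with Ldim denoting the Littlestone dimension:

\mathrm{Cdim}(\mathcal{H},\mathcal{C}) \;=\; \min\{\, n : \text{every } D \subseteq X \text{ that is } n\text{-consistent with } \mathcal{C} \text{ belongs to } \mathcal{H} \,\},

and the first main theorem reads: if \mathrm{Ldim}(\mathcal{C}) = d and \mathrm{Cdim}(\mathcal{H},\mathcal{C}) = c are both finite, then \mathcal{C} is learnable with equivalence queries from \mathcal{H} using at most c^{d} queries.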
So if the opposite formula is NFCP and we take all externally definable sets, then we can learn. Is there a difference on the other side? Yes, I will say that shortly. There is a sort of boundary between the two ways in which finite consistency dimension can arise. We say C has consistency threshold n if, whenever the consistency dimension is finite, it is at most n. The following are then equivalent: C has finite consistency threshold; the consistency dimension is finite if and only if H contains all finitely consistent subsets; in the language of formulas, the opposite formula is NFCP; and equivalently, it suffices for H to contain all externally phi-definable sets. So in the NFCP case, taking all externally definable sets gives a minimum hypothesis class under which learning is possible. Outside of the NFCP case, outside of the finite consistency threshold case, we must have access to some sets beyond the finitely consistent ones in order to learn. In this theorem, one of the directions is always true? So you are saying that if you do not have finite consistency threshold, then you must have more than the finitely consistent sets. Yes. And do you also need all the finitely consistent sets? Yes: in general you always need all of the finitely consistent sets. In the NFCP case, the finite consistency threshold case, that is all you need; otherwise you need more, but you definitely need all the finitely consistent ones. As Gabe pointed out, the theorem gives an upper bound of c^d, and that is not good. So the question is, can we improve these bounds? The answer, without modification, is no. We will discuss a possible modification, but let me first give a quick example of why we cannot improve significantly if we change nothing. Fix c and d. Take the base set X to consist of distinct elements labeled by c-ary sequences of length at most d, and take the sets to be indexed by c-ary sequences of length at least d, with the set indexed by tau containing exactly the elements whose index is an initial segment of tau. Take both the concept class and the hypothesis class to be just these sets. Then the Littlestone dimension is d and the consistency dimension is c+1, but it may still take as many as c^d queries to learn. Essentially we are still in the situation of guessing singletons: whatever you guess, I can always choose my counterexample to be the element indexed by a tau of length d. The other elements have been added in a way that artificially inflates the Littlestone dimension and deflates the consistency dimension, yet it can take c^d many queries to learn, while the theorem gives (c+1)^d, so there is not much room for improvement. Unfortunately we do not have an improvement, but we have some idea of where one might come from, and that is strong consistency dimension.
This is basically the definition of consistency dimension, with the change that I now quantify over all partially specified subsets rather than only the total ones: for every partially specified subset, if it is n-consistent with C, then it has an extension in the hypothesis class H. This is known to be a better metric. The distinction between consistency dimension and strong consistency dimension is very subtle, and it seems to be mostly of relevance to algorithms; most non-quantitative results can be proved for both in the same way. For example, one can show that C is learnable with equivalence queries (saying nothing about bounds) if and only if C has finite Littlestone dimension and C and H have finite strong consistency dimension. It is immediate from the definition that strong consistency dimension is bounded below by consistency dimension, but it may be quite a bit larger: in the example before, the strong consistency dimension was c^d. So the hope is that strong consistency dimension will provide a better measure of how many queries are needed to learn. This is known in the finite case: there one can learn in at most strong consistency dimension times the log of the size of the concept class many equivalence queries, and the hope is to find some way to replace the log of the size of the concept class with the Littlestone dimension. This is a fairly common thread in these connections: in generalizing things to the infinite case we often look to replace a log term with the Littlestone dimension. So the question is whether there is a bound on the number of queries of the form strong consistency dimension times Littlestone dimension. We do not know; we are working on it, but that is the hope, and that is what the previous result suggests. Let me finish with a few remarks and questions. First remark: we were looking for smaller hypothesis classes, and you might be concerned with finding a simple hypothesis class, where simple might mean in the sense of Littlestone dimension. We can always find a hypothesis class with finite consistency dimension without growing the Littlestone dimension: in the NFCP case, taking all externally definable sets is fine; otherwise you have to be a little careful, but it is not hard. Let me also mention some other relevant notions of learning. First, there is learning with both equivalence queries and membership queries; in a membership query you ask the teacher whether a particular element belongs to the target set, and you get an answer. In that case consistency dimension is good enough, and one can learn with at most consistency dimension times Littlestone dimension many queries, counting both equivalence and membership queries. Another thing you might wonder about is that the notion of equivalence query learning we have discussed assumes the teacher providing the counterexamples is adversarial.
You might wonder what happens if the counterexamples are drawn randomly instead. In that case the Littlestone dimension is sufficient, and we do not really need to think about the size of the hypothesis class at all, at least in the finite case; a question is whether this can be made to work when the concept class is infinite. A much broader and more general question is whether we can find further connections between machine learning and model theory. As I mentioned, there are connections with NIP formulas and with stable formulas; one question is whether we can find something that corresponds to the independence property. A couple of candidate notions of learning have been devised, but people have not been able to prove both directions of the correspondence, and that is not my work. There may also be other notions of machine learning in the literature for which we have not yet identified a connection. That is all I will say. Thank you. Any questions?
There are multiple connections between model-theoretic notions of complexity and machine learning. NIP formulas correspond to PAC-learning by way of VC-dimension, and stable formulas correspond to online learning by way of Littlestone dimension, also known as Shelah's 2-rank. We explore a similar connection between formulas without the finite cover property and equivalence query learning. In equivalence query learning, a learner attempts to identify a certain set from a set system by making hypotheses and receiving counterexamples. We use the notion of (strong) consistency dimension, an analogue of the negation of the finite cover property for set systems. We show that finite (strong) consistency dimension and finite Littlestone dimension characterize equivalence query learning, drawing on ideas from model theory. We also discuss the role of Littlestone dimension and strong consistency dimension in algorithms.
10.5446/59331 (DOI)
I want to thank the organizers both for inviting me and for allowing this talk, because I see it as going in the opposite direction of a great many things. First, in neostability as I understand it, you start with things you understand, the stable theories, and then you try to broaden and go out; here we start with countable superstable theories and go in, adding more and more conditions, trying to get at the heart of what is going on. Second, many of the talks feature brand new results right from the beginning; here most of the material is thirty-five years old. In some sense it would have been easier to give this talk twenty or twenty-five years ago, when "classifiable" was more in the common jargon. People aged fifty and over can tune out for the first part; but there is a plus for the younger people: the definitions, though equivalent to the usual ones (if you think you know what classifiable is, it will be exactly the same), are developed in an altered and, I think, more motivating way. That matters, because if, even twenty-five or thirty years ago, you began a talk with "let T be countable superstable with NDOP and NOTOP", the entire audience would walk away. So let us start: what are classifiable theories, and what is the game? Throughout the talk T is countable and superstable, and when I define new notions they will all be equivalent, under the assumption that T is countable and superstable, to the usual ones. As people are familiar, Shelah's definitions are much more general; here I want to claim this version gets more to the heart of what is going on. Everything concerns an independent triple of models; we know what they are, or, as Leo said, put out two fingers and it is a V of models. The idea is: how can we complete the union M1 ∪ M2 to a model, adding as little information as possible? A natural first try: sometimes we are lucky and the algebraic closure of M1 ∪ M2 is a model, or even the definable closure, which would be good but, as we will see, not quite sufficient. So let us get started: what do we have for just a countable superstable theory? First of all, there is the old notion of being l-isolated, which is either "locally isolated" or "Lachlan isolated" (it is a variation; he had the idea in an early paper). The idea is isolation, but local: for every formula theta(x,y) there is some formula in your type which isolates the theta-part of the type. I want to view this as a poor man's isolation. The advantage is that for countable superstable T (in fact, for this particular fact, stability alone suffices), the l-isolated types are dense: for any set B and any consistent formula, there is an l-isolated type over B containing that formula. The key thing going on is countability: you just enumerate all the formulas theta and choose witnesses of smallest theta-2-rank, all the way down. This gives a type, and we are in great shape. So then, for any set B, there is an l-constructible model N over B: that just means we start with B, realize an l-isolated type to get a bigger set, realize another l-isolated type over that, and keep going until we arrive at a model.
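In symbols, the notion just described is roughly the following; the exact bookkeeping of which instances of theta are included is my reading of the spoken definition:

p \in S(B) \text{ is l-isolated} \iff \text{for every formula } \theta(x,y) \text{ there is } \varphi(x,b) \in p \text{ with } \varphi(x,b) \vdash p\restriction\theta,

where p\restriction\theta denotes the instances and negated instances of theta occurring in p; and the density statement is that, for countable T, every consistent formula over a set B belongs to some l-isolated type in S(B).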
Great. So, given any independent triple, there is certainly an l-constructible model M-bar over the union M1 ∪ M2; just stability, together with what I have said, gives this. Superstability is useful in the following way: if we have the witnessing formulas phi(x, b-bar), one can say that, taking a formula of minimal R-infinity rank in the type of c over B, the witnesses can be found in the algebraic closure of the parameters needed. For this, stability alone suffices, but we will implicitly be using superstability later. Now, we cannot do much with just l-isolation and l-constructibility; it is a nice thing, but to get something better, note that for a countable superstable theory there is a wonderful test for aleph-one saturation: for every regular type over N, restricted to a countable set over which it is based and stationary, the dimension is at least aleph-one. This is an old result of Shelah. In other words, among countable superstable theories, aleph-one saturation just says that all dimensions are large. This suggests a definition: the theory T has no new dimensions if, whenever we take an independent triple of aleph-one saturated models and an l-constructible model N-bar over the union, N-bar is also aleph-one saturated. Referring back to the test, how could this be? It means that every regular type over N-bar is non-orthogonal, in other words highly related, to a regular type over N1 or over N2. I am not going to use the word, but for those over fifty: among countable superstable theories, having no new dimensions is equivalent to NDOP. I would certainly welcome a more modern development of this, and "no new dimensions" is clearly not the optimal term; if anyone has a suggestion for what should go here, I will take it. That is certainly an option, but in my mind this is really what is going on; we can debate it afterwards. You have to really stress the N; it is in NDOP. No: no new dimensions. Oh yeah, the alliteration gets you. Okay. Let me give an example where this can act up: for those who know it, the eni-DOP checkerboard. The picture: three sorts U, V, and W, with W the arena in between. The language, as written there, has the three sorts U, V, W, a relation R which is a subset of U x V x W, and functions F_n. Looking at R: for a in U and b in V, they point to a box R(a, b, -) in W, and as (a, b) ranges over U x V these boxes partition W. Each F_n goes from U x V into W, and F_n(a, b) lands in the (a, b)-box; the F_n(a, b) for n in omega are contained in R(a, b, -) and are distinct. So inside each box, just from a and b, we fill the box with infinitely many elements. Now, in terms of the independent triple: here is M0, here is M1, here is M2. We might have some new a in U(M1) and some new b in V(M2), so I need to fill in their box, but the F_n take care of that.
M-bar is just the definable closure of M1 ∪ M2; it is the only l-constructible model, and it is both prime and minimal, but many new dimensions appear, because there is the type saying "I am in this box and I am not one of the F_n(a,b)", and you have complete freedom about how many times to realize it, if at all. This theory actually has 2^kappa models in every infinite cardinality kappa, including countably many non-isomorphic countable models; indeed among the countable models one can code arbitrary graphs this way. So this is bad, even though M-bar is the definable closure of M1 ∪ M2; but we are going to rule this situation out. Now, what can we say if T is countable superstable with no new dimensions? Given any independent triple, we know there are l-constructible models; in fact it turns out that every l-constructible model is minimal over M1 ∪ M2: there is no proper elementary substructure of M-bar containing M1 ∪ M2. And a very curious thing, which is actually a new result: every enumeration of M-bar is an l-construction sequence over M1 ∪ M2. Do M0, M1, M2 have to be countable here? No, sorry: here they can be arbitrary. But even for countable models this is new. Think of the theory of pure equality: you could enumerate a model leaving one element for last, and that remaining element is not isolated over the rest. So this is a really strong property. Note that it does hold if you take the algebraic closure of M1 ∪ M2, which is helpful. But sadly, even though l-constructible models are nice, there can be continuum many pairwise non-isomorphic ones over M1 ∪ M2 (here I really should say: when M1 and M2 are countable). If M-bar happened to be constructible over M1 ∪ M2 (constructible is the same as l-constructible, except that at each step you really have something isolated over what came before), then M-bar would be unique. Great. The way we get a lot of these things is to think of independent triples as approximations to models, equipped with a notion of extension. So here is (M0, M1, M2); we extend to (N0, N1, N2) by taking N0 independent from M1 M2 over M0, then N1 dominated by M1 over N0, and N2 dominated by M2 over N0. These extensions are very nice: for the first clause, M1 ∪ M2 is a Tarski-Vaught subset, and this just uses stability (a formula with parameters downstairs that has a solution upstairs has a solution downstairs); and, for independent sets B, the notions of being l-atomic and being atomic over the union coincide. Atomic just means that every finite tuple is isolated. One other idea: we need something slightly better than an elementary submodel. For any formula phi(x, b) and any finite set F contained in M, we want: if there is a solution of phi upstairs, in the monster, that is not in the algebraic closure of F, then we can find an a' realizing phi in M while staying away from the algebraic closure of F. View this as a slight strengthening of being an elementary substructure, playing keep-away with algebraic closures of finite sets. An na triple is just an independent triple in which each of the M_i is an na substructure of the universe.
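A symbolic version of the "na" condition just described, with \mathfrak{C} the monster model and acl the algebraic closure; again, this is my transcription of the spoken definition:

M \preceq_{na} \mathfrak{C} \iff \text{for every } \varphi(x,b) \text{ with } b \in M \text{ and every finite } F \subseteq M:\quad \varphi(\mathfrak{C}) \not\subseteq \operatorname{acl}(F) \;\Rightarrow\; \varphi(M) \not\subseteq \operatorname{acl}(F),

and an na triple is an independent triple (M_0, M_1, M_2) in which each M_i \preceq_{na} \mathfrak{C}.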
And again, just using countability, this is like the downward Löwenheim-Skolem theorem: given any countable set, you can find a countable na substructure containing it. Nothing to it. Thus any countable triple has a natural extension to an na triple: just go component by component, first M0 to N0, and then N1 and N2. Great. So we are now faced with the question: given, say, an na triple (M0, M1, M2), we really want a construction sequence for M-bar. Say we have gone part of the way and have a set B; it is not necessary for what I am saying, but implicitly we have good control, for example B is atomic over M1 ∪ M2. Then we could certainly pass to its algebraic closure, and if B equals M-bar we are done and happy. If not, then we can find a type-definable semi-regular group G: first get a semi-regular witness in the difference (this uses na), and then a type-definable semi-regular group together with a definable transitive action of G on the space of realizations. Semi-regular? Semi-regular means Q-simple and domination-equivalent to a product of copies of Q: the only relevant regular type is Q, and it is Q-simple. Now, by Hrushovski, as most people know, a type-definable group can always be extended to a definable group. That is fine, but the key question, which we will keep seeing, is: when you pass to that, is there a definable connected semi-regular group extending it? The worry is that everything is fine on G, but when we pass outside we suddenly get a wealth of cosets; in other words we could have continuum many generic types in H. Just for this step, how do you find this Q? In more detail: among all witnesses c in the difference, I pick one of smallest R-infinity rank; there will be something semi-regular in its definable closure. By NDOP it is non-orthogonal to either M1 or M2, each of which is na. Then I pick a formula theta over M1, of smallest R-infinity rank, non-orthogonal to Q, and it follows that c is non-orthogonal to Q; let me write it out: we get that c is independent, over theta of the monster, from B. Where does Q live? C is some regular type over M1; I pick c, take something in the definable closure so it is semi-regular, and it is non-orthogonal to, say, M1; I pick theta, a formula in L(M1) of smallest R-infinity rank non-orthogonal to Q; and by the double use of minimality, picking c of smallest R-infinity rank in the difference and theta of smallest R-infinity rank here, we end up with this statement. Right, it is non-orthogonal, so we get the binding group; that is what is going on. The thing we cannot necessarily control is whether one of the definable extensions is connected. If it is connected, then the type of c over B is isolated by the obvious formula. G itself will be connected, yes, because of the transitive action: there is a unique generic. But when we extend, we have this problem.
So this is the threat that is going on. And what is annoying, what makes things difficult, is that we survive if there is a definable connected semi-regular group, but we might also survive even if there is not; it is only a one-way condition. So let us give a name to the obstruction. Anand is in pain, but if anyone can think of something better, I am open; it exactly fits the situation, it is what gets in the way. Q is troublesome if there is no definable connected Q-semi-regular group H; and this really depends only on the non-orthogonality class of Q. The conclusion, one safe way out: if T is countable superstable with no new dimensions, and we start with M1, M2 and build B step by step, and we never encounter a troublesome type, then M-bar is constructible, hence prime and minimal, and hence unique over M1 ∪ M2. What exactly is the condition on G and H? G is a type-definable connected group, and the question is whether there is an H extending it with a certain property. Does H also have to be semi-regular? Yes, H will still be Q-semi-regular; but the question is that we might have introduced a whole orbit of generics, and we ask whether we can shrink down to a definable connected group. And G itself is definable? It need not be G itself: what is relevant is simply, given Q, whether there is a definable connected Q-semi-regular group anywhere. There is a strong interplay between definable, as opposed to type-definable, and connected. Great. The easy examples: if T is omega-stable, or trivial, there is no problem at all, so those cases are less interesting for us. Now, finally, Shelah started off with what is essentially a bald-faced decree; here it is. T is classifiable if it is countable, superstable, has no new dimensions, and for every countable independent triple there is a prime and minimal, hence unique, model M-bar over it. This says nothing new if T is omega-stable or trivial. A couple of comments. First, minimal is redundant: you get the same notion by requiring only that there is a prime model M-bar over M1 ∪ M2. On the flip side, the eni-DOP checkerboard, which I erased, shows that even requiring a prime and minimal model M-bar over M1 ∪ M2 is not enough: "no new dimensions" still needs to be stated. Third, in the definition I gave we have countable models and a prime model; Shelah in fact showed, by a complicated AEC argument, that if T is classifiable then M-bar is actually constructible over M1 ∪ M2 for all independent triples of this kind. New proofs of this come out of our analysis; we do not need to go through the AEC argument. Okay, so that is what classifiable is. To me this is almost just a demand that it happens, but there is some justification: if T is classifiable, good things follow. Namely, given any independent tree of countable models, there is a prime and minimal model over it; this is easy, you just iterate what we have. More interestingly, starting with any N, no matter how large, we can find a maximal independent tree of countable na substructures of N, and then N is prime and minimal, hence unique, over that tree. So all the information about N is captured in this independent tree, and that is what really allows the counting of the number of models of a given large size.
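Putting the decree into one line (my summary of the definition just given, not a quotation of Shelah's original formulation):

T \text{ is classifiable} \iff T \text{ is countable, superstable, has no new dimensions, and for every independent triple } (M_0, M_1, M_2) \text{ of countable models there is a prime and minimal (hence unique) model } \overline{M} \text{ over } M_1 \cup M_2.

As noted above, "minimal" is redundant here, while "no new dimensions" is not.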
And this is why Shelah finds it so interesting: if a countable T is not classifiable, then the uncountable spectrum is maximal. These are certainly nice consequences, but to me there remains this wild statement: we simply declare that there is always a prime and minimal model over an independent triple. What have we really done? Number one, it does not rule out troublesome types; they can still exist, but they are highly constrained. Troublesome, the way I defined it, involved something non-trivial: there is a definable group with regular, or at least semi-regular, generics non-orthogonal to Q. The first constraint is that Q must be locally modular; the non-locally-modular ones are eliminated, and the argument is a rather cool shell game: you take an explicit witness to non-local-modularity and twist things around. This part actually appears in an old paper of Hrushovski and Shelah, the dichotomy paper; so that is old. Thus Q is locally modular, and then we can restrict to simply having a definable group with regular generics. And Q has an additional property which is called limited. That is: for a definable group, non-forking is given by a division ring of definable quasi-endomorphisms (and here I insist they be definable, as opposed to type-definable), and limited says that this has bounded size; equivalently, if H is defined over a parameter set A, every one of these quasi-endomorphisms is definable over the algebraic closure of A. If this seems unfamiliar, it is because anything of low rank, anything of finite rank, indeed any Q non-orthogonal to a type of U-rank one, is automatically limited; the first time this can fail is at infinite U-rank. Some surprising good news, which will be handy for us: if we have such an H in a classifiable theory, then we can add a new sort, a predicate P interpreted as the subgroup of non-generic elements of H of the monster. First of all, simply because H is a regular group, the non-generics form a subgroup; that is automatic. And if we add this predicate, the resulting theory remains classifiable and the U-ranks of all old types are preserved: the U-ranks do not go up. This is a very mild expansion, and it turns out to be quite handy. So: troublesome types can still be around. But in my paper with Bradd and Udi, when we were counting the number of models, the relevant situation was this: we have some N, or some N-star that we are trying to analyze, and some M, and we want to look at weight-one extensions at the top level, N over M of weight one, meaning non-orthogonal to some particular regular type P. Because of the assumption that no new dimensions are added, if this sits at the top, in other words at depth zero, then P is non-trivial, so it is either non-locally-modular or locally modular non-trivial, and we handle the two cases one by one. The first case is the non-locally-modular one: if we take M to be an na substructure of, say, N-star, and P is non-orthogonal to it, then there is a strongly regular representative Q of the class.
In other words, there is a P-simple formula phi(x) over M such that, whenever phi(a) holds and the P-weight of a over M equals one, necessarily the type of a over M is this type Q. So there is a unique choice within phi. Secondly, and I find this really amazing: domination. If c over B is P-semi-regular, then any way of extending further by something dominated by c over B gives an isolated type. This second condition is quite strong, and it has the consequence that if N over M is non-orthogonal to P (again, the use of it is when this is regular, but even more generally), if c over M is P-semi-regular and N is l-constructible over Mc, then N is in fact constructible and minimal, hence unique, over Mc. Putting these two together: in a decomposition, if we have M and N sitting inside N-star, with N over M of weight one and non-orthogonal to a non-locally-modular type, then N is unique up to isomorphism over M. There is only one way of doing it, which is certainly very relevant to the spectrum computations. Okay, maybe more interesting is what happens if P is locally modular and non-trivial. There are two separate questions. The first: if P is non-orthogonal to M, is there a strongly regular representative? Maybe yes, maybe no, and it is governed exactly by troublesomeness: there is a strongly regular representative if and only if P is not troublesome. So there is a fan of possibilities when P is troublesome; but, that aside, let us look at c over B non-orthogonal to P, with P regular. What I want to say is that there is some weight-one group lying around and a generic a of that group such that, after expanding by independent parameters, the type of c over M-star a is isolated. In other words, if you want to know what freedom you have, everything is described by which coset of the group, or which generic of G, you choose. But sorry, you have an expansion, an expanded language? Yes. And when you mention the type of c over M-star a, is that in the old language or the new one? In the new one, the expanded language; it is isolated there. And the expansion is simply by adding predicates for non-generic elements, so it is a tame expansion, but it is literally an expanded language. So this says that for something of finite R-infinity rank (here c is a finite tuple, so the R-infinity rank of c over B is bounded) we actually get some M-star-definable weight-one group such that c becomes isolated over M-star together with a realization of a generic type; the freedom we have is which generic type a over M-star we choose. And if N is of weight one, hence an infinite set, then what we will see is that we can form G-star, a projective limit of these various M-definable groups, again in an expanded language, such that N over M-star a-star is atomic. Let me just give a sketch; I know dinner is approaching, so I promise this is the last slide. The inductive step: we have c over M, having picked some additional independent parameters, and we have an a, say, in proving the second condition here, the second fact.
So again, because P is locally modular, we can find, after extending to a suitable M, some a generic in some group G such that forking occurs between c and a; that does not expand the language, that part is just easy. So we have c forking with a generic of some weight-one M-definable group G. Great. Now, if the type of c over Ma is isolated, we are done. If not, then, as we saw, the failure of the construction points to some troublesome type Q off to the side: there is some c' in the algebraic closure of MCa, outside the algebraic closure of a, with c' Q-semi-regular for some troublesome type Q. In other words, there will be a choice of cosets to choose from. But that aside, I can certainly choose some M-definable regular group H and an action of just the connected component H^0 on this strong type. What we want to do, and what we can do with this, is to build a two-level group configuration: we have the bottom group G, which under my assumptions is locally modular, and now this H acting on top; since Q is troublesome it is also locally modular, but the two could well be orthogonal to each other. The idea, and again dinner is in five minutes so I will not go through it in more detail (I am happy to talk afterwards): to get this going I may need to add a sort for the non-generics of H. Well, my talk ends in five minutes, that is all you signed up for, but I can certainly continue afterwards. Okay, so we expand by adding the non-generics. With that, we can actually get this two-level group configuration on top of the previously existing group: we get a generic a-star and a definable surjection of the big group onto the smaller one, so that the generic on top goes down to the generic at the bottom, and the good news is that the R-infinity rank has dropped. Thus, if I loop through this finitely many times, at each stage either c over the new M-star a-star is isolated, in which case I am done, or the rank keeps dropping until it reaches zero, and then c is in the algebraic closure. Why is it H over Ma, or over M? Well, the M: I freely added independent parameters to get into this situation, and in doing this step I now have more; I have gone from M to M-star, so when I re-loop, the M above is replaced by M-star, the pair Ma is replaced by M-star a-star, and the R-infinity rank drops. So I am done. Finally, to roll it back: if N over M has weight one and I am trying to capture all of N, then N is infinite, so I cannot concentrate on a single finite tuple of finite R-infinity rank; but there are only countably many finite sequences. So what I can do is build a larger and larger projective system of these groups, each one definable. In the limit, the G-star here is what is sometimes called a star-definable group: a projective limit of these M-star-definable groups. This M-star a-star is probably a pretty complicated object, a-star being generic for the projective limit, but what it tells us is that, if we want to know N over the original M, the only freedom we have can all be expressed in terms of the groups.
So the number of choices we have for N over M is given by choosing which cosets, or which generics, of the groups along the way. The troublesome types exist, they are still around, but this is a sort of exact answer to what is needed to pass from M up to an N non-orthogonal to a locally modular non-trivial type. Okay, and with that I will stop.
We give (equivalent) friendlier definitions of classifiable theories and strengthen known results about how an independent triple of models can be completed to a model. As well, we characterize when the isomorphism type of a weight-one extension N/M is uniquely determined by the non-orthogonality class of the relevant regular type, and discuss when N is prime over Ma for some finite a∈N. This is part of an ongoing project with Elisabeth Bouscaren, Bradd Hart, and Udi Hrushovski.
10.5446/59332 (DOI)
But I'll talk about it nonetheless. It is actually nothing particularly deep; it is a personal obsession of mine, and it will become clear soon enough why it is a personal one. Let me recall some facts that by now should simply be considered folklore. Let T, and soon enough also T', always be in a countable language, and whenever I say model, I mean countable model, or separable if you are doing continuous logic. One more thing: I am actually going to talk mostly about classical logic and not about continuous logic. This is a weird talk. So let us say that T and T' are aleph-zero categorical. The first thing, which everybody knows, is that to T I can associate a group G(T), namely the automorphism group of its unique model (models are, by definition, countable), taken as a topological group with the topology of pointwise convergence. This is a Polish group: in classical logic a closed subgroup of S-infinity, in continuous logic not, but still a Polish group. This everybody can do. Next, and let me just abbreviate "bi-interpretable" because it is too long to keep writing: if T is bi-interpretable with T', then G(T) is isomorphic, as a topological group, to G(T'). So far this is really general nonsense. The first real point, again a folklore result, is that the converse holds. So, the converse; and maybe call the next one (4), since it is really how you prove (3): you do not prove (3) directly. What you prove is a procedure by which, from G(T), given only as a pure topological group G, you produce a theory, call it T(G), which is bi-interpretable with T. If two theories then have the same associated group, they are bi-interpretable, because both are bi-interpretable with this T(G). And (5): I can actually characterize which groups arise this way; not every group can be realized as G = G(T). People attribute this part maybe to Ahlbrandt and Ziegler, and then claim it appeared long before that; I have absolutely no idea about the history of the thing. It definitely appears in one standard reference, the Ahlbrandt-Ziegler paper. Wasn't it a student of Lascar who proved the reconstruction part? I think it is in Coquand's thesis, with Lascar. And this is as topological groups? Yes, as topological groups. And for continuous logic, I guess some of the first parts were early observations by Tsankov, Rosendal and myself, and the converse is in a paper with Adriane Kaïchouh, a former student of mine. Anyway, let us just consider all of this as standard.
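Before moving on, here is a compact restatement of the folklore facts just listed; the numbering follows the talk only approximately, and the attributions are as discussed above:

(1)\ G(T) := \operatorname{Aut}(M_T), \text{ where } M_T \text{ is the unique countable (separable) model, with the pointwise convergence topology; a Polish group, and in classical logic a closed subgroup of } S_\infty.
(2)\ T \text{ bi-interpretable with } T' \;\Longrightarrow\; G(T) \cong G(T') \text{ as topological groups.}
(3)\ \text{Conversely, } G(T) \cong G(T') \;\Longrightarrow\; T \text{ and } T' \text{ are bi-interpretable.}
(4)\ (3) \text{ is proved via a construction } G \mapsto T(G) \text{ such that } T(G) \text{ is bi-interpretable with } T \text{ whenever } G = G(T).
(5)\ \text{A characterization of which topological groups arise as } G(T).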
For example, stability corresponds to certain dynamical systems attached to G(T) being representable in certain kinds of Banach spaces, and so on. So it's nice, and it's related; that's as close to neo-stability, sorry Anand, as I'm going to get: there is a relation between this observation and stability-like properties of T, of a naturally topological-group kind. Forget it for now; it's just why I'm interested in these things. Of course there is a big drawback: all of this is restricted to aleph-nought categorical theories, whereas these properties are mostly of interest in the general, non-aleph-nought-categorical case. So how do I kill aleph-nought categoricity? Let me first state a result, intentionally vaguely, and then make it precise. I can do items one through four for non-aleph-nought-categorical classical theories, and I'll explain what I mean by that; and I can generalize item five, which leads to obvious questions I will ask at the end, in the case of continuous logic. Item five says exactly this: in the continuous case the group just has to be Roelcke precompact, and in classical logic the group is of the form G = G(T) for an aleph-nought categorical T if and only if it is Roelcke precompact and embeds as a closed subgroup of S_infinity (and if you want finitely many sorts, there is an extra condition). But that is for continuous logic? For continuous logic it has to be a Roelcke precompact Polish group, yes. So the point is that there is a complete characterization, purely in the natural language of topological groups. And once we leave the categorical setting, how do you recover the theory? That is the whole point: you want to be able to do reconstruction, because you want to read properties of the theory off your object; and I say "object" and not "group", because it is not going to be a group. Will you still be talking about countable models in this theorem? I'm not talking about models; I'm saying that to each theory I am going to associate an object, and I haven't said "group". And when the theory is aleph-nought categorical, does your construction reduce to the previous situation? That's something I will come back to at the end, but the answer is essentially yes. OK. So let T be really arbitrary; let's say classical, since at some point I will have to restrict to classical logic anyway. T is a first-order theory in a countable language, not even required to be complete. It doesn't have a unique model, so let's be naive: we might consider all models of T, and all automorphism groups of models of T. Think about it for a moment: it is clear that this family does not carry enough information. All countable models, up to isomorphism?
Yes; when I say all models, I mean all countable models, up to isomorphism, and it doesn't matter anyway, because it's still not enough information. And to answer the question on conventions: yes, today every model is countable. OK. Now you might think: wait a minute, there is some interaction between models, so let's try something better and look at all isomorphisms between pairs of models. Technically speaking, this does not look like it carries much more information than the previous family if you look at each space separately; but once you start putting them together, you realize that this object is a groupoid. So let's think a bit about what it means to be a groupoid. The formal definition most of you probably know: a groupoid is a category in which every morphism is invertible. This is obviously one: the objects are the models, the morphisms are the isomorphisms, and every morphism is invertible. There is a slightly different, purely algebraic way of looking at groupoids, which I find preferable for this kind of approach. The point is that you have objects and morphisms, but, and this is true for a general category, you can identify each object with its identity morphism, viewed as an element of the disjoint union of all the Hom-sets. So in fact all you need to know is that set: a groupoid is just a set G together with a partial composition law and an inverse, satisfying the obvious axioms. In particular, how do I find the set of objects? They are exactly the identities: the base of G, call it B(G), is the set of all e such that e squared is defined and equal to e; in a groupoid, an element is an identity if and only if its square is defined and equal to itself. Given such an object, I can also talk about topological groupoids: a topological groupoid is a groupoid with a topology such that the partial multiplication is continuous on its domain and the inverse is continuous. So it looks like we are getting closer to something that reminds us of the earlier setup. What is the topology on the morphisms? The topology on the isomorphisms from M to N is clear, it's pointwise convergence; the question is how to put a topology on the whole family, and that is exactly the point: the topology should allow you not only to look at each piece separately but to move from one model to another, so you need to put them together. Let me suggest one way to do this. What exactly is the universe of this groupoid? The disjoint union of all the isomorphism sets, yes, the disjoint union of all the Hom-sets. But now let me give you a better definition than this.
One natural choice, rather than looking at all models M and N, is to look at all models whose universe is the natural numbers, or a quotient of the natural numbers; that is the obvious thing to do. So let me define G_0(T); there is a reason for the subscript 0. One way is to look at triples: first, a family of relations on the natural numbers making them into a model of T (some of these relations may be interpreted as equality, so it is really a quotient of the natural numbers); then a second model, say on a disjoint copy of the natural numbers; and finally an isomorphism between them, which you can code as an equivalence relation on N union N' identifying each element on one side with exactly one class on the other, so that the quotient gives you the bijection. So G_0(T) is the set of all such triples, with the natural topology. But let me give you an equivalent definition, which I find much easier to work with. I look at all types tp(a, b) of pairs of infinite tuples, infinite sequences, such that there exists a model M of T which both a and b enumerate. Two enumerations of the same model; they could even be the same enumeration, in which case we get exactly an identity. Is it clear how to compose these things? I want to compose tp(a, b) with tp(c, d); I can only do that when the type of b and the type of c are the same. In that case I write tp(a, b) times tp(c, d) as tp(a, b) times tp(b, d'), where d' is chosen so that tp(b, d') equals tp(c, d), and there is only one way to do that; the product is then just tp(a, d'). The type here means the type in the model enumerated by the tuples, or in a monster model; it does not matter, it is the same. And this is a topological space: G_0(T) sits inside the type space in 2 x N variables, so there is an obvious topology on it, and it agrees with the natural topology you would put on the space of triples. I really find this presentation more convenient to work with. What is the base of this groupoid? The base, call it B_0(T), is just the set of all types tp(a, a), which I will usually identify with tp(a), where a enumerates some model of T.
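To keep the definition in one place, here is a minimal sketch in LaTeX notation of the groupoid just described; the composition rule follows the talk, and the displayed phrasing is a paraphrase rather than a verbatim statement.
G_0(T) = \{\ \mathrm{tp}(a,b) : a, b \text{ enumerate the same countable model of } T \ \}
\mathrm{tp}(a,b)\cdot \mathrm{tp}(c,d) \text{ is defined iff } \mathrm{tp}(b)=\mathrm{tp}(c); \text{ choosing } d' \text{ with } \mathrm{tp}(b,d')=\mathrm{tp}(c,d), \text{ one sets}
\mathrm{tp}(a,b)\cdot \mathrm{tp}(c,d) := \mathrm{tp}(a,d')
B_0(T) = \{\ \mathrm{tp}(a,a) : a \text{ enumerates a model of } T \ \} \cong \{\ \mathrm{tp}(a) : a \text{ enumerates a model of } T \ \}
As stated in the talk, the product does not depend on the choice of d'.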
A bit of terminology. If g is an element of a groupoid G (any groupoid), I write s(g) = g^{-1} g for the source of g and t(g) = g g^{-1} for the target of g; both are identities, so they lie in the base. If you think of g as a morphism, it goes from its source to its target, and the product g f is defined if and only if the target of f is equal to the source of g. This follows from the axioms I did not write down, but it is the usual presentation of groupoids: f is a map from s(f) to t(f), g is a map from s(g) to t(g), and they compose exactly when t(f) = s(g). In our case, the source of tp(a, b) is tp(b) and the target is tp(a); in functional notation such an element goes from the model enumerated by b to the model enumerated by a. You have to choose a direction, and with the other convention everything works the same way. The tuples a and b are always of length omega, and enumerations need not be injective; they can have repetitions, and soon enough I will actually want repetitions, because it is more convenient. One remark: if a enumerates a model M and e = tp(a) is the corresponding element of the base, then e G_0(T) e, the set of all g with s(g) = t(g) = e, is exactly Aut(M) as a topological group. But the point is that I also have the information linking the automorphism groups of different models: if e = tp(a) with a enumerating M, and e' = tp(b) with b enumerating N, then e G_0(T) e' is the set of isomorphisms (sorry, Anand, isomorphisms, not isometries) from N to M, and the topology is what allows you to link them. OK. So now the question: can we recover T from G_0(T)? And the answer, as far as I know, is no; there are actually two obstacles here. Isn't this just the type-space functor, which already determines T up to bi-interpretability? No: the type-space functor knows the sorts, and I want an object that does not even know the sorts; and I am going to get something like that.
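A short LaTeX summary of the source and target bookkeeping just described; the notation s, t is the speaker's, and the display is a paraphrase.
s(g) = g^{-1}g, \qquad t(g) = g g^{-1} \in B_0(T), \qquad g\cdot f \text{ defined} \iff t(f) = s(g)
s(\mathrm{tp}(a,b)) = \mathrm{tp}(b), \qquad t(\mathrm{tp}(a,b)) = \mathrm{tp}(a)
\text{for } e = \mathrm{tp}(a) \text{ with } a \text{ enumerating } M: \quad e\cdot G_0(T)\cdot e \ \cong\ \mathrm{Aut}(M) \text{ as topological groups}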
So let's see the obstacles. The first obstacle: if T is bi-interpretable with T', that definitely does not imply that G_0(T) is isomorphic, as a topological groupoid, to G_0(T'); add an interpretable sort and you change everything. They are going to be equivalent up to something, and at first I thought that would be fine. But then you run into a worse obstacle, which is that B_0 (this is why I keep the subscript 0) is not compact. Once you start doing the actual positive work you see very quickly where compactness is needed; if you try to do any reconstruction, you hit your head against this obstacle pretty quickly. It is not a compact space: it is a collection of types of enumerations of models. So let's first try to remedy the second obstacle, and then miracles start happening. How do we remedy it? Choose an enumeration of formulas phi_n, where phi_n is a formula in n+1 singleton variables, and the enumeration should be as rich as possible; I am not going to write down exactly what I mean by that, but basically anything you could want to happen should happen at some point: any richness property of the sequence you want, you've got it. We are in a countable language, so this is fine. Now let me write two kinds of formulas. First, phi'_n, again in n+1 variables, all of which I now call x; it is a Henkin-style condition: if there exists y such that phi_n(x_0, ..., x_{n-1}, y), then phi_n(x_0, ..., x_{n-1}, x_n). In other words, if there is a witness, then x_n is one. Second, phi''_n(x_0, ..., x_{n-1}), the conjunction of phi'_i for all i < n. And let me also define a partial type phi'', without an index, in infinitely many variables: the infinite conjunction of all the phi'_n. You are describing an enumeration of a model, but in a planned-ahead fashion; it is like fixing the enumeration pattern in advance, the most naive way of making the enumerations of models type-definable. So define D_phi to be the set of all infinite tuples satisfying phi'', and its truncations D_{phi,n}, the set of all n-tuples satisfying phi''_n. So D_{phi,n} is a definable set in n-space, and D_phi is a type-definable set of infinite tuples.
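A minimal LaTeX sketch of the Henkin-style scheme just described; the indexing follows the talk, and the appeal to a Tarski-Vaught type argument in the last remark is an added gloss rather than something stated explicitly in the talk.
\varphi'_n(x_0,\dots,x_n) := \big(\exists y\ \varphi_n(x_0,\dots,x_{n-1},y)\big) \rightarrow \varphi_n(x_0,\dots,x_{n-1},x_n)
\varphi''_n(x_0,\dots,x_{n-1}) := \bigwedge_{i<n} \varphi'_i, \qquad \varphi'' := \bigwedge_{n<\omega} \varphi'_n \quad (\text{a partial type in } \omega \text{ variables})
D_\varphi := \{\ a : a \models \varphi''\ \}, \qquad D_{\varphi,n} := \{\ a_{<n} : \ \models \varphi''_n(a_{<n})\ \}
By richness of the sequence (phi_n), any realization of phi'' enumerates the universe of a model of T, presumably by a Tarski-Vaught type argument.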
Actually, D_phi behaves like a definable set, even though it is defined by a partial type; it is what Hrushovski and Loeser call strict pro-definable, which is exactly the same as saying that, viewed as a metric object on infinite tuples, it is definable in the sense of continuous logic. But never mind. The point is that any tuple in D_{phi,n} can be extended to a tuple in D_phi, because you just keep adding witnesses; moreover, any tuple in D_{phi,n} lying in a given countable model can be extended to an element of D_phi which enumerates that model, because there are infinitely many positions where there is no constraint on which element you put in. Sometimes you have to put in a witness, and sometimes the formula imposes no constraint, so you can make sure you enumerate the whole model by putting arbitrary elements of the model at the unconstrained positions. I am not going to use this explicitly in the talk, but it plays an important role. Now let S^m_phi(T) be the set of types of m-tuples from D_phi, that is, of m sequences each of which enumerates a model in that particular planned way. This is compact: it is just a space of types in this sort. And now I define G_phi(T) = G_0(T) intersected with S^2_phi(T), that is, all types tp(a, b) such that a and b are both in D_phi and enumerate the same set, which is necessarily a model by the discussion above. So it is a subset of G_0(T). Where does T come in? It comes in right here: I look at tuples satisfying phi'' which live in a model of T. And the base, B_phi(T), is just the set of types tp(a) with a in D_phi, in other words S^1_phi(T), so it is compact. (Enumerate the same set: yes, the same underlying set.) So I have solved my problem: the base is compact. Cool, but now I have a different problem, because I chose this phi. Call it obstacle number three; I got one foot out of the mud and put the other foot in. Suppose I have another rich enumeration psi: what is the relation between G_phi(T) and G_psi(T)? Ideally you would want them to be isomorphic as topological groupoids, and they actually are. This is where the miracles start happening. Why? Because you know in advance, in each case, what is going to happen at each position, so you can plan ahead: given phi and psi, I know in advance how to skip positions so that reading only those indices of a tuple satisfying phi'' produces a tuple satisfying psi'' which enumerates the same set, and vice versa, and this allows you to prove the isomorphism.
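For reference, the objects just introduced, written in LaTeX; psi denotes a second rich enumeration of formulas, and the phrasing is a paraphrase.
S^m_\varphi(T) := \{\ \mathrm{tp}(a^1,\dots,a^m) : a^1,\dots,a^m \in D_\varphi\ \} \quad (\text{compact})
G_\varphi(T) := G_0(T) \cap S^2_\varphi(T) = \{\ \mathrm{tp}(a,b) : a,b \in D_\varphi \text{ enumerate the same model}\ \}
B_\varphi(T) := S^1_\varphi(T) \quad (\text{compact}); \qquad G_\psi(T), B_\psi(T) \text{ are defined in the same way from } \psi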
Sorry, I don't quite understand: what are phi and psi? Phi is the enumeration (phi_n) and psi is another such enumeration (psi_n), as above. And rich means that every formula you can write with a single y and several x's appears infinitely often, with dummy variables, in the sequence; the obvious richness property. Is there at least a homeomorphism between the bases? Yes, although purely topologically the base is just a Cantor space (at least if T has no one-element model, and even that can be ignored), so of course the bases are homeomorphic; the point is that I have a more intelligent homeomorphism than that, namely the one coming from the fact that from an enumeration satisfying phi'' I can, by planned skipping, produce an enumeration satisfying psi'' of the same set, and the other way around, so I can really move from one to the other. And once you understand how that proof goes, you realize you can also prove the following: if T is bi-interpretable with T', then G(T), and I can now drop the phi from the notation, is isomorphic to G(T'). Basically, adding a sort changes nothing, and being bi-interpretable just means that after adding some sorts on this side and some sorts on that side you get exactly the same theory, up to a choice of language. Does the formalism also work for many sorts? Yes, the same thing works for many sorts: you have to choose sorts for the x's, and if you do it intelligently it works; I do not want to get into the technical details, but there is an obvious generalization to the multi-sorted case. I should have said that. OK, so obstacle one and obstacle two are dealt with. But this is not yet reconstruction; I still need the other direction, and I only have ten minutes left, so I will do it very briefly. For any phi? Yes, as a pure topological groupoid. So let me try to give you an idea of how the recovery goes. I mentioned this in Paris maybe a year ago, but a year ago I stopped there, and now I can go a bit further. We said that G, and I will just write G for G_phi(T), is a subset of S^2_phi(T), and it is dense there: S^2_phi(T) consists of all pairs of tuples satisfying phi'', and G consists of those pairs which moreover enumerate the same set; density is easy to check. Now, recovering S^2_phi(T) from G basically means recovering the family of traces on G of its clopen subsets. Sorry, a question about this definition of yours, the second point, the one in green. Sorry, where are we?
The second point, the one in green: these a-bar and c-bar, do they have to be isomorphic, or have the same type? No, they do not have to have the same type. That is fine, because it is a groupoid: each type of enumeration is an object of the category, and such an element is a morphism from one object to a distinct object, which is perfectly legitimate. OK, so let's do it like this. Recovering the space S^2_phi(T) basically amounts to recovering the set of all its clopen subsets, namely their traces on G: I want to find which subsets of G are defined by a formula. First proposition: this family is equal to the set of all Y contained in G which are clopen and such that there exists some U in G, a neighborhood of the base B, with U Y U = Y. One direction should be obvious: if Y is defined by a formula, the formula depends on only finitely many coordinates on either side, say at most n; take U to be the set defined by requiring the first n x's to equal the first n y's. That contains the base, it is clopen, hence open, so it is a neighborhood of the base; and because Y only depends on the first n coordinates, multiplying on either side by U changes nothing, since it only changes things at later coordinates. The converse is what has to be shown, and I do not have time for it. Corollary: a formula E(x, y) defines an equivalence relation on D_phi if and only if it defines in G a clopen subgroupoid H contained in G which contains the base. Again, one direction is pretty clear: a formula defining an equivalence relation obviously defines a clopen set, and it is easy to see that it is a subgroupoid. For the converse, you first want to know that H is a definable set, and that comes from the proposition, because H is a neighborhood of the base and H = H H H; the rest follows from the previous argument. Oh, right, and I promised earlier to say why I really, really wanted B to be compact: it is for this to make sense. I want the condition of U being a neighborhood of B to be significant, and it is only really significant when B is compact; that is how you run into this requirement. I am not going to have enough time to describe how you get the whole language: I told you how to get formulas in two tuples of arguments, and it is pretty much the same to get formulas in n arguments. Let me just state two things. First: look at D_phi divided by such an equivalence relation E, and assume both conditions of the corollary hold. If E only looks at the first n variables, then D_phi over E is the same thing as D_{phi,n} over E: you just ignore the dummy variables. In particular this quotient is the same computed in any model, and I can actually recover it in my groupoid.
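A compact LaTeX restatement of the proposition and the corollary just given; B denotes the base B_phi(T), and the phrasing is a paraphrase.
\{\ Y \subseteq G : Y \text{ is the trace on } G \text{ of a clopen subset of } S^2_\varphi(T)\ \}
\quad = \quad \{\ Y \subseteq G \text{ clopen} : U\cdot Y\cdot U = Y \text{ for some neighborhood } U \supseteq B\ \}
\{\ \text{formulas } E(x,y) \text{ defining an equivalence relation on } D_\varphi\ \} \ \longleftrightarrow\ \{\ \text{clopen subgroupoids } H \subseteq G \text{ with } B \subseteq H\ \}
\text{(and such an } H \text{ satisfies } H = H\cdot H\cdot H, \text{ which gives its definability).}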
Concretely, if M is a model enumerated by a tuple a-bar whose type is some e in the base, then this quotient is in natural bijection with the set of all cosets H g for g with source e; let me call this sort Ge/H. So in any model I can recover this sort inside the groupoid. Now I can define a structure M_e consisting of all the sorts Ge/H, for H ranging over the clopen subgroupoids; and these are exactly the sorts which are interpretable in T, computed in the model M. I did not tell you what language I put on it, because I do not have time, but I can recover the language pretty much the way I did here: rather than saying that a set is invariant under a neighborhood U of the base, I say that it is invariant under H, and I have to allow several H's rather than just two; but it can be done. So you recover a language L(G), and then T(G) is just the theory, in this language L(G), of the class of all the structures M_e, for e in the base B. And up to isomorphism, and up to a uniform change of language, each countable model of T appears among the M_e. So T(G) is bi-interpretable with T, and that gives you the other direction. Just one last remark, about what happens, I think it was Anand who asked, if T is aleph-nought categorical. In that case, by the same techniques that I used earlier to compare G_phi and G_psi and so on, one shows that G(T) is isomorphic to the Cantor space times G_0 times the Cantor space, where G_0 is the automorphism group associated with T, and where (x, g, y) times (y', f, z) is defined only if y = y', in which case it equals (x, g f, z). So for all intents and purposes it is as close as possible to being just a group and not a groupoid. Any questions? The setting where you work is without equality, right? No, I can have equality; in each quotient sort there is an equality relation.
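A minimal LaTeX sketch of this closing remark; the identification of the base with the Cantor space uses the mild assumption, mentioned earlier in the talk, that T has no one-element model, and the phrasing is a paraphrase.
\text{If } T \text{ is } \aleph_0\text{-categorical, } M \text{ its countable model and } G_0 = \mathrm{Aut}(M), \text{ then}
G(T) \ \cong\ 2^{\mathbb N} \times G_0 \times 2^{\mathbb N}, \qquad (x,g,y)\cdot(y',f,z) \text{ defined} \iff y = y', \text{ in which case it equals } (x, g f, z),
\text{so up to groupoid equivalence this is just the group } G_0 \text{ of the classical reconstruction.}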
It is by now almost folklore that if T is a countably categorical theory, and M its unique countable model, then the topological group G(T) = Aut(M) is a complete invariant for the bi-interpretability class of T . This gained renewed interest recently, given the correspondences between dynamical properties of G(T) and classification-theoretic properties of T . From a model-theoretic point of view, the obvious drawback is the restriction to countably categorical theories. As a first step, I will discuss how to generalise the original result to arbitrary theories in a countable language.
10.5446/59333 (DOI)
I will talk, of course, about PRC fields, like always, but this time about definable groups with f-generics in PRC fields. This is joint work with Alf Onshuus and with Pierre Simon. The idea is that I will first say a little about what a PRC field is, for the people who do not know, and then explain the principal tools that we use to describe the definable groups in the particular case where they have f-generics. So, what is a PRC field? PRC fields are a generalization of pseudo-algebraically closed fields, PAC fields. Just to recall: a PAC field is a field which is existentially closed, in the language of rings, in each regular field extension of M. I am only interested in characteristic zero in this talk, and in characteristic zero a regular extension just means this: if I have two fields M inside N of characteristic zero, then N is a regular extension of M if the algebraic closure of M intersected with N is equal to M. In particular, a PAC field cannot have orders. I want to understand definable groups in pseudo-real closed fields, so it is a good idea to recall what happens with definable groups in pseudo-algebraically closed fields. Some examples first: algebraically closed fields and pseudo-finite fields are PAC. What happens with definable groups in bounded PAC fields? For that I need a definition: a virtual isogeny is an isogeny between finite-index subgroups of G and H. Hrushovski and Pillay showed that a definable group in a bounded PAC field is really similar to an algebraic group, in the following sense: there is an algebraic group H, definable over M, and a definable virtual isogeny between G and H. The idea is to see whether we can obtain something similar in the case of PRC fields. So what is a PRC field? The idea is to generalize the notion of PAC field to fields which may have orders. A PRC field is a field of characteristic zero which is existentially closed, always in the language of rings, in each regular field extension to which all orders of M extend. I need the hypothesis that all the orders extend, to be sure that the orders do not create obstructions. That is the model-theoretic version; there is also an algebraic version, which says that M is PRC if every absolutely irreducible variety definable over M which has a point in each real closure of M has a point in M. Can I ask a question: is there also a notion of an ordered PAC field, where you fix an order? Yes, there is another notion. The structure here is just a field; there is also a natural notion of an ordered field being PRC, which would be a 1-PRC field, a PRC field with one distinguished order, but here it is just a PRC field, and as we will see the orders are always definable. So a PRC field may have several orders? Yes, and I will come back to how many in a couple of minutes; it is not always the same.
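A minimal LaTeX restatement of the two definitions just given; the algebraic criterion is the one the speaker alludes to, with "absolutely irreducible" taken as the intended reading, and the phrasing is a paraphrase.
M \text{ is PAC} \iff M \text{ is existentially closed, in the language of rings, in every regular extension } N \supseteq M
\text{(in characteristic } 0: \ N/M \text{ regular} \iff \widetilde{M} \cap N = M, \text{ where } \widetilde{M} \text{ is the algebraic closure of } M)
M \text{ is PRC} \iff M \text{ is existentially closed in every regular extension to which all orders of } M \text{ extend}
\iff \text{every absolutely irreducible variety } V \text{ over } M \text{ with } V(R) \neq \emptyset \text{ for each real closure } R \text{ of } M \text{ satisfies } V(M) \neq \emptyset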
So the idea is this: a PRC field is just a field, and I do not say how many orders it has, or whether it has any; it is a field which is existentially closed in each regular field extension to which all of its orders, however many there are, can be extended. In particular, if there are no orders, this coincides with the definition of a PAC field. Prestel showed that this class is axiomatizable in the language of rings. Some examples: PAC fields of characteristic zero are of course PRC, because the order condition is vacuous; real closed fields are PRC; and we have, for example, the maximal totally real extension of the rationals. In fact there are a lot of PRC fields: for any field, with any orders on it, you can find a regular extension which is a PRC field. Now, what do we know beyond the real closed fields? I will work only with bounded PRC fields, and this is maybe related to the earlier question. Bounded means that there are not many algebraic extensions: for each n, there are only finitely many extensions of degree n. So I fix a bounded PRC field K and I will take elementary extensions of it; for technical reasons I work in the language of rings with a lot of constants, because I need the constants to get at the algebraic extensions. If the field is bounded, then in particular it cannot have infinitely many orders, so it has finitely many, say n orders, for the K that I fixed at the beginning. Now, one can prove that, with enough constants, each order is definable inside the field, in the language of rings with the constants, by an existential formula. So in our case all the orders are definable, and in fact definable by existential formulas; but we need the constants for that. The theory I will work with is the complete theory of K in this language: rings plus enough constants, constants for the elements of a fixed substructure, just enough to reach the algebraic extensions, and with that I can define the orders in any model. You always assume there is at least one order? Yes: I fix a bounded PRC field with exactly n orders, and n is not zero, because otherwise we are in the PAC case; so we have at least one order. In this theory the algebraic closure is really nice: it coincides with the algebraic notion from field theory. We also have a geometric structure with a good notion of dimension, so everything is reasonably fine, and forking is also good. Any model of this theory has exactly n orders, and for each order we fix a real closure; I will use these to say when the type of a over Ab does not fork over A.
The superscript i that I write means that I am speaking about the type in the i-th real closure: forking in pseudo-real closed fields really depends on forking in each fixed real closure. What we have, and this was the theory of my thesis, is that forking and dividing are equal, and that a type does not fork over A if and only if the corresponding type does not fork over A in each real closure. We also know, also from my thesis, that the theory is not too bad: the complete theory of a PRC field is NTP2 if and only if the field is bounded. So we are working in an NTP2 theory, and this is why I will also speak a little about definable groups in NTP2 theories in general. Now let us try to understand a little what happens with the definable sets in PRC fields. We do not have a nice explicit quantifier elimination, but in some way we can understand the definable sets. Take a model of our theory, with all its orders; each order gives a topology, and Prestel showed that in a PRC field different orders define different topologies. We define a notion of multi-cell. Remember that in a real closed field you have cells, and every definable set is a finite union of cells. Here, a multi-cell is the intersection, taken inside M, of cells in the real closures, one cell for each real closure, and, importantly, cells of the same type. The endpoints may be outside M, in the real closures. A multi-cell obtained from cells of the same type is always non-empty, because of the approximation theorem: the topologies coming from the different orders are independent, so an intersection of a non-empty open set from each topology is never empty. Are the multi-cells definable? Yes, one can prove that the multi-cells are definable in our field, because we have enough constants; but it is not true that every definable set is a finite union of multi-cells. What is true is that every definable set is dense in a finite union of multi-cells, and I will state that precisely in a moment. Note that in dimension one the multi-cells in M are points or multi-intervals, so I define a topology, which I call tau, the multi-topology, generated by the multi-intervals, and in higher dimensions we take the product topology. And I will call multi-semialgebraic a set which is a finite union of multi-cells.
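Two of the notions just described, written out in LaTeX; the notation R_1, ..., R_n for the fixed real closures is introduced here for convenience, and the phrasing is a paraphrase.
\mathrm{tp}(a/Ab) \text{ does not fork over } A \iff \text{for each } i \le n, \ \mathrm{tp}^{i}(a/Ab) \text{ does not fork over } A \text{ in the real closure } R_i
\text{multi-cell:}\quad C = C_1 \cap \dots \cap C_n \cap M, \quad C_i \text{ a cell of } R_i, \text{ all of the same type (endpoints may lie outside } M)
\tau := \text{the topology generated by the multi-intervals (product topology in higher dimensions)}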
So what do we know about definable sets? They are described, in a density way, by multi-cells. If you take a model, a set of parameters, and a definable set X, then you can find finitely many multi-cells such that: X is contained in the union of the multi-cells; X is dense in each multi-cell for the topology tau coming from all the orders at the same time; and each multi-cell is itself definable in M over the parameters, so we can speak about the multi-cell even if its endpoints are not in the model. So we do not have quantifier elimination, but this is not too bad: we understand the definable sets in a density way, with the topology coming from all the orders. This is already a kind of victory. Now I want to state the theorem, which is rather big, and even bigger on this board than on my computer; I will not prove it, I will only explain the principal ingredients. The idea is that we want to understand the definable groups. I do not understand definable groups in general, but I do understand definable groups with f-generics; I will give the definitions you are missing in a few minutes. The rough statement, under the hypothesis of a strong f-generic, is that my definable group looks, locally, really similar to an algebraic group, and in some way also to a multi-semialgebraic group; remember that the easy sets one can define in our PRC field are the multi-semialgebraic ones. More precisely, one can find a definable subgroup G_1 of finite index in G, a certain finite central subgroup (that part is just technical), and an algebraic group H defined over M, together with a local homomorphism between a generic part of G_1 and a part of H(M) which is open for the topology tau coming from all the orders, a tau-open neighborhood W_1 of the identity of H(M). So in some way a large part of G really looks like a part of the algebraic group. What does local mean? It means a local group homomorphism, defined on a generic part of G_1; it need not be closed under the multiplication. And the other part of the statement is that the quotient of G_1 by the central thing is definably isomorphic to a finite-index subgroup of a multi-semialgebraic group. You said something about a finite-index subset; what exactly is W_1, an open neighborhood of what?
W_1 is a tau-open neighborhood of the identity of the algebraic group; you should simply erase the words finite-index subset there, that was a slip on the board. Is the image generic? The picture is: on one side a big, generic part of G_1, local because it may not be closed under multiplication, and on the other side the tau-open neighborhood W_1 of the identity of H(M), and yes, the image has to be open. And when I say generic, I mean that finitely many translates cover the group; I will always use the word in that sense. OK, we are back on track. And remember the second part of the statement: the quotient of the finite-index subgroup by the finite central subgroup is definably isomorphic to a finite-index subgroup of a multi-semialgebraic group. So, as I said, the proof is long and needs a lot of machinery, and I will only give you the principal ingredients. The first ingredient is a version of the stabilizer theorem, with S1 ideals; this is an idea of Hrushovski. For this, forget PRC fields for five or ten minutes: we work in an arbitrary theory, and I just need to give some definitions, because I will use a new, slightly different version of Hrushovski's stabilizer theorem. So take a model M and an M-definable group G. Take an ideal mu of definable subsets of G which is invariant under left translation by elements of G; we say that a type is mu-wide if it contains no set of mu. Remember that the elements of mu are the small sets. Suppose moreover that mu is A-invariant for some set A. We say that mu has the S1 property if, whenever you have an A-indiscernible sequence (a_i) and a formula phi(x, y), the following holds:
if the conjunction of phi(x, a_i) and phi(x, a_j) is in mu for all i different from j, then phi(x, a_i) is in mu for all i. This is the S1 property. We say that mu has S1 on a definable set X if X is not in the ideal and the property above also holds for all formulas concentrated on X; and mu is S1 on a type if the type is mu-wide and is contained in a definable set on which mu has S1. Does the definition depend on A? We fix the ideal, which is A-invariant, and the sequences are indiscernible over the same A as the ideal; so yes, the property is relative to that A. In any case, we call a definable set medium if mu is S1 when restricted to it, and a type is medium if it concentrates on a medium set. It is not difficult to see that, in particular, if you have a medium type and an extension of it such that tp(a/MB) is wide, then tp(a/MB) does not fork over M. I need a few more definitions. Fix our ideal, invariant under left translation by elements of G and invariant over M. For a wide type p we define St(p) to be the set of elements g of the group such that the intersection of gp and p is wide. Note that, in particular, an element g is in St(p) if and only if there is a realization a of p, with the relevant type wide, such that ga also realizes p; and since the ideal is invariant under left translation, this set is closed under inversion. Now define Stab(p) to be the subgroup generated by St(p). I also need the non-forking product: c = a times b is a non-forking product if a realizes the first type, b realizes the second type, and tp(b/Ma) does not fork over M. These are all the definitions we need to state the new version of the stabilizer theorem that we will use. The proof is really similar to the proof of the stabilizer theorem of Hrushovski, and the statement is also really similar. You fix a model M, with no assumption on the theory, and two invariant ideals mu and lambda; the hypotheses are that mu is S1 on every set in lambda, and that you have a type p which is wide and medium. Then there is a list of further hypotheses, about medium types and about realizations of the types, which are really technical and not important for this talk; the important thing is that, if you have all that, you can construct the stabilizer: Stab(p) is a connected, type-definable, wide and medium group, and in particular it is equal to St(p) squared, which is (p^{-1} p) squared. Connected? Yes, connected.
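A LaTeX sketch of the key definitions and of the conclusion just stated; mu is the fixed invariant ideal, wide means mu-wide, and the phrasing is a paraphrase.
\text{S1: for every } A\text{-indiscernible } (a_i) \text{ and formula } \varphi(x,y): \ \varphi(x,a_i)\wedge\varphi(x,a_j) \in \mu \ (i\neq j) \ \Longrightarrow\ \varphi(x,a_i)\in\mu \ (\forall i)
\mathrm{St}(p) := \{\ g\in G : gp \cap p \text{ is wide}\ \}, \qquad \mathrm{Stab}(p) := \langle\, \mathrm{St}(p)\,\rangle
c = a\cdot b \text{ non-forking product}: \ a\models p,\ b\models q,\ \mathrm{tp}(b/Ma) \text{ does not fork over } M
\text{Conclusion (as stated): } \mathrm{Stab}(p) = \mathrm{St}(p)^2 = (p^{-1}p)^2 \text{ is connected, type-definable, wide and medium.}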
Another important point about the theorem is that Stab(p) is contained in a small set, because it is contained in the type-definable set (p^{-1} p) squared. So that is the first ingredient. Of course, I want to use this theorem in my particular setting, the PRC fields, and to do that I need to produce a type-definable group inside something; that is what the second ingredient, which we call the algebraic group chunk theorem, is for. What is the idea? First, a definition that maybe you already know: a theory T which contains the theory of fields, in a language containing the language of rings, is algebraically bounded if for each formula you can find finitely many polynomials such that, for every model of T and every tuple of parameters in the model, if the set defined by the formula is finite, then it is contained in the set of roots of one of the polynomials evaluated at those parameters. The algebraic group chunk theorem is not special to PRC fields: the hypotheses are only that T is a theory containing the theory of fields, in a language containing the language of rings, that T is algebraically bounded, and that the algebraic closure is nice, meaning that in any model of T the model-theoretic algebraic and definable closures agree with the relative algebraic closure in the field-theoretic sense. There was some discussion about how exactly to phrase this; the point is simply that the algebraic closure should be nice, and if you have a good notion of dimension which coincides with the algebraic one, as in a geometric field, then this is true, and that is all I want. Can I continue? Because otherwise I will not finish. So: geometric fields satisfy this, and now take a model M of T which is omega-saturated and a group G definable in M.
Recall that in a PRC field the model-theoretic algebraic closure is equal to the algebraic closure in the sense of field theory. So this theorem can be used for any geometric field, and in particular for PRC fields, which is the case I am interested in; but we do need algebraic boundedness. So, with M omega-saturated and G definable in M, suppose that T admits an invariant ideal mu_G on G, stable under left and right translation, such that mu_G is S1 on G; and suppose also that you can find a wide type p and realizations a, b of it such that tp(b/Ma) does not fork over M. If you have that, then you can find an algebraic group H and a definable finite-to-one group homomorphism from a type-definable wide subgroup D of G to the algebraic group. To prove this theorem we really need to use the stabilizer theorem. The idea is something really similar to the group configuration: take c, the product of a and b, and do exactly the same kind of argument as in the group configuration theorem to find the algebraic group H, together with elements a', b', c' which are generic in H in the sense of dimension and interalgebraic with a, b, c respectively. Remember that in my hypotheses I have an ideal on G; I extend it to an ideal on the product G times H, by declaring a set to be in mu if and only if its projection to G is in mu_G. Then I take lambda to be the ideal of sets whose projections to G and to H have finite fibers, and I take P to be the type of (a, a'). One then has to do a lot of work, and I am only giving you the sketch: the idea is that we prove that all of this satisfies the hypotheses of the stabilizer theorem. So we use that theorem, and we find a connected, type-definable, wide and medium group K inside G times H. The only thing left to prove is that, in fact, one may assume that the first projection is injective on K. A corollary of this, which has nothing to do with PRC fields but which is nice, and which we think was not known before, though I am not sure: if you have a real closed field and a torsion-free group G definable in it, then G is definably isomorphic to a definable subgroup of an algebraic group. This is just an easy application of the theorem.
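A LaTeX paraphrase of the group chunk statement and the corollary, as just described; the hypotheses on T are the ones listed above, stated informally.
\text{Assume } T \supseteq \text{Th(fields) is algebraically bounded, acl agrees with relative field-theoretic algebraic closure,}
M \models T \text{ is } \omega\text{-saturated, } G \text{ is definable, } \mu_G \text{ is a translation-invariant ideal which is S1 on } G,
\text{and } p \text{ is wide with } a, b \models p \text{ and } \mathrm{tp}(b/Ma) \text{ non-forking over } M. \text{ Then there exist an algebraic group } H
\text{and a definable finite-to-one homomorphism } D \to H \text{ for some wide type-definable subgroup } D \le G.
\text{Corollary: a torsion-free group definable in a real closed field is definably isomorphic to a definable subgroup of an algebraic group.}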
Now the third ingredient; there are more ingredients, but I will only speak about these three. It concerns groups with f-generics in NTP2. Here the setting is an arbitrary theory, so let me just give the definitions. If you have an M-definable group, a definable subset is generic if finitely many translates of it cover the group. A type-definable subgroup has bounded index if, for some (equivalently any) saturated elementary extension, the cardinality of the quotient is less than the cardinality of the extension. And G^00_M is the smallest type-definable over M subgroup of bounded index. If you have a theory, a model of it and a definable group, a global type p is f-generic over A if for every element g of G the translate g.p does not fork over A. Those are all the definitions.

Now the theorem, in NTP2 and in particular for our theory of PRC fields. Take mu_M to be the ideal of non-f-generic M-definable sets, assume that G is a definable group which has a strong f-generic, and take p an f-generic type. Then G^00_M equals St_mu(p); I put the ideal in the notation because the definition of St depends on the ideal, so this is just to record which ideal I am working with. In particular St_mu(p) is contained in a union of non-wide M-definable sets, so in particular it is small.

The last ingredient I want to speak about is specific to PRC: type-definable subgroups of algebraic groups. Take M a model of PRC, H an algebraic group definable in M, and K a type-definable subgroup of H(M). Remember that in a PRC field we have the topology coming from the multi-orders; let L be the closure of K for this multi-topology, the closure for all the orders at the same time. Then K has bounded index in L, and L/K with the logic topology is profinite. To prove this we need to work a little with multi-externally definable sets. Recall that in the Shelah expansion one adds predicates for all externally definable sets; here we do something similar, but not for all externally definable sets, only for the externally definable multi-cells, that is, multi-cells which are externally definable. We also add predicates for the definable sets, because we do not have quantifier elimination, and with that we obtain a structure M^N, where the predicates are interpreted in the natural way, which has elimination of quantifiers and is again NTP2. This is the principal tool used to prove this theorem.

So now we have all the ingredients we need. For the general case of PRC fields the idea is: use the stabilizer theorem to show that we can apply the algebraic group chunk theorem, and at the end use this last theorem, with the logic topology and its profinite quotient, to find the multi-semi-algebraic group that I need. These are the principal tools.
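For reference, the standard definitions just used, written out as I understand them; the talk only states them informally.

\[
H \le G \text{ type-definable has bounded index if } |G/H| \text{ is small, i.e. strictly less than the degree of saturation of the ambient model;}
\]
\[
G^{00}_M := \text{the smallest subgroup of } G \text{ which is type-definable over } M \text{ and has bounded index;}
\]
\[
p \in S_G(\mathfrak{C}) \text{ is f-generic over } A \text{ if } g\cdot p \text{ does not fork over } A \text{ for every } g \in G(\mathfrak{C}).
\]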
Something I want to say is that the general case is still an open question: here we describe what happens for definable groups with f-generics, so it is still open what happens for definable groups in general. That is work in progress; we have some progress, in particular for some simple groups, but we are not really sure. There is also another open question, which I think may be easier: to check that everything we did also works for the class of pPC fields, the pseudo p-adically closed fields, which are like PRC fields except that instead of orders you have p-adic valuations. It is probably similar, but that is still in progress too. Thank you.
In this talk we focus on groups with f-generic types definable in NTP2 theories. In particular we study the case of bounded PRC fields. PRC fields were introduced by Prestel and Basarab as a generalization of real closed fields and pseudo algebraically closed fields, where we admit having several orders. We know that the complete theory of a bounded PRC field is NTP2 and we have a good description of forking. We use some alternative versions of Hrushovski's "Stabilizer Theorem" to describe the definable groups with f-generics in PRC fields. The main theorem is that such a group is isogenous to a finite index subgroup of a quantifier-free definable group. In fact, the latter group admits a definable covering by multi-cells on which the group operation is algebraic. This generalizes similar results proved by Hrushovski and Pillay for (not necessarily f-generic) groups definable in both pseudofinite fields and real closed fields.
10.5446/59335 (DOI)
Nice to be back, and thank you to the organizers. I also want to make a comment before I get started: most of this work began during my fourth year of graduate school. I spent a week visiting Zoé at the ENS and we spent many hours in her office each day, so even though I am presenting this, many of the ideas came from Zoé, and I wanted to acknowledge that.

The plan is to talk about classification theory and the construction of PAC fields. Most of the time in model theory we find fields; this talk is going to be about building them. Here is the map, Gabe's gift to the model theory community, and the question we often ask in model theory is which parts of this map have a field in them, and if a region has a field in it, what kind of field can it be. Many of the hardest conjectures in model theory are about exactly this question: the stable fields conjecture, the supersimple fields conjecture, and so on. I am interested in building counterexamples and finding new kinds of fields.

So here is the SOP_n hierarchy, just one part of the map, maybe not the most interesting part, but it is there. We say T has SOP_n if there is a type p(x, y), or a formula if you like, and an indiscernible sequence which is ordered by it, so that p(a_i, a_j) holds if and only if i < j, and such that the cyclic union p(x_0, x_1), p(x_1, x_2), ..., p(x_{n-2}, x_{n-1}), and then looping back, p(x_{n-1}, x_0), is inconsistent. The idea is that this is an instance of the order property, and you want to measure how close it gets to being an instance of the strict order property: as n gets higher you get closer and closer to transitivity, because transitivity says exactly that you can have no loop of any size, while this only forbids small loops. It does not matter whether you use a complete type or a formula: a complete type reduces to a formula by compactness, so sometimes the type formulation is convenient and other times, for compactness arguments, the formula one is. I should also have said that n is at least three here; for n equal to one and two it is defined differently, which is something I will mention many times in the talk. There will be a few theorems that say something for SOP_n, and when I say that without qualification I mean it for one, two, and n at least three, though the argument has to be different in each of those cases. This is the case n at least three; I am not going to define SOP_1, since that was already done in an earlier talk, and I am not going to define SOP_2, because we do not care.

Here is the theorem; I will state it first and then spend a lot of time on the ingredients. The theorem is that the SOP_n hierarchy is strict in fields. Precisely: for every n greater than or equal to three there is a field which is SOP_n and NSOP_{n+1}, and in particular it is a PAC field. So the SOP_n hierarchy is strict not only in fields but in PAC fields, in fact in PAC fields of characteristic zero.
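For reference, the SOP_n condition described above (for n at least three), written out; this is the formulation used in the talk, and other equivalent formulations appear in the literature.

\[
\textbf{Definition.}\ \text{For } n \ge 3,\ T \text{ has } \mathrm{SOP}_n \text{ if there are a formula } \varphi(\bar x,\bar y) \text{ and an indiscernible sequence } (\bar a_i)_{i<\omega} \text{ with}
\]
\[
\models \varphi(\bar a_i,\bar a_j) \iff i<j, \qquad \text{and} \qquad \{\varphi(\bar x_0,\bar x_1),\ \varphi(\bar x_1,\bar x_2),\ \dots,\ \varphi(\bar x_{n-2},\bar x_{n-1}),\ \varphi(\bar x_{n-1},\bar x_0)\} \text{ inconsistent.}
\]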
Moreover, there is an open problem: we do not know whether SOP_1 and SOP_2 are different, or whether SOP_2 and SOP_3 are different, but if they are different, that will be witnessed by a field. So maybe someone can prove something by coding them into fields; I am very doubtful, but okay. That is the theorem.

Now the theorem is nice, but in truth I think it is much less interesting than the construction. The whole goal of the project is that we have constructions for coding combinatorial objects into other kinds of structures. You have something like the Mekler construction, which takes an arbitrary graph and produces a group which is nilpotent of class two, and many of the model-theoretic properties of the graph are reflected in the group. What I am interested in is an analogous construction for fields: I take a graph, I build a field out of it, and I want to know which model-theoretic properties are preserved in passing from the graph to the field. The SOP_n hierarchy is just a test case; I suspect there are many other model-theoretic properties for which this construction will be germane. And these are pure fields in the language of rings; it is not so hard to find an expansion of a field with such properties, you just put a graph on top, but here everything is in the language of rings.

Why do we care about PAC fields? The starting point is Ax's 1968 paper, "The elementary theory of finite fields", where he characterizes the pseudofinite fields by conditions stated in an algebraic vocabulary. He shows that a field is an infinite model of the theory of finite fields if and only if it satisfies three conditions. The first is that it is perfect, which just says that if the characteristic is p then F^p equals F. The second is that it has absolute Galois group Z-hat. The last one, the missing algebraic ingredient, says that F is pseudo-algebraically closed: whenever you take an absolutely irreducible variety defined over the field, it has a point in the field. Algebraically closed fields are pseudo-algebraically closed by the Nullstellensatz, but there are lots more, for example ultraproducts of finite fields.
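Ax's characterization just quoted, gathered in one place; this is the standard statement.

\[
F \text{ is pseudofinite (an infinite model of the theory of finite fields)} \iff
\begin{cases}
F \text{ is perfect},\\
\mathrm{Gal}(F) \cong \widehat{\mathbb{Z}} \ \ (\text{equivalently, } F \text{ has exactly one extension of each degree}),\\
F \text{ is PAC: every absolutely irreducible variety defined over } F \text{ has an } F\text{-rational point.}
\end{cases}
\]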
It became interesting within model theory to ask how complicated pseudofinite fields can be. There was a question from the famous paper of Chatzidakis, van den Dries and Macintyre, where they characterize definable sets in finite fields, about whether or not the triangle-free random graph is interpretable in pseudofinite fields, and Hrushovski answered it by showing that the independence theorem is satisfied in pseudofinite fields, or more generally in bounded PAC fields; using the independence theorem you can show non-interpretability of the triangle-free random graph. But Hrushovski works in more generality. Recall that a field K is called bounded if K has finitely many degree-n extensions for every n. Working in a general context where he studies definably closed subsets of strongly minimal theories satisfying weak elimination of imaginaries and the definable multiplicity property, he shows that if the definably closed set has a certain boundedness property then the induced structure is supersimple.

The upshot is that you can think of a bounded perfect PAC field as a definably closed subset of its algebraic closure, which is strongly minimal and satisfies all the requisite properties; so in particular he shows that a bounded perfect PAC field is supersimple. The perfectness assumption was later dropped in the paper of Chatzidakis and Pillay on generic structures and simple theories: in an appendix they show that if you drop perfectness you of course will not get supersimple, but you still get simple, so a bounded PAC field is simple. Later Chatzidakis showed that this is a characterization: an unbounded PAC field has TP2, so a PAC field is simple if and only if it is bounded. We know exactly which PAC fields are simple, and it is a consequence of a Galois-theoretic condition.

There is an emerging theme here, already very clear in the work of Cherlin, van den Dries and Macintyre, which really opened up the model theory of PAC fields: any question about a PAC field typically breaks up into two questions. One is about classical algebraic information, about the algebraic closure, or the separable closure in the imperfect case; the other is a question about the Galois group. Normally, if you answer those two questions, you can amalgamate them and answer whatever question you might ask about the field. In particular this shows up in the characterization of types that Cherlin, van den Dries and Macintyre give in their monster paper. In order to give precise meaning to what it means to ask a model-theoretic question about the Galois group, they introduced the inverse system. The absolute Galois group of a field is a profinite group, and that is not really a good model-theoretic object: if you take an elementarily equivalent group, there is no reason it should carry an interesting topology or be profinite in any meaningful way. But you can think dually about the system of finite quotients and the maps between them and encode that in a first-order structure, so that if you take an elementarily equivalent structure you can take the inverse limit of the associated system and get another profinite group. This is a way of fitting profinite groups into a first-order framework.

Here is the definition. You start with a profinite group and you look at the set of open normal subgroups; the domain of the structure will be all of the cosets of open normal subgroups. We will have a somewhat strange multi-sorted presentation, in which the sorts are not disjoint: there is a sort for every positive integer, two binary relations, the comparison and a relation C, and a ternary relation P. I do not know why these letters were chosen, but this is the traditional approach. Then S(G) is an L_G-structure in the following way: the elements of the sort X_n are exactly the cosets of open normal subgroups of index at most n.
Notice that the sorts are growing, because you get more and more cosets. Then you have a comparison: gN is less than or equal to hM if the subgroups are contained one in the other, N inside M. The relation C codes the projection: if N is contained in M, then whenever I take a coset gN I can project it and get the coset gM, and the graph of that projection is coded by C. Then on each finite quotient I have multiplication, and the graph of multiplication is coded by the relation P: for three cosets of the same open normal subgroup, the relation holds if and only if the appropriate multiplication holds. Now this comparison is a pre-order, so I can take equivalence classes, and I will use a bracket notation for the class of an element. What the ordering tells you is how gN relates to its image under the projection from G/N to G/M; yes, that is the same thing as containment of the cosets, but I prefer to think about it in terms of the maps. If we were to do this in the normal presentation, you would have disjoint sorts, bijections telling you the identifications between them, and a different relation for each sort. It turns out that for certain groups this structure is even omega-stable, even though the pre-order is globally defined, because we are cheating here: we really think of it only sort by sort, and it is simply convenient to write a globally defined pre-order; we all know how multi-sorted logic works and can translate if we like. So here the sorts are not disjoint, but we sort of pretend that they are. It works, I promise. So that is the structure.
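Gathering the definition of the inverse system in one place, following the convention described above (which goes back to Cherlin, van den Dries and Macintyre); small details of the presentation vary between sources.

\[
\mathcal{L}_G:\ \text{sorts } (X_n)_{n\ge 1},\ \text{binary relations } \le \text{ and } C,\ \text{and a ternary relation } P.
\]
\[
\text{For } G \text{ profinite:}\quad X_n\big(S(G)\big) = \{\, gN : N \trianglelefteq G \text{ open},\ [G:N] \le n \,\};
\]
\[
gN \le hM \iff N \subseteq M; \qquad C(gN,\, hM) \iff N \subseteq M \ \text{and}\ hM = gM;
\]
\[
P(g_1N,\, g_2N,\, g_3N) \iff g_1 g_2 N = g_3 N \quad (\text{all three cosets of the same open normal } N).
\]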
Now I want to mention two notions that are important in talking about this structure: duals and subsystems. Suppose we are given fields L and K and an embedding phi of the separable closure of K into the separable closure of L such that L is a regular extension of the image phi(K). This looks confusing, but imagine for a moment that it is just an elementary embedding of K into L which you then extend to a map on the separable closures; that is the important case for us. This embedding from K^s into L^s induces a map from G(L) to G(K): an automorphism of L^s over L restricts to an automorphism of the image of K^s inside L^s, which is the same as an automorphism of K^s via the isomorphism, and regularity tells you that this restriction map is an epimorphism. Any quotient of G(K) by an open normal subgroup is the same thing as an epimorphism from G(K) onto a finite group, so by composition any finite quotient of G(K) gives a finite quotient of G(L). So I get an embedding from the inverse system S(G(K)) into S(G(L)), which is called the double dual of phi: each time you dualize, the direction of the arrow switches, so doing it twice gives an inclusion of inverse systems. The way to think about the map is exactly this composition: a finite quotient of G(K) is an epimorphism onto a finite group, composing with restriction gives an epimorphism from G(L), and that tells me how to send a coset of an open normal subgroup of G(K) to one of G(L). Does that make sense? Okay.

Now, subsystems. This would be the appropriate notion of substructure if the language were chosen differently. A subsystem S is downward directed and upward closed in the pre-order; in particular it is closed under the equivalence, since the ordering is really a less-than-or-equal, and then you make sure it is directed downward. And yes, L being regular over the image of phi is part of the setup; the example to keep in mind is simply that L is a regular extension of K, phi is the identity, the induced map on Galois groups is restriction, and the double dual is dual to restriction; you lose nothing by thinking about that case.

Now let me tell you about this theorem, which I am very enthusiastic about, and which is a monster to state, so I will draw a picture and then everything will be clear, I promise. Here is a field F, and here is S(G(F)), and we suppose that we have an instance of the hypotheses of the independence theorem in the field. What does that mean? You have a small elementary substructure E, you have A and B which are independent over E, and you have c_0 and c_1 which have the same type over E, with c_0 independent from A and c_1 independent from B. Those are the hypotheses of the independence theorem. By taking double duals you get the associated picture in the inverse system of the Galois group: S(G(c_0)), S(G(c_1)), and so on. And suppose that not only do you have the hypotheses of the independence theorem there, but you also have a solution: this gives you S_0, which amalgamates these two things.
All of this is assumptions: you start with the hypotheses of the independence theorem in the field F, you look at the associated instance of the independence theorem in the inverse system, and you assume that you have a solution there; then there is a solution in the field, so that S_0 can be thought of as S(G(C)) for the solution C. The way the theorem is formulated precisely is a bit complicated, but this is really the idea: an instance of the hypotheses of the independence theorem in the field gives rise to a similar instance in the inverse system, and if you can solve it in the inverse system then you can solve it in the field. So amalgamation problems in the field reduce completely to the question of amalgamation in the Galois group. Everything here, the way it is written, is stated for algebraically closed sets, in particular subfields, so that it makes sense to take Galois groups; that is why the precise statement is a bit difficult to write down, but do not read it, just look at the picture.

Here is a cool corollary. Zoé proved, in the paper where she proves this theorem, that if you have a PAC field whose inverse system has an NSOP_n theory, for n greater than or equal to three, then the field itself is NSOP_n; and in my thesis I did the requisite work to extend this to n equal to one and two. For n at least three there is a direct translation of SOP_n into an amalgamation problem, because you want to show that the type which is supposed to be inconsistent becomes consistent. For n equal to one you use Kim-independence in the Galois group, translate, and characterize Kim-independence in the field, so a PAC field is NSOP_1 if and only if its Galois group is NSOP_1; for n equal to two you use strongly indiscernible trees. So the approach is somewhat different in those two cases, but it is a direct corollary of the theorem I stated before; there is nothing deep about the extension.

This tells us that the question of whether we can build PAC fields with prescribed model-theoretic properties, at least for SOP_n, reduces to a question about the absolute Galois group. So here is the strategy: we build profinite groups that code graphs; because we can find enough graphs, we can find enough groups; and then, realizing those as the absolute Galois groups of PAC fields, we find PAC fields which are SOP_n and NSOP_{n+1}.
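Before turning to the construction, here is a rough paraphrase of the amalgamation theorem just described, suppressing the precise Galois-theoretic side conditions, which are the delicate part of Chatzidakis' statement; this is only meant to record the shape of the result, with \(\perp\) denoting independence.

\[
\textbf{Shape of the theorem.}\ \text{Let } F \text{ be a PAC field and } E, A, B, c_0, c_1 \text{ algebraically closed subsets with } E \subseteq A \cap B,
\]
\[
A \perp_E B, \qquad c_0 \equiv_E c_1, \qquad c_0 \perp_E A, \qquad c_1 \perp_E B.
\]
\[
\text{If the induced amalgamation problem in } S(G(F)) \text{ has a solution, then there is } c \text{ with } c \equiv_A c_0,\ c \equiv_B c_1,\ c \perp_E AB.
\]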
The idea is to make use of a certain coding construction which was developed by Cherlin, van den Dries and Macintyre in their long unpublished paper on PAC fields. Their question was just about decidability, so they used the coding construction for a rather coarse kind of problem: they wanted to show that you can code graphs, so that you get some undecidability in PAC fields, and then they were happy. I want to reanalyze that construction with a more fine-grained approach and understand which structural model-theoretic properties are preserved by it. The construction is theirs, but the analysis will be a bit more refined.

Here is the idea; this part just sets up notation. Fix, from now on, two distinct odd primes p and q. C_p will be my notation for Z/pZ thought of as a multiplicative group, which matters only for notation. D_p is the dihedral group with 2p elements, generated by alpha and beta subject to the relations alpha squared equals one, alpha beta alpha inverse equals beta inverse, and beta to the p equals one; there were several typos on the slide, but those are the relations, and you know what the dihedral group is. There is a copy of Z/pZ sitting inside, generated by beta, and the quotient by it gives an epimorphism onto the cyclic group of order two, thought of as plus or minus one; call that epimorphism tau.

Using tau you can define an action of the product D_p x D_p on C_q by the following rule: a pair (x, y) of elements of D_p x D_p sends a to a raised to the power tau(x) times tau(y). So if tau(x) equals tau(y) you do nothing, and if tau(x) is different from tau(y) you invert; that is the rule. Because I have an action, I can form the semidirect product, and W is going to be C_q semidirect D_p x D_p with the action just defined. So W fits into a short exact sequence, and, just to give it a name, lambda is the map from W to D_p x D_p which forgets the C_q part.
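Written out, the group-theoretic data just introduced; this is a direct transcription of what is said above.

\[
D_p = \langle \alpha,\beta \mid \alpha^2 = 1,\ \alpha\beta\alpha^{-1} = \beta^{-1},\ \beta^p = 1\rangle, \qquad \tau\colon D_p \twoheadrightarrow D_p/\langle\beta\rangle \cong \{\pm 1\};
\]
\[
(D_p\times D_p) \text{ acts on } C_q \cong \mathbb{Z}/q\mathbb{Z} \text{ (written multiplicatively) by } (x,y)\cdot a = a^{\tau(x)\tau(y)};
\]
\[
W := C_q \rtimes (D_p\times D_p), \qquad \lambda\colon W \twoheadrightarrow D_p\times D_p \text{ the projection forgetting the } C_q \text{ coordinate}, \qquad |W| = 4p^2 q.
\]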
Now we build the graph into a group. Start with a graph Gamma on a vertex set A with edge relation R, and form the product with a copy of D_p for every vertex a in A and a copy of W for every edge r in R. I will write an element of this product, which I called H, as (d-bar, w-bar), where d-bar is a vector in D_p to the A and w-bar is a vector in W to the R, and we have the associated coordinate projections pi_a and pi_r; this is just setting up notation. Then we define the group G_Gamma as follows: it is the set of elements (d-bar, w-bar) of the product such that, whenever there is an edge r of the graph joining vertices a and b, applying lambda to the coordinate w_r, which lives in a copy of W, forgetting the C_q part, gives exactly the pair of D_p coordinates attached to the two vertices, that is, (d_a, d_b). To recall the notation once more: tau goes from D_p to the two-element group plus or minus one, the pair (x, y) acts on C_q by raising to the power tau(x) tau(y), W is C_q semidirect D_p x D_p, and lambda is the projection from W to D_p x D_p forgetting the C_q coordinate. So that is the definition, and, as a trivial remark, as literally written the definition gives the empty set for the empty graph, which we will fix in a moment.

This set is a subgroup: the compatibility condition is expressed by the homomorphism lambda, so it is trivial to check that it is closed under multiplication. It is also profinite: the big product of finite groups is profinite, since by Tychonoff it is a compact, totally disconnected topological group, and our condition is closed, so G_Gamma is a closed subgroup of a profinite group, hence profinite.
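In symbols, the group attached to a graph, as described above; the ordering conventions for the endpoints of an edge are suppressed, as they are in the talk.

\[
\text{Given } \Gamma = (A, R):\qquad G_\Gamma := \Big\{\, (\bar d, \bar w) \in \prod_{a\in A} D_p \times \prod_{r\in R} W \ :\ \lambda(\bar w_r) = (\bar d_a, \bar d_b) \ \text{whenever } r \in R \text{ joins } a \text{ and } b \,\Big\},
\]
\[
\text{a closed subgroup of a product of finite groups, hence profinite (with the convention that } G_\Gamma := \prod_{a\in A} D_p \text{ when there are no edges).}
\]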
So this is a profinite group. A side comment: if the graph is empty, then in principle the definition gives you the empty set, which we do not want, so in that case you just define the group to be the product of the copies of D_p over the vertices, and this coheres.

That is how we get a group from a graph. Now I want to tell you about an interpretation of graphs in profinite groups. Given any profinite group G at all, which for a general G might give you nonsense, you can define a graph Gamma(G) = (A, R): the vertices are the open normal subgroups N such that the quotient G/N is isomorphic to D_p, and there is an edge between N_1 and N_2 if they are distinct and there is an open normal subgroup M such that G/M is isomorphic to W and M is contained in the intersection of N_1 and N_2. It is a fact that if you take a graph, build the group, and then interpret the graph again, you get the original graph back; this operation does not do anything. There is a nice picture of what this looks like in the inverse system: you have the sorts, say the sort 2p and the sort 4p^2 q; at sort 2p you see elements, really equivalence classes, corresponding to copies of D_p, and there is an edge between two of them if you can find a suitable element at sort 4p^2 q, a copy of W, sitting above both. So although I described this interpretation group-theoretically, it is really an interpretation in the inverse system.

This tells us how to build a group, but the real question is whether this group can possibly be the absolute Galois group of a PAC field; if not, it is not so helpful. To talk about that I have to tell you what a projective profinite group is. A profinite group G is called projective if it satisfies the following property: whenever I have an epimorphism from G to A and an epimorphism from B to A, where A and B are finite (they could also be profinite, but finite is what I meant), I can complete the diagram with a continuous homomorphism from G to B. This is the same thing as being a projective object in the category of profinite groups. It is known that you cannot always complete the diagram with an epimorphism, which would be the natural guess; notice that the epimorphism version would be dual to the extension property you are familiar with for, say, Fraissé classes. When you do have the stronger property, that you can complete the diagram with an epimorphism, that is called superprojectivity, or sometimes the Iwasawa property. Zoé showed in her thesis that the inverse system of a group with this stronger property is omega-stable, so these are very nice groups, and the PAC fields having them as absolute Galois group are NSOP_1; that is a consequence of the theorem I mentioned earlier, but it was known before, it is something Itay and I worked out. And to correct the slide: I used the wrong arrow; A and B are finite, and the completing arrow is a continuous homomorphism, so the text is correct although the diagram is wrong.
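The projectivity property just described, as a diagram condition; the superprojectivity variant is stated as it is used in the talk.

\[
G \text{ profinite is projective if for all finite } A, B \text{ and epimorphisms } \pi\colon G \twoheadrightarrow A,\ \rho\colon B \twoheadrightarrow A,
\]
\[
\text{there is a continuous homomorphism } h\colon G \to B \text{ with } \rho \circ h = \pi.
\]
\[
\text{Superprojectivity (the Iwasawa property): } h \text{ can always be chosen to be an epimorphism.}
\]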
We need to know the answer to the following question: what are the absolute Galois groups of PAC fields? There is a striking theorem of van den Dries and Lubotzky which shows that the class of absolute Galois groups of PAC fields is exactly the class of projective profinite groups. One direction was already known to Ax: if you have a PAC field then its Galois group is projective; that occurs already in the 1968 paper. What Lubotzky and van den Dries do is the other direction: given a projective profinite group, you can build a PAC field that has it as its absolute Galois group. So what we want now is a construction that takes a graph and builds not just a profinite group but a projective profinite group from which the graph can be recovered.

How do we build projective groups? For a profinite group H there is a certain subgroup called the Frattini subgroup, denoted Phi(H), the intersection of the maximal open proper subgroups of H; from the definition you can tell that it is a characteristic subgroup, so in particular normal. A Frattini cover is an epimorphism whose kernel is contained in the Frattini subgroup of the covering group; the clarification at the end was exactly this point, that the kernel is of course a subgroup of the covering group, and the statement with content is that it is also a subgroup of the Frattini subgroup. Every profinite group H has a universal Frattini cover, which is the same thing as the smallest projective group that surjects onto H; it is unique up to isomorphism. And here is the second key fact: if you take the graph, build the group, pass to the universal Frattini cover, and then interpret the graph using the interpretation I described before, you get the graph back. So replacing G_Gamma by its universal Frattini cover does not affect the graph we interpret in the inverse system. This is wonderful, because it means that we can take the universal Frattini cover, realize it as the absolute Galois group of a PAC field, and still get what we want.
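The Frattini-cover notions just used, written out; these are the standard definitions, for example as in Fried and Jarden.

\[
\Phi(H) := \bigcap \{\, M : M \text{ a maximal open proper subgroup of } H \,\};
\]
\[
\varphi\colon H \twoheadrightarrow G \text{ is a Frattini cover if } \ker\varphi \subseteq \Phi(H);
\]
\[
\widetilde{G} \twoheadrightarrow G,\ \text{the universal Frattini cover, is the projective Frattini cover of } G,\ \text{equivalently the smallest projective group mapping onto } G,\ \text{unique up to isomorphism.}
\]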
Now I want to talk about how the analysis of the inverse systems of the universal Frattini covers of the groups G_Gamma goes; we want to understand certain important quotients of this group. Notice, for example, that inside D_p to the n there is a copy of (Z/pZ)^n, generated by the betas in each copy, and quotienting by it gives you (Z/2Z)^n; on the slide I then said some nonsense about the diagonal subgroup and what quotient I wanted, so let us ignore that for the moment.

Here is one thing that is certainly true; let me rewrite the fact on the board. If you have two copies of D_p sitting at level 2p and you look at the sort 4p^2 q, then there is at most one W sitting on top of them: if there is one at all, it is unique. This follows from the fact that, in the graph interpretation, the copies of D_p you see are always associated with vertices: every D_p is really the kernel of a coordinate projection to a D_p, and every W you see comes from the kernel of a projection pi_r. So all the W's and D_p's appearing in the inverse system come from this configuration, and from the definition of the group there is a unique W above a given pair of D_p's.

Now I want to define a notion called the graph closure. The graph closure A_gr of a set A is the smallest subsystem containing A such that: whenever you have an element whose associated finite quotient is isomorphic to Z/2Z, you include all the vertices you see above it, that is, all the classes isomorphic to D_p which surject onto that Z/2Z non-trivially, of which there are only finitely many, so they are algebraic over the Z/2Z; and additionally, whenever two classes isomorphic to D_p are in the closure and there is an edge between them in the big system, meaning a W above both, you include that as well. The idea is that I take the smallest substructure that sees every piece of the graph: the whole point of the definition is that the interpretation of the graph applied to this substructure is an induced subgraph of the Gamma we started with. So forget the nonsense written before; this is just a definition, a way of getting an induced subgraph from a certain set. A question was raised about what it means for an element to be isomorphic to a finite group: in the subsystem you have the partial order and the associated equivalence relation, and because the relation P codes the graph of multiplication, the equivalence class of a particular element is a finite group, so you can ask which group it is isomorphic to.
So when I say a class is isomorphic to Z/2Z, I mean the isomorphism type of the finite quotient associated to that class, which is given by P in the language of the inverse system; then I look at which D_p's are above it, and if there is an edge I make sure to put the edge in.

Here is the theorem which tells us how to analyze types in the inverse system. Suppose I take two finite tuples a and b from the inverse system and I look at the graph-closed substructures they generate; these are subsystems which are graph closed, so the interpretation of the graph in each is an induced subgraph of Gamma. Now look at the induced map from Gamma_a to Gamma_b: I first take these substructures, I form the interpreted graphs, and I look at the induced map between those graphs. The question is whether that map is partial elementary; if it is partial elementary in the theory of the graph, then the map that sends a to b is partial elementary in the theory of the inverse system. This tells you exactly how to analyze types in the inverse system: all you have to do is understand types in the graph and the quantifier-free type in the inverse system, because the quantifier-free type is encoded by the isomorphism type of the substructure generated by the tuple, rather as for Fraissé limits, where two tuples have the same type if there is an isomorphism of the structures they generate. Strictly speaking, "quantifier-free" is not quite right, because the language is not the right one, since substructures do not correspond to subsystems; but you can formulate an appropriate language in which it really is quantifier free, and in any case the isomorphism type of the generated substructure is what is secretly coding the quantifier-free type.

Using this analysis of types one proves that, for all positive integers n, and here I was careful to write it for all n, the theory of the inverse system of the universal Frattini cover of the group built from the graph is NSOP_n if and only if the theory of the graph is. It is not really the theorem, but rather the proof of the theorem, that tells you how to analyze types; you can then code the relevant amalgamation problems in terms of the graph and solve them there.
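The transfer statement just given, in symbols, with \(\widetilde{G}_\Gamma\) denoting the universal Frattini cover of \(G_\Gamma\).

\[
\textbf{Theorem (as stated in the talk).}\quad \text{For every } n \ge 1:\qquad \mathrm{Th}\big(S(\widetilde{G}_\Gamma)\big) \text{ is } \mathrm{NSOP}_n \iff \mathrm{Th}(\Gamma) \text{ is } \mathrm{NSOP}_n.
\]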
So now let us put it all together, right on time. For n greater than or equal to three there are known examples of graphs, loop-free graphs of a certain kind, which are SOP_n and NSOP_{n+1}, so you just build the right graph. There is a general fact, which is easy to show although I do not know if it is written anywhere; I wrote it down, but I do not know whether it had been written before: if SOP_1 is not equal to SOP_2, or SOP_2 is not equal to SOP_3, then this will be witnessed by a graph. To prove it you just open your copy of Su Gao's Invariant Descriptive Set Theory, look at the proof that graph isomorphism is Borel complete, and code arbitrary structures into graphs; it is simple to check that SOP_n, for example, is preserved under this operation, because essentially definability is not changed. There is a small check, but it is not a deep fact.

Then you build G_Gamma from that graph and pass to the universal Frattini cover, and we know that operation preserves the graph. Then you use van den Dries and Lubotzky, the inverse Galois problem for PAC fields, to realize the universal Frattini cover of G_Gamma as the absolute Galois group of a PAC field F. Then Gamma is interpretable in the field. This is a somewhat delicate point, in the sense that S(G(F)) itself is not actually interpretable in the field, because Galois groups are really only interpreted up to conjugacy; S(G(F)) is interpretable in the pair consisting of the field together with its algebraic closure, or the field together with its separable closure, but not necessarily in the field itself. However, the graph is interpretable, because you can write down polynomials that say "I am separable and my Galois group is D_p" and "I am separable and my Galois group is W", and using those you can code the graph even though you cannot get your hands on the Galois group itself. So Gamma is interpretable in F, and therefore F is SOP_n. And F is also NSOP_{n+1}, by the theorem of Zoé which shows that if the inverse system is NSOP_{n+1} then so is the field.

To be clear about which step is whose: there are two steps. You have a graph and you get a group, and then you have the group and you get a field, and you want NSOP_n to be preserved at each step of the way. The graph-to-group step, showing that an NSOP_n graph gives an NSOP_n group, is mine; the group-to-field step, that an NSOP_n inverse system gives an NSOP_n field, is Zoé's, and it is a corollary of her amalgamation theorem, because you code SOP_n as an amalgamation problem; that is used for all three cases, but for n equal to one and two you have to use Kim-independence or strongly indiscernible trees, so the translation is not as direct, although you still translate everything into an amalgamation problem and cite that theorem. Okay, Enrique is standing up, so I should stop. Thank you.
A field K is called pseudo-algebraically closed (PAC) if every absolutely irreducible variety defined over K has a K-rational point. These fields were introduced by Ax in his characterization of pseudo-finite fields and have since become an important object of model-theoretic study. A remarkable theorem of Chatzidakis proves that, in a precise sense, independent amalgamation in a PAC field is controlled by independent amalgamation in the absolute Galois group. We will describe how this theorem and a graph-coding construction of Cherlin, van den Dries, and Macintyre may be combined to construct PAC fields with prescribed model-theoretic properties.
10.5446/59336 (DOI)
I have attended all four neostability meetings and I remember distinctly speaking in three, and each time I changed the title of my talk, so I am just keeping up the tradition. I am going to talk about joint work with Kobi and Pantelis. It has to do with Zilber's trichotomy, the restricted trichotomy conjecture, and I think it was in 2006 that Kobi suggested the following conjecture: a strongly minimal structure interpretable in, say, an o-minimal field is either locally modular or interprets an algebraically closed field. I have been working on variants of this conjecture for many years, and there are a couple of comments that should be made almost immediately.

The first comment is that, by results of Kobi and Sergei, a stable field interpretable in an o-minimal structure is pure. So at least this part of Zilber's conjecture is out of the way if you restrict yourself to structures interpretable in o-minimal fields, or o-minimal structures; and of course if the structure is locally modular it will not interpret a field.

The second remark, which is more relevant to the approach to the problem, is that by results of Kobi and Charlie, if an algebraically closed field is interpretable in an o-minimal structure then it is two-dimensional. This means that the conjecture really splits in two: there is the two-dimensional case, where you expect a positive result, where you actually expect a non-locally-modular strongly minimal structure to interpret a field; and the other dimensions, where you expect that no non-locally-modular strongly minimal structure is interpretable at all, so it is in a way a vacuous instance of the conjecture, a non-existence result.

There is a theorem of Kobi, myself and Alf which says that the conjecture is true in the one-dimensional case: if you have a strongly minimal structure interpretable in a one-dimensional o-minimal structure, it is locally modular. For dimension d greater than two this is open even for ACF_0. Note that any structure interpretable in ACF_0 is interpretable in the reals, so the analogous conjecture for strongly minimal structures interpretable in ACF_0 is a special case of this conjecture; the present conjecture is a generalization of the one for ACF_0, and the case of dimension greater than two is totally open even there. The dimension here is o-minimal dimension on one side and, of course, Morley rank, Zariski dimension if you like, on the other. I have been thinking about this for a long time, and there are not even good points to start from; there are some ideas, but they are not developed.
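To have it on record, the conjecture under discussion, as stated in the talk and attributed there to Kobi.

\[
\textbf{Conjecture (restricted trichotomy).}\ \text{Let } \mathcal{D} \text{ be a strongly minimal structure interpretable in an o-minimal field}
\]
\[
\text{(an o-minimal expansion of a real closed field). Then either } \mathcal{D} \text{ is locally modular, or } \mathcal{D} \text{ interprets an algebraically closed field.}
\]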
Now, how do you go about proving instances of such conjectures? We restrict to the two-dimensional case. The way you usually prove such theorems is: you first prove that there is a group interpretable in the structure, and then, using the group structure, you prove that there is a field interpretable. But in practice it quite often goes the other way around, because you have no idea how to produce the group, since you do not yet understand the situation well enough; so you first study what goes on when you do have a group, you produce a field, and from that you hope to gain enough insight into the situation to allow you to construct a group. That is what we do.

So the theorem we prove is the following. Let (G, +) be a two-dimensional group definable in an o-minimal expansion N of a real closed field. Let D, consisting of the group (G, +) together with some additional structure, be strongly minimal and not locally modular, with everything interpretable in N. Then there exist a D-interpretable algebraically closed field K and an algebraic group H over K such that D is isomorphic to H with all its K-induced structure. So what we are really saying is that G is an algebraic group, with all the induced structure coming from the field K; here "algebraic group" means the pure algebraic group, the algebraic group with all its algebraic, or geometric, structure. In this sense it is the best possible result one could hope for. I have to distinguish notationally between the group G and the structure D in which it lives. The isomorphism is definable over D, indeed it must be, so those structures are really bi-interpretable. As for parameters: K is produced as a pure field, but H may be expanded by constants; we never traced the parameters, we were very liberal with them in the proof, and it is a legitimate question which parameters are needed to interpret everything, but it was difficult enough as it was. The conclusion is very strong, because it says precisely what the structure on G is.

Note that for arbitrary strongly minimal sets we cannot hope for such a good result, because of Boris's examples of non-algebraic Zariski geometries, which are interpretable in such structures. So you cannot hope that, if you do not assume the group structure to begin with, what you end up with is purely algebraic; this phenomenon happens only because we have the group. I want to dedicate the rest of the talk to describing parts of the proof, and I will focus on the first part really; there are quite a few of them.
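The main theorem just stated, written out; the bookkeeping of parameters is left vague in the talk and is also left vague here.

\[
\textbf{Theorem (as stated in the talk).}\ \text{Let } \mathcal{N} \text{ be an o-minimal expansion of a real closed field, } (G,+) \text{ a two-dimensional group definable in } \mathcal{N},
\]
\[
\text{and } \mathcal{D} = (G, +, \dots) \text{ a strongly minimal, non-locally-modular structure interpretable in } \mathcal{N}. \text{ Then there are a } \mathcal{D}\text{-interpretable}
\]
\[
\text{algebraically closed field } K \text{ and an algebraic group } H \text{ over } K \text{ such that } \mathcal{D} \cong H \text{ equipped with all of its } K\text{-induced structure.}
\]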
So what do we have, and how do you approach such a problem? Ultimately, if you go to the heart of essentially every proof of an instance of Zilber's trichotomy, it boils down to defining tangency in your structure, up to algebraicity: defining an equivalence relation, or at least an algebraic relation, which captures tangency of plane curves. That is what we are going to do. We work in G squared, and when I say plane curves I mean the definable subsets of G squared of rank one: plane curves for me are just the Morley rank one, D-definable subsets of G squared.

What tools do we have? The idea, as always, going back to Zilber and Rabinovich, is the following. If your curves were algebraic and you had two tangent curves (of course I am drawing a real picture; the complex picture does not look like that), then by Rouché's theorem, or the argument principle, or something like that from complex analysis, the point of tangency is a double intersection point. So if you perturb your curve a little bit, then near the tangency point you gain: in a small neighbourhood you get two intersection points, because the tangency was a double intersection point; and near the other intersection points you do not lose anything, essentially again by Rouché's theorem or something like it. That is our aim: to show that if you have two curves which are tangent, then counting the actual geometric intersection points, without multiplicity, the intersection points you actually see in the structure, gives a smaller number. That is essentially the only tool we have at this level of generality.

Of course our curves are not assumed to be analytic to begin with, so we do not have Rouché's theorem, we do not have the argument principle, we do not have anything like that. So the problem really breaks into two. First, we have to show that if we have a tangency point, regardless of what that means, then perturbing our curves gains intersection points near it. Second, we have to show that, again after perturbing one of the curves, we do not lose intersection points in small neighbourhoods of the other intersection points. So we somehow have to show that, at a tangency, perturbation increases the number of intersection points next to the tangency point, without losing intersection points elsewhere.
So if you, if you replace, if you don't want to work with those analytic theorems and you want to work with a more topological theorem, then what goes on here is really that those functions, if think of those as, as analytic functions, ignore the fact that these are curves really, but locally you can think of them as analytic functions, then analytic functions are open and the difference between two analytic functions is also an analytic function, so it is also open. So if you have a point of intersection here and you perturb it a little bit, then openness assures that you don't lose intersection points. So at least solving this problem amounts to showing essentially that locally our curves are graphs of open functions. If we can show that locally our function, sorry, our curves are graph of open functions, then at least the first part, the first problem of the problems of keeping track of intersection, of intersection points after perturbation does not, is more or less solved. Okay, then there's the problem which is, I have the paper roughly of showing that if you have a tangency, then you actually gain intersection points, but that's a different problem. Okay, so I want to talk about how to approach the problem of showing that those are really open maps, that what we have are open maps. So, so first goal show that generic curves, that curves, that plane curves are locally graphs of continuous open maps. Well, so if we were in a compact setting, then to show that, then to show this, it would be enough to show that plane curves are closed. Right? So that's the first step. So the first step, not necessarily just that it has O minimal dimension too. Oh, as a set, okay. As a set, as a set, the universe of these two-dimensional, in the sense of the O of the underlying O. In G square, yeah. Yeah, so we assume throughout that you can, you can always, this is a group interpretable in the O minimal, definable in the O minimal structure. So you can embed it as a closed subgroup of n to the k for some k. And the topology is the affine topology. No, so plane curves are molly rank one, molly rank one, D definable subsets of G square. So plane curves, molly rank one, D definable subsets of G square. So, any questions? So this continuity openness that's in the O minimal topology? Yeah. Yeah. So the first theorem is if S is a plane curve, then S, the frontier of S, which is the closure of S in the O minimal topology, those points in the closure which are not in S is finite. And moreover, in, well, this is the canonical base of S. So this is contained in an algebraic, if you want the, yeah, so those points are algebraic in the sense of D over S. S knows about the, its frontier points. Yeah. Of the, canonical parameter. Yeah. Yeah. Sorry. A code for S. Yeah, if you want. Okay. Now here I want to say that there's an issue. Here already this is really not obvious because consider the following example. Take your group to be the additive group, but now what you do is take C and split it into three stripes and take any definable bijection in the O minimal structure which just switches those two bits and push all the structure, push forward all the structure you have in D through this bijection. Now this will just not happen. So this will be continuous with respect to the O minimal topology. Right? Because we have here two large sets where things are obviously not continuous with respect to the O minimal topology. So we have to be working with the group topology. 
So we've embedded G into an affine space where the topology, the induced topology is the group topology but if we don't assume this then we have to assume that we're working with the group topology and here there's a, here's a question. You want to prove this theorem, you want to prove this conjecture without assuming that there is a group. So you have to find a topology on G, on your strongly minimal set, call it whatever you want, that will make this true. How do we find such a topology? So my guess is, and it's not completely outrageous, my guess is that if you have a strongly minimal structure that is not locally modular, interpretable in an O minimal field, then there exists a unique topology on, definable topology on that strongly minimal structure which makes this statement true. Coming from the O minimal structure and you have a definable basis, I mean if you want to manifold topology. And I think that in general this is an interesting question, I mean if you have a strongly minimal set in a general topological setting and you want to exploit this topological setting, you cannot assume that you're just working with the affine topology, you somehow have to find a manifold topology on that structure to make things work and how to find it is I think an interesting problem. So how to prove this, so idea, so suppose that you have your curve and this is B, your frontier point, then what you do is, you take a generic curve, you fix some large family of plane curves, non-local modularity provides you with a large definable family of plane curve, you take a curve, a generic curve passing through that point. Now assume that this curves comes from a family of more than degree one, so that it has a unique generic type. So there's a unique generic intersection number of curves from that family with our target curve S. Right, now the idea is that if you move your curve a little bit, then if all is well, if here the intersection is transverse of those two curves, if the intersection is transversely and you move a little bit your curve here, then here you will not lose intersection points because of the assumption on the intersection that it is transverse and here you will gain a new intersection point. So the number of intersection points of a generic curve passing through that point will not be the number of intersection points of a generic curve of the family with this target curve. Okay, is that clear? This is a frontier point, right, so it's not, so this is not an intersection point of this curve with S. Now because it's in the frontier, you can perturb it as little as you want and it will gain a new intersection point in S. This intersection point did not occur in the original curve we started with. Now if you can assure that the curve you started with intersects S transversely everywhere else, then the perturbation will not lose intersection points because only tangency can allow you to lose intersection points and therefore the generic curve passing through here will have more intersection points with S than a generic curve passing through here. Now we can assure that if B is generic in G square, we don't care about its genericity in S. If B is generic in G square, then this always happens that all the intersection points are generic, are transverse. But the problem is who assures you that the B is generic in G square? So the idea is because we have a group we can move around our curve and put B in such a situation where it is generic in G square and then the intersection will behave well. 
Now this is essentially the only point where we are using the group topology in this argument. So it's a very, very delicate point and I find it very, very curious that there's an extremely delicate point where if you use the group then everything works, if you don't use the group then obviously this is wrong. Not the group topology, the group operation, the group topology you use? Well you need to use the group topology in order to know that the translation is continuous, a homomorphism and therefore it maps from tear points to frontier points and stuff like that. So this is a curious point, this is something to look deeper into and understand what's going on because if you want to prove the theorem without assuming a group then you will have to find something like that and this is where things could go wrong. By the way in this goal are you showing that algebraic closure is definable closure? Why would it be? It's locally a graph, locally in the sense, what do you mean locally? Locally in the sense of the ambient or minimal topology. Oh, so it's not implying that. No, no, no, not at all. Not even. What do you mean? Just look topology graph around the point. Yeah. It's, we prove something stronger but I don't want to go into the details. So as I said if we were in a compact situation then this would imply that also the openness result but if we're not in a compact situation then things become delicate because what the situation, think of the following example. So consider G plus, sorry, C plus and F where F of x, y is x and 0 if y is equal to 0 and x, y, 1 over y if y is not equal to 0. Then this, the graph of this function, the graph of this function is closed in C square but it's very non-continuous and I'd be very glad if anyone could give me an easy proof showing that this is not strongly minimal. I mean of course by our theorem we know it's not strongly minimal but proving that this exact specific example is not strongly minimal is not so trivial I think and what happens here is that we have infinitely many poles in this function and the next theorem that we prove is that this set S cannot have infinitely many poles. It cannot have infinitely many points where if you tend to a point in G then the curve goes to something that has no limit in it. This follows immediately, I mean the fact that this is the only obstacle for S being open follows immediately from this theorem. So the next theorem is S has finitely many poles so since we assume that our group is embedded in NK with the ambient topology then this just means that there's a neighborhood of the point that S is unbounded. The pole is just a point where and this is an extremely hard argument, sophisticated geometric argument and I think that when we were working on it we were definitely thinking that there must be something simpler but even for that example it is not that easy to find an argument showing why this is not strongly minimal. So maybe if someone can analyze examples like that and understand them we can come up with a nicer argument showing that this. So a pole, so A, so definition assuming G subset N to the K with the ambient topology A in G is a pole of S if for all open U containing A U times G intersected with S is unbounded. So it's really what you would think a pole should be. So now it's an immediate corollary of this argument that if you only have finitely many poles then S is in fact an open relation which means that it maps open sets to open sets as in the same sense as F is an open function. 
So this is really the essence of the problem of proving that S is open and this gives us if you want a starting point for what comes next because as an immediate corollary of that we can get for example that if you have a function that is smooth at the identity then it's determinant Jacobian has constant sign in a small neighborhood of the identity and this is what this is the engine that allows us to understand how the differential, the Jacobians of these local functions if you want look like and this is really the machine that allows us to conclude how things really look. But this is a whole different part of the paper so I guess it's a good place to stop.
We prove that if D=(G,+,\dots) is a strongly minimal non-locally modular group interpretable in an o-minimal expansion of a field and dim(G)=2 then D interprets an algebraically closed field K and D (as a structure) an algebraic group over K with all the induced K-structure. I will discuss some key aspects of the proof that may be of interest on their own right. Joint work with Y. Peterzile and P. Eleftheriou.
10.5446/59338 (DOI)
So first of all, I'd like to thank the organizer for inviting me and thank everyone for coming to my talk, even though it would be a really nice time to take a nap. All right, so before I start, I guess I should mention that this is what I'm doing is really just stability. It's not neo-stability, but hopefully it can generalize to neo-stability. All right, so group rates and relative internalities. So I'm going to define what relative internalities means. I'm going to define what group rates are. So let's start. All right, so that's the setup. So we fix t, a complete stable theory, eliminating imaginaries, and I'm going to work over a cl of the empty sets. And then we work in a monster model of t. And we fix a family of partial types, curly p, which are all over the empty set. And q will always be a complete type of the empty set. I fix all of this for the rest of the talk. So a tuple c in m is said to be a realization of p if it is a realization of one of the types in p. So this is a bit maybe non-standard, but that's what I mean by that. And then this is standard notation for the set of realization of curly p and q, just p of m and q of m. So that's what we'll do today. All right, so this is for your well-known definition of internality. So stationarity type p in S of a is said to be p internal. If there are some set of parameters b containing a, realization of p independent of b over a, and at all c of realization of p such that a is in dcl of c and b. And it's said to be almost p internal if we have the algebraic closure instead. So this is really well-known definition of the new material p. And so in the stable context, actually these parameters can be chosen to be a tuple a of realization of independent realizations of q. And then it's called a fundamental system of solutions of q. So I call this, I mean, I will not need this notion of fundamental system of solution to state my theorems today, but this is actually very important for the proofs. So I thought I might just say it. All right, so the classic example, of course, is just the two-sword structure, like, say, the complex numbers and then a finite dimensional c vector space. Then any type in there is just c internal just by picking a basis. And in that case, the basis would be this fundamental system of solutions. OK, so then, sure. So this a is equal to a relation of q or p? It's q. It's q. Wait, no, p, p, yeah. OK, p. Right, but the type I start with is p, sorry. Yeah, yeah, yeah. It's not the q I was fixing at the beginning of the talk. Yeah. OK. Right, so we come to the group, like, odd q over p of fermentation of y-ization of q, which are induced by automorphisms of the monster model that fix y-ization of curly p point-wise. And then we have the following. And again, this is very classical. I don't actually know who to attribute this to. But there is a zero-type definable group g and a zero-defineable group action of g on q of m. And all of that is isomorphic to the natural group action of r of q of m on q of m. Is shooting in analogy on? Yes. So q is p. q is p. q is p, no. Yeah, I should have said that, of course. q is p, no. OK, so that's what we'll generalize in this talk. Right, and what do I mean? OK, I started doing this basically. This theorem says that there's a natural algebraic structure type definable that's associated to any internal type. And the question is what happens to analyzable types, right, and in particular, two analyzable types. 
So that's kind of the motivation here, even though it actually ends up being used for things. OK, so group is, OK, so even though it I define this, I'm going to define it again. So group is a category g in which every morphism is invertible. OK, cool. That's the definition. So what does this actually mean? More g of morphism and set of g of objects, right, because we're working with a category. And then we have the domain map and the co-domain map, which again come from the category and the partial composition of the category, which is associative, as neutral, and inverses because every morphism is invertible. All right, and basically what I'm going to do is I'm going to consider type definable group weights. So the definition of these is what you think it is. So two type definable sets for the morphisms and the objects, and three type definable functions that satisfies all these actions. I guess a remark that I should mention is that actually you can prove that this function can be actually proven to be relatively internal, relatively definable, sorry. Right. All right, so that's the definition of relative internality that was in my title. All right, so we start with pi, a zero definable function whose domain contains the set of function of q of a. And then q is said to be relatively pin of a over pi of a is stationary and p internal. And we'll denote it by q pi of a. OK, so that's nice. That's the definition. So when does that happen? So for example, if some two analyzable types satisfy this, it's just because of the stationarity condition, which I think so if you have a two analyzable type you get, say q to analyze, then I get this map, which is just given by analyzability. Thank you. Should I close the window? Or can everyone see? Right, but yeah, so this doesn't need to be stationary necessarily, right? So that's why I say some two analyzable types satisfy this. Not all. But still, it's something. So the image applies just the monster. Yeah, yeah, yeah. OK. Actually, can you close the window? Yeah, I will. Yeah, yeah. Are the values of pi finite? Yes, yes. All right, and another way this can appear, which is maybe actually more natural, is if I start with an internal type, type of a over d, and then I just consider this type q of type of a over ad over the empty set. So just put the d in front. And the projection pi on the d coordinate, then that gives me a relatively internal type. So that's one way to think about this is, right, we study how the internality and the binding group will vary when we make the parameters vary. All right, and just to note that the image of q of m, it's a complete type. And we'll denote it pi of q. And for now, at least, we do not assume that it is internal. It would be internal if we work with a two-analysable type. All right, so now we're going to define a group of it, and then I'm going to say that this is type definable. All right, so we define a group of it g, which depends on q, pi, and p. And we'd find it as follow. That's kind of the only thing to do, really. So the objects are realization of pi of q, and the morphisms from pi of a to pi of b are bijection from the fibers of pi that are induced by the morphism of m fixing p point wise. So I think maybe it would be nice to have a drawing of this. It makes things easier. So this would be pi of q. And then I have all these fibers here. And I have my map pi. Q pi of a was type of a over pi of a? Yes, yes, yes, yes. All right, and so basically, right, so this would be q pi of b. 
q pi of b and q pi of a would be over here. And so basically my morphisms are just, right. So for example, this would be a morphism from pi of b to pi of b, just moving the fibers. But then you also have morphisms between the fibers. All right? So that's how I define my group rate. All right, and of course, so this group rate acts on q of m. So of course, it's a partial action, right? So something that is in morph pi of q to pi of a would act on these fibers by moving them. So it's a partial action or group rate action. All right, and so the theorem is if q is relatively pi internal, if it's a p internal by pi, then there is a zero type definable group rate acting zero definably on q of m. And this action is isomorphic to the action of g q pi over p on q of m. So to this action that I just described here. Right, so this generalizes the theorem I stated earlier about the binding groups. All right, so yeah, what does this says? So it says, first of all, that the binding groups of the fibers are uniformly type definable, which I don't think is obvious from the setup. Maybe it is actually, I'm not sure. And also that these sets of morphisms between fibers are type definable. And finally, that this all comes together to form a group rate. All right. So maybe I should say a word about the proof. So really, the proof is very similar to the proof of this in a Lansberg geometric stability theory. And the basic idea is, so if I start with a fiber q of pi of a, then remember I said that actually there's a fiber, a tuple of independent realizations of q of a, right? Because that witnesses the internality. And so in particular, what this means is that q pi of a of m, so the set of realization, this is in this idea of a bar p of m. All right. And now the whole idea of the proof is just to encode some sigma from pi of a to pi of b by the tuple a bar sigma of a bar. All right. And because this is in DCL of a bar p, this actually tells you what the morphism is. All right. And there's still work to be done. You have to show that this is obviously this correspondence is not injective at all. So you have to make this injective using an equivalence relation. And the extra work in this case compared to the proof in Anand's book is just you have to make sure that everything is uniform in pi of a. But apart from that, it's more or less the same proof. All right. All right. So actually, there's no reason why we can't do this for. So q of n, all right. So by q of n, I just mean the type of n independent polarization of q. I'm allowed to do that because remember I work over ICL of the empty set. So there's no problem there. All right. And so we do the same thing for all these q of qn. And we obtain, in the same way, type definable group weights. And so what they do, they just encode the action on n independent fibers. And actually, these group weights are linked by morphisms, i.e. functors between them that are definable. Right. So what happens really is that, say we have gn plus 1, 1 group weight for n plus 1 realization and then gn. And what we get is we get n functors definable, which I will call delta 1 to delta n plus 1. And what these do is they just restrict the action on the fibers. Nothing complicated happening here. You just restrict the morphism. All right. But what's interesting, actually, is that this idea will just give us a criteria for when the type is internal. And the way it works is, so for each a realizing qn and each b realizing qm, independent. So we have two type definable binding groups. 
These are actual binding groups, ga and gab. And we have a restriction map between them that's definable. All right. And what's going to be our criteria for internality is that we say that this sequence of group weight gn collapses if there is an a satisfying qn such that for any b satisfying qm that's independent of a, this map is an isomorphism. All right. So basically, this double a will encode all the data of the action. And that's what's going to give internality. And the second definition that's very similar is we say it almost collapses if there is such an a such that these maps all have finite kernel. And I guess you could guess that this is going to give us almost internality. All right. So here are the theorems. So suppose q is relatively p internal via p. And the group is gn. Then the type q is p internal if and only if pi of q is internal and the sequence of the gn collapses. So this is actually not too hard to prove once you have everything laid out. You just pick this a here that witnesses the collapse. And then you prove that you're a lot to make a a bit bigger. And then you prove that any morphism fixing p and a just fixes the realization of q. And you're done. There's not much to do, actually. And the second theorem for almost internality is you just put almost everywhere. So the type q is almost p internal if and only if pi of q is almost p internal. And the sequence gn almost collapses. So one fun corollary of this, but I think maybe people knew this before. I'm not sure. So these groups gn with the restriction map. So they form a projective system. So in particular, you can take the limit of these. And if q is p internal, then we actually get a definable short exact sequence of type definable groups. So one and then the projective limit of the gAs and then automorphism of q over p and then automorphism of pi of q over p to one. Right. And moreover, this automorphism group of q over p is p internal in that case. So this sequence is definable isomorphic to a sequence that's actually in p, q. All right. And I guess the reason why I find this interesting is, well, it gives you control of over what p internal type might be, in a way. So basically, if you know how exact sequences work in peq, then you can deduce information on how analyzable type can become internal in the whole structure. Right. So an example of this would be, for example, if p is ACF0, then you know that exact sequences of a billion groups always split. And then you can deduce information about analyzable type in the bigger structure. Right. I think this has been done by Zoe before, I think, in one of your papers. I'm pretty sure. All right. So maybe there's some application there. I haven't looked into it too much. All right. So another notion that's interesting, actually, and that is actually more natural to consider than internality in that context, I think, is to preserve internality. So this is definition due to Raimouza. So we say that p over some set of parameters, over some set of parameters, actually. So d is just a tupper in that case. So we say it preserves internality to p. If whenever a realization p, a, and c are such that type of d over c is almost p internal, then the type of a over c is also almost p internal. Right. So what this says is we start with a, and then we have d, the parameters, and then we have c. And then we say that if this is internal, then the longer thing is internal, too. Right. 
So this notion was introduced, motivated by some phenomenon in compact complex manifolds that, to be honest, that don't understand quite well. But this was basically the idea was to mimic something that happens in compact complex manifolds, something that's called being moistened, if you know what that is. The nice thing about this notion of preserving internality is that basically we have the following sequence of implication that are easy to see that p algebraic implies preserves p internality. And this preserves, and this implies p internality. What p algebraic means? Right. So this just means that if I look at type of a over d, then let's say p, a type over d, then any a for any a satisfying p, a is in ACL of d, p of m. So it's like almost internality, but we actually did not need the extra parameters. That's what it means. All right, so these implications are actually quite easy to see. But I guess the interesting part is that we have something that's stronger than p algebraic, but weaker than p internal. All right, so the question is can we use this group of stuff to know when something preserves p internality? So the answer is yes. And so recall what I did at the beginning is if I take a type p in S of d and this type a over d is p internal and stationary, then the type a d over the empty set is relatively p internal via the projection on the decoordinate. In particular, by all the work I did, we get a group of it and then the theorem is as follows. So suppose t is super stable of finite q rank and p is equal to type a over d's p internal and stationary, and q is the type of a d over the empty set. Then if the sequence of group of it associated to q almost collapses, type of a over d preserves internality to p. So remark about this is that I really use finite q rank in the proof, but to be honest, I think if I knew a bit more technical ranks and all of that, I probably should be able to just extend it to just stable. I just haven't done it yet, but I think this should work. To my knowledge, this is the first criteria for preserving internality that doesn't actually assume the cbp on the theory. If you know what cbp means. All right, so I'm going to end the talk with a bunch of questions. All right, so the first question, of course, is this last theorem about preserving internality. It would be nice to have an if and only if condition. So I just realized that actually it is not likely false, the converse. It is just definitely false. It just doesn't work. But actually, this is sort of interesting because of the notion of Moisesen in compact complex manifold. It implies preserving internality, but there's also, sorry. This is a notion of Moisesen on map. Yes. Yes, OK. Moisesen on map. Yes. Moises on map. Sure. So I guess a relatively internal type could be Moisesen or not, I guess. We'll talk about it. But OK, the question is maybe this notion of collapsing is closer to the complex complex complex complex manifold notion of Moisesen than the notion of preserving internality. So that would be something interesting to look at. So another thing is, so this group that I defined, it just does not live in Peq. And it would be interesting for many reasons to find something witnessing this in Peq. So there's some obstruction to this. OK, so this is something that I did with Omar Leon Sonshies. So the idea is that in the internal context, if you start with an internal type Q, you get the automorphism of Q over P. All right? So the binding group, the way it is constructed in that torque. 
But then if you look at P, basically using some parameters, so this is using some parameters B and this is using some parameters C. So you get all these groups in P that are isomorphic using the parameters B and C to the automorphism group. And so basically what this tells you, OK, that's how you get the groupoid in the more modern approach to this. That's how you get the groupoid. But in particular, what this tells you is that this group is internal. All right? So this implies of Q over P is P internal. But actually in the case of relatively internal types, so what Omar did is that you can show that under certain assumptions that are actually not too bad on the groupoid, if the groupoid is internal, the type is internal too. So there's no hope of having an internal groupoid witnessing this. But still, I think there should be something living in P. Right? And the last question is, of course, since this is neostability, what can we do outside the stable context using only stable embeddedness, which I think qualifies as neostability? All right. So I think that's it for today. So thank you. Are there questions?
We prove that in a stable theory, some 2-analysable types give rise to type definable groupoids, with some simplicial data attached to them, extending a well-know result linking groups to internal types. We then investigate how properties of these groupoids relate to properties of types. In particular, we expose some internality criteria.
10.5446/59339 (DOI)
I enjoyed myself. So OK, so I want to start off defining some things. And I think I could say a little more if people want me to. But I'm going to start with what I mean by Ramsey property. It's in my title. So I sort of follow a convention. Is this not typical to say that accountable structure has a Ramsey property, but I say it? So what do I mean? And instead of saying the Ramsey property, I should say it has RP. If it's age, it has a Ramsey property, I'd say in the sort of Nesha troll or Kekris Pestov to Dorchivich Sents. Yeah, sorry. Thank you. I saw a capital letter. I thought that was it. OK, so that's sort of my weird convention. And the more accepted thing is to define Ramsey property for an age. So an age k has. Is it set for finite structures of substructures? Yes, it's going to be all the finitely generated substructures. Yes. So you're in the not necessarily. Not necessarily, but always locally finite. OK. Yeah. Sure. So has the Ramsey property and the sense above. These are sort of, I'm thinking, two papers that came out around the same time, 2005. OK, and I'm going to start talking about these finite structures in this age. They're finitely generated and finite. OK, so if for all finite integers and structures in k, there is, am I going too low? Maybe, right? Go up? I think it's still fine. OK, this is still fine. OK, there is C0 in k, such that I'm going to use this arrow notation. OK. For whole k? Oh, OK, good. So that's going to be a number of colors? There's just more k. There's two k's. There's small k. Oh, yeah, that doesn't help, does it? How about R? R for colors? M for colors? Probably be C for colors, right? C and this, there is, of course. OK. Is that OK? Usually people do R, right? I'm going to take that back. I'm sorry. This is the kind of thing that ends up when you're teaching reviews very bad. You see, erases things. OK, so all right. So finitely, many colors, or you could just say two and then use induction, two colors. So choose a number of colors or just choose two. That's fine. So for any finite structures in k, there's another one, a big one. I can say what this means or write it out more if people want me to. There is a special, big one, so that no matter how you color the embedded structures isomorphic to A0 in here, using that many colors, you find a copy of this one that's homogeneous for the coloring. You have to assume that A0 is a substructure of B0. A0 is a substructure of B0? Isn't it a bit vacuous if it's not? It's part of the computation, isn't it? It's part of the computation, isn't it? So if there are no copies of A0 and B0, it would be quite easy to find a copy, all of whose subcopies are the same color. But I think it makes sense, yes, to not ask it if it's not. Yeah. Should I say more? Can you write down? OK, sure. IE. OK, so then a little bit more notation. So they use this choose notation. This is going to be A prime substructure isomorphic to the A. And the language is the language of the A. Yes, thank you. OK, so this is a way we can talk about the substructures that look a certain way. OK, so this, who's this arrow? Who do we attribute the arrow notation to? I kind of forget. Oh, there's, OK. So what does this mean? Yes? So for all, OK, I'm glad I didn't use C, because I'm going to use C. OK, I'm glad I didn't use C, because I'm going to use it now for all colorings. So that's going to be functions on the copies of A0. There is B prime, not with isomorphic in the right way. 
And there is, how about, well, I want this one color that I'm going to hit. So there's gamma. I don't know. I'm sorry. We don't usually do that, right? There's one of these R colors. Or I'm kind of forgetting. Sorry, logicians, we don't do that either. OK, I'll try that again. OK. OK, so there is a color, right, such that when you restrict them out to just the copies of A0 and B prime, you get one color. So see, does this work? This is just going to be the one color. Yeah? OK. OK, so I'm interested in when can you transfer this from one age to another? And I'm not interested. The languages could be different and all of that. And my way of understanding it is through these generalized indiscernible sequences. So the first place that I know that they're used are in Shella's classification theory. So we have these indiscernible indexed by trees and such. So definition. Oh, yeah. OK, so done with Ramsey property. Let's just say a little notation that I like to use is if I have finite sequences, this is going to mean that they have the same quantifier free type. And that's just fewer letters for me to write. OK, so now definition of the indiscernible. So fix structure I. And these will be same length tuples. I wanted to make sure that this Ramsey property do you really want to express in terms of substructures or maybe coloring of embeddings? Because in the description of extremum and abilities, these are coloring of embeddings. Yeah, so I guess today I'm only going to work in the case where there is a relation and a language that is a linear ordering on Earth. Yeah. But there are some interesting issues when you don't do that. So in a paper with Dragan, with Silevich, we looked at category theoretic Ramsey properties, basically the same idea, and it's preserved under right adjoints provided something is going on with the automorphisms. And then we need to be careful there. But here, because we have the ordering, I don't worry about it. OK. So we're going to fix same length tuples in some monster, u, then this sequence indexed by the structure i is i indexed indiscernible. And I did not change this name from what I read in classification theory. I hope that's OK. Oh, yeah. So you could put any structure in there. Usually we put an order. Can I use the same? Yeah. Yes. Yeah, I mean, you could even think of this if you like. OK. So fix these tuples. So far, this is where I'm at. But then what makes this embedding special? Indiscernible if for all finite n, I really mean that to be different from n. OK. For all length n sequences, ij from i, if those sequences look the same in i, they're going to push forward to sequences that look the same in this monster model of some other theory. So over here. OK. So far, so good? Yeah? OK. Is twiddle sub-a is in the structure a? Twiddle. Same quantifier for tuddy and i. Oh, no, no, in. Yeah, I'm not using over parameters. Or make them constants if you want parameters. OK, good. So for the rest of the talk, I'd just like to restrict to the case of i-cable, locally finite, and linearly ordered by some relation in the language. By language, I mean signature. So some relation in there. But you could consider other questions or you don't restrict it to that. I think I can. There, did it. All right. All right, so I have an example for us to think about. So let's use, let's see. So two structures. And they'll both be trees, but then with some additional structure. OK, so this is the lexicographical ordering. This is the partial ordering in the tree. 
And then this one happens to have the meat in it. So remember, I have this convention about what it means for a structure to have the Ramsey property. So this one has a Ramsey property and does not. So what happened? I took a reduct. Sometimes I can take a reduct. So in my thesis, I studied the situation where A was just a linear order and B was an order together with a random graph relation. And they both had the Ramsey property. So something different must be going on here. So I'm going to go ahead and take a look at the Ramsey property. It's going on here. In both cases, yeah. So I had an example I could work through, but I think I should just leave it there and keep going. So you can see pretty clearly what the issue is. Remember, I'll make this quick. So these little nodes, just saying a four tuple, notice who I'm not coloring in, the meat there. So then these guys look the same in terms of quantifier free time. Because you can't name the meat. So it just had three people, incomparable. Yeah, well, both A and B can play the role of I. So then, but not so with B. So the problem is you shoot these over into another structure. And it could just have two ternary relations, red and blue. And so in your initial map, F, like the F over there, you make these red and make these blue, it's going to be very difficult to find your B prime. This one? Very difficult to find this. You know, some things you can find. So you can find, if you're a little clever about it, right? So four people who are incomparable and somebody below them. And that would be a way to say, oh, well, all the triples from here are going to be red. But in general, for all the kinds of finite substructures that you can express in A, you're going to have a problem sort of basing your indiscernible on this information. So that's a little bit wishy-washy. OK, so we'll leave that. So these ion-dexed discernibles are kind of the technology that I'm using behind the scenes to get a certain result that I'll write down. So let me give you the definition of a semi-retraction. And I guess for this definition, you probably don't need any of this. But in terms of the theorem, you do. So anyway, let's say definition AB structures. Oh, you know what? OK, this will be a different definition. OK, so by a quantifier-free type preserving, or I got a better word from Annan this week, maybe respecting, map. What do you mean? So sort of over here, if these guys are looking the same in A, then they map to people that look the same in B. Other votes, do you like preserving? Do you like respecting? Preserving? So let's put a couple of these. Thank you. We'll take your vote. You were silent this time, Annan. OK, so here's the picture. I want to take two of these. F and G. So here, I'm really getting into hot water, because I'm not sure if I can use this word. But Dougold says it's OK. So A is, but if you've got a better one, please. Now is the time. Semi-retraction of B, if the picture above. There exists quantifier-free type, preserving, respecting, preserving, maps as above, but such that they compose in a special way. So F followed by G is an embedding. So this composition is providing an isomorphism of A with another copy of A inside A. And it didn't have to be that way, because F is sort of taking an orbit here, sending it to an orbit there. G does that here. Maybe they're not the same quantifier-free type. So but I'm requesting that they be the same. So now, under the assumptions on the very right, I get my Ramsey transfer. Elementary? No. 
No, there's strains. They're just like mapping one orbit onto another orbit, different languages. OK, assume A, B, accountability infinite. OK, let's keep score. OK? Look at the definition of A. Look, we've got two for respecting, and I'm with you, Alf, two for preserving. OK. Voting will close at 410, so think about it. How come 25-week-old respect for maps? Oh, the map we've got? Well, yes, it should continue. Yeah, OK. All right. Look like tiny. You want NIP now? OK, so under these assumptions, if A, semi-retraction of B, and B has the Ramsey property, then A has the Ramsey property. So it's kind of coarse, but the Ramsey property is kind of coarse, too. You can also get an if and only if. So if A is a reduct of B, in terms of it's just a restriction of the signature. The signature of A restricts this one. Then you have an if and only if here, not just the if. So there's different structures, different languages? Yes. But they've got a common language. Less than common. It's common to the language. Well, you know what? I probably shouldn't. Can I? I need each one to be ordered by a relation in the language. I don't use that the order is the same. Yeah. So what do you put a semi-retraction? OK, why? Because I was looking for something like this in the literature, and I found the Algorant Seagler paper where they talk about retractions. And it looked like it was one half of it. So but here's the problem. We would need to have two interpretations. These maps are injective, but not necessarily onto. And interpretation would be sort of going this way. So a quantifier free type in A is a union, you could say, of types in B. So a relation in A can be expressed as a disjunction of alpha-nautical categorical case relations in B. So you're sort of interpreting A and B, except you haven't gotten all of it. Or maybe you have if you're mapping onto the isomorphic copy of A. But then what's going on with the domain here? It's not definable. It's not necessarily all of B. Then it's not quite an interpretation. I wasn't sure what to do with it. And suggestions after the talk are encouraged. OK, so I think that's the language contained in quality. Yes. But then quantifier free type presented implies one one. It must be one to one. Yes, no, no, you're right. You're right. I didn't say it, but you're right. This one? Because you're going to take a quantifier free type to a quantifier free type here, maybe not the same thing. I have an example for you. Yeah. You're going to want to see this. OK. Oh, I hear more votes. OK. Good, good. Yeah, maybe that's the thing. Preserving you would think something like this. Yeah. Maybe it could change if I ask my referee very, very, very nicely. OK, so my example is for you to think about. Yeah, OK. So I actually got this idea working with Byung-Hwan and Hyun-Jin. We were looking at two specific structures and actually got a great suggestion from our anonymous referee at the time about a way that they were related. And I was thinking, that's strange. And it kind of sat with me. So what do I have here? A is going to be my convexly ordered equivalence relations. And what is the best way to do this? Instead of making it by definable, maybe we should try to do it like this. I don't know if you'll be OK with that. Let me. OK. So finite sequences on omega. This is the same tree language, but I'm going to add something else here. Length. So these were the strong trees. So what does this mean? So basically what? Oh, this will be an equivalence. 
This is going to be convexly ordered equivalence relations. But the tree structure doesn't repeat. No, but I just want to show you the embedding because it's very nice. You can think of this. Right? So look at how I'm mapping the tree into the relations. And then you can actually map the convexly ordered equivalence relations into the tree. And this was very clever. I want to thank the referee for this idea. What are these? Oh, what are the classes? So this is just going to be an equivalence relation, infinitely many infinite classes. It's countable. Right. Convex the order to mean like it has convex classes. Yes. But in the drawing, the class is about the level of the tree. Yeah. So the tree and the order has some relation. Otherwise, the tree is a bit too much. Oh, yes. No, we have this sort of reduct situation, right? Bit of a reduct because you can define the orders using this relation. So I didn't quite say it, but it's a length of sequence. Length of sequence in the tree. So if you say, I'm not longer than you, and you're not longer than me, we're at the same level. So we have the equivalence relation. This is a semi-retraction. Maybe you won't like that my root is down here, but all of my points have the same quantifier free type. Any equivalence relation? So I can't really mess up on that. Basically, I want to say if two guys have the same type in the tree, they'll have the same type in the equivalence relation. That's not so hard because of the reduct situation. But from here to here, that was sort of the cleverness that sort of having the same type here is enough to have the same type in this richer structure. And I think that's what's really going on with transferring the Ramsey property. So what's interesting about this example, if you compose G&F in the other way, you do not get an embedding, something to think about. So really, whatever name I come up here with here, it should be like asymmetric. And the other thing is what is it saying about the automorphism groups? So in the indiscernible sequences, I think Pierre was talking about this. With just the indiscernible sequences, you could take N, you could take the integers, you could take Q. So in terms of how you index the sequence. And that's just stretching compactness. So here I could take actually, for say, limits of my classes, in Nash-Retro's 2005 paper, he shows that with the ordering, if you have the Ramsey property, you will be a Frece class. So you can do that. Yeah? Oh. If you have the Ramsey property, if you're a structure with the Ramsey property and linearly ordered, then you are a ordered Frece class. Oh, just you have a linear ordering, basically. Yeah, no, I'm just saying, like, if you say, how come I can take Frece limits, I can. So. I just wanted to say something about the Ramsey property. Yes, from the Ramsey property. You color in some light. OK. Oh, yes, what's happening with the automorphism groups, right? So you have some subgroup here, because the automorphisms of the richer structure are, in turn, automorphisms of the reduct. So what is going on? Are you transferring extremely amenability from the subgroup up to the bigger group? It's not quite something I've seen, because I don't know much about it, but I've seen this happen for normal subgroups, if the factor group is extremely amenable. But this one isn't normal. I mean, think about the automorphisms in the class. Very coarse, right? You just do some order of automorphism within the class you could do. 
And then you're going to really mess yourself up. I'm going to make this the last thing that I say. So let's say G is just some kind of shift in Q. So you sort of shift things over, and then you use some sort of H, which moves things around within some siblings or whatnot. And then you shift it back. Because no guarantee that you won't sort of mix these two up, right? So you don't have that normality. Anyway, so that's why I thought it was interesting, and I went a little over. Thank you. Thank you.
In this talk we introduce a weaker form of bi-interpretability and see how it can be used to transfer the Ramsey property across classes in different first-order languages. This is a special case of a more general theorem about what we will call color-homogenizing embeddings.
10.5446/59340 (DOI)
And the meaning of that is that certain parts of this will just be used, use basic computations about strongly minimal sets that would, you know, then those methods were introduced about the same time that Saharan was writing his thesis. So, but the other part will be using products, no methods from the early 90s of the Wyshawski construction and then some variations on it, which are actually in both Udi's paper and one that I wrote about the same time, but as far as I know, haven't been used since. So that may become new all. All right, where is this thing? Okay, so I just wanted to go back and look at what when Zilber proposed his conjecture, he says, any encounterly categorical structure comes from a classical context. And I think that's pretty much clearly not true, but I'm going to find a little bit more structure on some of the exotic classes that may be, maybe there's some hope in finding more structure in these strongly minimal theories than we've thought for the last 30 years or 25 years. So I'm going to say a little bit about what I'm not talking about, namely classifying strongly minimal sets by their geometries, and then switch into two or three things about what I am talking about, which is let's look at the theories themselves and what do we know about them. And so there's two papers. I'm going to talk one with Giannale Paolini where we construct these Steiner systems, and then one that he's not involved in where it's analyzing what have we created. And I had interesting talks with Omer Mermelstein and Joel Berman is a universal algebraist at UIC, and we've been talking about various of these things. Okay, so the Zilber conjecture says that the conjecture was that the algebraic closure geometry of every model of a strongly minimal first order theory is either disintegrate, which means the lattice of subspaces of the geometry is distributive, vector space-like, lattice is modular, and then we get locally modular by interpreter with an algebraic closed field, and we're going to be discussing variance on the Ruschowski construction. And in particular, the dimension function that we give is not obviously Ruschowski's, although Mermelstein has pointed out to me that there's a way of interpreting them. But a flat geometry is one where the dimension is given by applying the inclusion-exclusion principle to the de-closed sets, and then you will get that forking is not too ample that it is CM trivial, it doesn't interpret an infinite group, and so it's not locally modular and so it's not disintegrated. And everything we're doing will be geometries, will be strongly minimal sets where the algebraic closure geometry has these properties. And basically Evans, Ferrer, Hassan, and Mermelstein have been working on things that basically say, up to factoring out by finite sets, localizing the geometry, these guys are very similar. You can tell of clothiarity, but that's about it. And I'm just, that program is not finished, oh, Mary, I think it's making wonderful progress on it. I'm just going to behave as though it's true. So I'm not going to worry about distinguishing the geometry. But what I'm interested in, the object language is the theory, this thing we're actually talking about. Not the associated geometry that you get from the strongly minimal set, but just what is the theory talking about? Well, I mean, so for example, I built an almost strongly minimal rank two geometry. It has a strongly minimal set in it. 
The geometry of that strongly minimal set will be back with what these guys were talking about, but it's just not the subject. The subject is the projective geometry. And in the cases that I'm talking about, I'm going to worry about is the Steiner triple system, or the Steiner 17 system, not the associated strongly minimal geometry. And in this case, I have a... And so in this case, I built a non-desargeant geometry that was, in the case of the Steiner, the non-desargeant geometry, but it was not the case, because I was talking about the non-desargeant geometry. But it was not the case, because I was talking about the non-desargeant geometry. You couldn't even... So you have this ternary operation, which in the standard case, or over a field, is AX plus B. In this field, you cannot write that thing as a composition of two different functions. I mean, this is stuff from the geometry people. So what Zilber was trying to say was, are there properties of this algebraic closure geometry that tell you conditions about the actual algebra or structure that you're looking at? And here's some kinds of conditions that we might find. Well, some of them are Steiner systems with 17-point lines, some of them are Steiner systems with 19-point lines. But maybe there are some of them aren't Steiner systems at all. And I'm going to spend a lot of time on what it means to be co-ordinateized. And some of these structures are co-ordinateized and some aren't. Oh, yes, I left... Chris and I were talking about this yesterday, and I'm supposed to write in non-trivial, and I forgot to mean that, because you always have binary functions definable in the language of equality, projections, and so I have to be...in order for this to make any sense, you have to have some non-triviality condition. All right, and now I want to say something about the word quasi-group, since it's not well known, and it's a structure that has inverses. If you have x star y equals z, then any two of them uniquely determine the third. Okay? And then I'm going to look at various properties that arise in finite combinatorics, and then people like Cameron and someone named Webb. I mean, there are various people that have lifted these issues to infinite geometries, and I'm doing some of the same thing using these systems. So I think that...I mean, I put down these four things just at random. If I'd asked people how they do this, everybody here would have given us a slightly different... Does commutativity go first, or does it the inverses that go first? But somehow you lose inverses, you lose commutativity, eventually you lose associativity, and then you get an alternative ring, and you can lose alternative, and I don't know, there's something like that. On the other hand, geometers have a different view. The first thing you lose is commutativity, because the theorem of Papas is stronger than the theorem of Dysarge. And then there's something called a near field, which means you've lost less distributing. And there's something called a quasi-field. Multiplication is a quasi-group with identity. And an alternative algebra, now you've lost... And then finally down here you get ternary rings, which any projective plane has a ternary ring. So there's just sort of two different ways. You see, all the way down here, inverses are always there. They're the last thing you lose from the geometric side. I mean, you don't lose them actually. And that's because multiplication is not repeated addition. Multiplication is scaling, and scaling has an inverse. 
So now I want to talk about co-ordinateizability. And this first paragraph is just a rough paraphrase of a paper by Gantt, Niren Werner, in 75, and another treatment of it in 80, about what it means to co-ordinateize Steiner systems. So, sorry, I haven't actually... A Steiner system, I should have defined already, it's just a collection of points and a collection of blocks. And in a Steiner system, two points determine a block. You can't have... If I have a block and two points in it, and I have another block that intersects it in two points, well then these are the same. So... So you have direction points. Yeah. And a block is just a certain structure. That's right. That's right. What are the actions for that direction? So I'm saying all I'm saying at this point is that all the blocks have the same size, and if two blocks share two points, then they're equal. You also have to add that every two points, there is some block there. Well, yeah, it might just be those two points. Yeah, oh no, no, every two points in fact get into a block at the same size. The same size. Yes, right. I also have to say all the blocks have size bigger than two. Well, at least two. Otherwise it's trivial. Well, otherwise it's trivial, but that doesn't keep it from me. But yeah, the interesting ones are bigger. That's right. No, that's not true. Yeah. Well, and it's a finite number, right, in these Steiner systems. That's right. General, no. But... So they just say there's a one-to-one correspondence between the class of geometries and a well-behaved class of algebras, which for them means a variety. So variety is another word that people are asking me about. A variety is just the notion for structures, for sentences, without restricting to a model. A definable set, you have a model and you don't get the subsets. If you define a class of structures, then that's a theory. And now if we do this for equations, a variety is an equational class. It's a collection of algebras that satisfy a certain set of equations. The universal algebras. The universal algebras, right. Yeah, yes, the universal algebras meaning a variety. Now, the point that Mianosz-McCoskey has been making in a couple of papers lately is that the notion of by-interpretation kind of gets lost in his case in various places in computer science, where they were using the coordination of a Dzargiian plane by a field, but forgot to notice that you had to have the by-interpretation. They only had the definability in one direction. And that kind of looseness is exactly what's going on in this universal algebra work. And it turns out to be an actual issue here. Okay, so our notion now is going to be, we're going to look at some kind of object geometry. It's coordinatizable if there's a one-to-one correspondence, definable, first-order definable between it and a first-order definable class of algebras. I won't be having varieties. I will be interested in the algebra that the variety is in, but my thing's alphan categorical. It's nothing like being all the models of some variety. I think if you just see the examples in a minute or two, it won't be. So now this is basically what I was telling you about a minute ago about what I meant by a Steiner system, but just even a little less. A linear space is a collection of points and lines. That's a two-points-determined line. That actually means two lines can intersect in only one point. And I'm going to, that's usually phrased in a two-sorted notion. I mean, you have lines and points and you're saying this. 
But there's a bi-interpretation, and everything I'm doing is going to work in a vocabulary with a single ternary relation, a hypergraph: it's set-like, the three points have to be distinct, the relation holds in any order of them, and two points determine a line. There are natural generalizations: k points determine a line, or you allow a finite number of line lengths. Omer and Assaf have done some examples along these lines. They weren't thinking of it quite this way, though in a certain sense they were, but they were emphasizing the algebraic closure geometry, not the object vocabulary, the object language. Now, if you work with two sorts, the structure is obviously not strongly minimal: you've got two sorts that are both infinite. In a one-sorted structure you certainly can't have two infinite lines, because each line is definable and two lines meet in at most one point, so one of them would be an infinite, co-infinite definable set; in fact you can't have even one infinite line, which is the argument I need to give in a minute. But you can have a bi-interpretation, and it's a two-dimensional interpretation: the lines of the two-sorted structure are coded by pairs of points of the ternary structure. So Gianluca and I showed, writing K0* for the finite linear spaces in the ternary vocabulary, which I thought I had defined on a previous slide, and K* for the infinite ones, that K*, the class of linear spaces as ternary structures, is bi-interpretable with the two-sorted linear spaces. Now, what did I say here? If you have a strongly minimal linear space where every line has at least three points, there can't be any infinite lines. This is where I started to draw. Suppose there is one infinite line and a point not on it. Connect that point to each point of the line; each of those connecting lines has at least three points, so each gives a new point off the original line, and different points of the line give different new points. So the infinite line and its complement are both infinite, and I've contradicted strong minimality. And that means there can be no strongly minimal affine or projective space, because in those there are as many lines as points, so the lines would have to be infinite. Now to the basics. This is the basic property of strongly minimal sets, what we've been talking about all week as elimination of the exists-infinity quantifier: for each formula there's an integer k. In particular, if you ask how long the line through two given points is, then either it has at most k points or it is infinite. And so there's a notion of a capital-K Steiner system, where you can have blocks of different sizes, but only finitely many different sizes.
Just from strong minimality you would get a capital-K strongly minimal system. The ones we're going to construct are actually Steiner systems, meaning there is a fixed line length; strong minimality alone gives you only finitely many lengths, and the actual construction makes things more homogeneous. So the theorem I have with Gianluca says: there is an uncountable family of strongly minimal Steiner (2,k)-systems. There's a typo on the slide, it says (2,k) in one place and (k,2) in the other, and it should be (2,k); that just means two points determine a line and every line has size k, and you can do this for every k greater than or equal to 3. And, as I think I've said before, each of these is not locally modular, it is CM-trivial, there is no infinite group definable, and, in a sense I'll come back to, the real message is that there's nothing associative going on. The strongly minimal sets will be indexed by the function mu that tells you how many good pairs there are, and T_mu is the corresponding theory. Someone asked whether "not locally modular" by itself rules out a definable group: no, that's not true without more; and the locally modular group examples that came up are aleph-one categorical but not strongly minimal, though they're almost strongly minimal, and we can talk about that later. The conditions on this slide by themselves do not imply that no infinite group is definable; it is flatness that does, and the specific T_mu's on this slide are flat. All right. So now here's the trick that Gianluca introduced. We think of the collection of finite linear systems in the vocabulary with just this one ternary relation. A line is the same thing as a maximal R-clique. L(A) is the set of lines based in A, meaning each is a line in the ambient model with at least two of its points in A. And the definition of delta for him is: delta of A is the cardinality of A minus the sum, over the lines based in A, of the cardinality of the line minus two, the surplus on the line over the dimension of a line; of course if A is the whole universe, that's just all the lines. And then the class of finite structures used for the Hrushovski construction is the class of structures satisfying "two points determine a line" which are hereditarily non-negative for this delta function. Someone asked what M is here: M is just some ambient structure and A is a substructure of it, and a line is computed in M. This is the kind of thing where, when you actually calculate, you work with relative dimensions over subsets rather than just the whole universe, and then you need to update L accordingly; I copied something from that proof that I'm not actually going to use. Okay.
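For reference, here is one plausible reading of that rank function, written for a finite linear space \(A\) considered on its own; this is my reconstruction from the verbal description (the relative version over a subset needs the "based in" bookkeeping just mentioned), and the exact form should be checked against the Baldwin-Paolini paper:

\[
\delta(A) \;=\; |A| \;-\; \sum_{\ell \in L(A)} \bigl(|\ell| - 2\bigr),
\]

where \(L(A)\) is the set of lines of \(A\), that is, the maximal \(R\)-cliques of size at least three, and the class used for the construction consists of the finite linear spaces \(A\) with \(\delta(A') \ge 0\) for every \(A' \subseteq A\).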
I would say much the same about Omer: he has a construction that he has more or less convinced me is bi-interpretable with this one, but he never actually worked out the strongly minimal case, and certainly not with the assumption that two points determine a line. He has convinced me he could do it, but it's not something he has done; it really is pretty much another approach to the same thing. Oops, that was not the slide I wanted; okay, it's close enough, I think. All right. So here is a quick look at the one thing that is genuinely novel here: the amalgamation. Here are some things I want to amalgamate. In the Hrushovski argument in the infinite rank case, before you worry about the mu's, you essentially have disjoint amalgamation. But here, suppose I have a line that comes around from one side of the amalgam to the other. Then these two points have to be on the same line, so I have to put in R holding of this point, that point, and anything that was on the line down below. That's just a small adjustment in doing the amalgamation, but it has an important consequence, and I'll say this quickly for those who have studied these things: a 0-primitive extension consisting of just a point does not have a unique base; it is based on any pair of points from the model below. That means extra steps in some places, and it saves time in others, in the technical work. Okay. What it means for A to be strong in B is the usual notion: there is nothing in between where the rank goes down; A is strong in B if anything in between has at least as big a rank as A. And then the point is: take a minimal extension and look at delta of B over A. There are three possibilities: greater than zero, equal to zero, less than zero. If it's less than zero, it's easy to see there can only be finitely many copies of B over A, because each time you copy one over, the rank goes down. If it's greater than zero, there can be infinitely many. And if it is zero, that is the whole difference between the uncollapsed class and the actual strongly minimal construction: you force the rank-zero extensions to appear only finitely many times. Now, in this case there is a particular isomorphism type of such an extension which is crucial, namely just adding a point to a line: the base A, B has dimension two, you add the extra point, and you still only have dimension two. So if I set mu of that alpha to some K, that will mean the line length is K plus 2 in the generic model. Okay, I wanted to remind you of this setup of what's going on here.
You start with all finite linear spaces, then you take the ones that are hereditarily non-negative for delta, and then you take the still smaller class of those where the mu function puts a limit on how many of these zero-dimensional extensions you can have. A good pair is the absolutely minimal case of this: first you take something minimal of dimension zero over its base, and the good pair says there's some stuff down below, and those are the points actually touched by the relation. Udi called this "minimally simply algebraic"; when I was working with cases where you weren't getting aleph-one categoricity, I wanted a notion that didn't imply algebraicity when it wasn't going to be there, so we picked up "good pair" and "primitive". And then K_mu^d, which gives the models of the theory, is this: M is in K_mu^d if, whenever N extends M and N is in K_mu, any rank-zero extension that shows up in N already has a copy in M; M is closed with respect to d. This is the other place we're going to save some time: here are two slides that we only look at if somebody asks a question where we really need to know a definition. All right. And now here is the theorem. Script U, which probably should be on this slide, is the condition on mu that says mu of B over C must be greater than or equal to delta of C, the base. This is in Udi's paper, and it's a hypothesis for all the work of Evans and Ferreira on these geometries; we're going to be looking at cases where we break it, which is why I wrote script U, because I'm going to look at different collections of possible mu's. So: if mu is in this script U, then there is a generic whose theory is strongly minimal, and depending on what you've picked for mu of alpha you get the length of the line. The way I chose to write this up was to go through Kitty's proof, because it gives the clearest explanation of where the axioms actually come from, and in doing that we had to analyze primitives a little, since slightly different things are going on; but it is pretty much the same argument as has been given before. The way I wrote these slides in the end, I'm hoping to convince people there are some interesting problems to look at. On the other hand, I can see now that I'm not going to have a paper I want to circulate for several weeks, a month, so I'm going to post the slides, which will have the problems and a pretty extensive bibliography. Okay, so the first thing, tool zero, is that we're studying theories. That's nothing new for us; it's more that it's an issue for making the applications to combinatorics, and I'll say more about that later. The other tools are: we can make K0 smaller, and I'll give some examples in a minute, in order to get special properties; or, less crudely, we can change mu.
So that there are fewer zero-dimensional extensions. And we can also get models with particular properties by expanding the vocabulary. So I'm going to give examples of results you can get using these various tools. The first: what do you need to make the structures 2-transitive? They certainly aren't to begin with; I'll show you in a minute that there are arbitrarily large structures that are 0-primitive over the empty set, and also over one-element and two-element sets, so in the general situation you're going to have a big algebraic closure of small sets. But here is a very easy argument: if you happen to know that every two-element set is strong, then the automorphism group acts 2-transitively, because there is only one type of a pair, and in the monster model any two isomorphic strong subsets are automorphic. So it's 2-transitive. And here's where I'm using the part about theories: saying that there is only one 2-type is a first-order statement, so if it's true in the monster it's true in every model, and all the models become 2-transitive. Another way to put it: in a strongly minimal theory every model is homogeneous, which again is something I'm not going to prove. We can also get n-transitivity by just requiring delta to be at least n on the relevant sets; then, whatever that n is, there will be no more n-types than there are quantifier-free n-types. Hrushovski has an example in his paper, which I've called K0-minus-1 just for this one spot: you take the ternary relation with no axioms on it other than that it's a hypergraph, and he requires delta of B to be greater than or equal to 3. That gives you immediately, for the reason I just gave, that it's 2-transitive, and that there are only two 3-types: either the three points are independent, or they lie on a triple. And in particular, by a slight argument that I think Omer had noticed long before I did, that means you have a Steiner triple system. But that construction of course kills any longer lines: by itself it says lines have length at most three. Udi set it up that way so that the geometry would be different, because closures of two-point sets are finite, so it has to be a different geometry from the ones we had to begin with; he wasn't interested in the object language geometry. Now, and I'm going to use this in a minute, the same argument shows not only that it's 2-transitive but that within a line you get 3-transitivity and so on; so even with lines of some fixed larger length, the set of points on a line is a set of indiscernibles. That's the kind of trick I was using in the paper where I showed that the non-Desarguesian plane had low Lenz-Barlotti class. Okay, new topic: coordinatization.
In general, yes, because we're going to have lots of primitives over them. Okay, now: a collection of algebras weakly coordinatizes a class of geometries if, roughly as in their definition, each algebra expands definably to a member of the class S, and the universe of each member of S is the underlying system of some, perhaps many, algebras. And then there's a theorem. One comment I wanted to make: for these quasigroups and the various associated varieties, the identities all have only two variables; that's why you don't get associativity. And they are basically the identities of a finite field or a near-field or something like that; there's a whole family, and an immense literature running from the fifties through the seventies or eighties, after which I don't think people have done much. So: a Steiner quasigroup is a groupoid such that, and "groupoid" does not mean what it meant an hour ago. The only reason I'm saying this is that if anybody goes into this literature, they never define groupoid: everybody there knows a groupoid is an algebra with one binary operation. Nobody knows that anymore, and Wikipedia doesn't even list that usage. So a Steiner quasigroup is a groupoid satisfying these equations, and for any Steiner triple system you have a real bi-interpretation: if three points are collinear, you define the product of any two of them to be the third, and there are no problems. Now a Stein quasigroup: it was Steiner for the first one and Stein for this one; Stein and Steiner are two different people who lived about a hundred years apart. Suppose you have a (2,4)-system, so four-element lines. On a line you say a times b is c and b times a is d, and then you get a certain kind of variety. But there's no particular reason to think you can define this in the Steiner system. And this is an interesting point: you've got a strongly minimal structure, so you've got an algebraic closure, and now you're going to make the definable closure bigger, but only inside the algebraic closure. What can happen there? I don't know a general answer; in the end I came up with a couple of examples. You can get one where the quasigroup is not definable, and I do that by playing with what I told you a minute ago: the structure where every model is 2-transitive and every line is a set of indiscernibles. If the line is a set of indiscernibles, there can't be any non-trivial binary function definable inside the line. Whether there's a non-trivial binary function that doesn't live inside the line, I don't know. On the other hand, by using the expansion trick, I redo the construction with not only the relation R but also a function F, or rather the graph of the function, which is going to be the Steiner quasigroup; the finite models all have to have each line satisfy this, and then you prove the amalgamation, where there's some work to be done, and you get another structure.
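The equations alluded to above are, as far as I can tell, the usual Steiner quasigroup identities; I am writing them out here as a reconstruction, together with the coordinatization rule just described:

\[
x \cdot x = x, \qquad x \cdot y = y \cdot x, \qquad x \cdot (x \cdot y) = y,
\]

and, given a Steiner triple system, one defines \(a \cdot b = c\) whenever \(\{a, b, c\}\) is a block, with \(a \cdot a = a\).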
And this time I built the quasigroup in along with the ternary relation. Here's the general idea of what's going on. If the line length is a prime power, then there's a near-field, you can find a multiplication, and you get something called a block algebra that coordinatizes each of these finite objects. You play the same game and get that the class is weakly coordinatized by block algebras: a block algebra determines a Steiner system with line length k, and in the other direction you can put a block algebra on the system, but it's not obviously definable. Presumably the kind of trick I just described will make it definable, but I don't know. There are a few other cases where the universal algebra literature tells you how to do this, but only under restrictive number-theoretic conditions on k, whereas we have these systems for every k. So is there any way to learn something about what's going on there? All right, and now really quickly: the varieties associated here are very well behaved; they are congruence permutable, regular, uniform, various things that are very nice from the standpoint of universal algebra. But there are arbitrarily large subdirectly irreducible algebras. Joel and I have looked at this, but I can't really tell: is this structure subdirectly irreducible? Does the aleph-one categorical one happen to be the one that's subdirectly irreducible? So there are general questions about these algebras, the kind universal algebraists ask first, and none of them are really clear. The congruence permutability means that these finite algebras are all direct products of little finite things, but that surely isn't true of the big one; I don't know. Okay. I think I'm just about out of time; oh, I've got four more minutes than I thought I did. So now, this is an interesting part. Infinite linear spaces are too easy to construct; what I'm doing is constructing ones with a lot more structure. And there are two sorts of things I'd like to say to answer Cameron's issue. First, if we look at theories and at the models of those theories, we're not just constructing a one-off algebra or combinatorial structure with whatever property we're trying to get: we're getting them in every power, and in a very uniform way. And then I'm giving specific examples to show that. So one notion is a cycle graph, and this now is really about triple systems. You start with a three-element line, and I'm going to build what I call an (a,b)-cycle, forgetting about the third point of the line. You take one point off the line, draw the line from a through it and pass to its third point, then draw the line from b through that and pass to its third point, and keep alternating. If at some stage this closes off, you've got a cycle. If it doesn't close off in four steps, you keep going, and maybe it closes in eight, or in twelve; whatever happens, it happens in a multiple of four.
And these cycle graphs in fact turn out to give the interesting examples of primitives. The arbitrarily large primitives over a two-element set are exactly these cycles, the k-cycles for larger and larger k, and I can fiddle with the construction a little and get ones that are primitive over the empty set or over a one-element set. Okay, so that slide just repeats what I said. All right: you're going to have arbitrarily long finite cycles, but we're in a saturated model, so there will actually be an infinite chain, and if the structure is 2-transitive, then all the infinite chains are isomorphic. Even without the infinite chains, if you have a whole bunch of finite chains over something, the 2-transitivity means the structure is the same on all of them, and so you get what Cameron and Webb call uniformity. Oh gee, I'm going to tease you with this, but I can't actually allow the time. So these are the kinds of questions. When do you get a definably coordinatized structure? Do you always have to have a quasigroup? I think I can show you don't, but just what happens here? This cycle analysis is something they've worked on for Steiner triple systems; for all I know they've done it for four-element lines too, but I didn't spend enough time on the internet. And is there any way to take this work on the infinite systems back and learn something for finite combinatorics? I'm actually dubious about that, because these things are so solidly not locally finite, but maybe there's some other approach to it. Okay, I seem to have used up my time. Thank you.
With Gianluca Paolini (in preparation), we constructed families of strongly minimal Steiner (2,k)-systems for every k >= 3. A quasigroup is a structure with a binary operation such that for each equation xy = z the values of two of the variables determine a unique value for the third. Here we show that the 2^{aleph_0} Steiner (2,3)-systems are definably coordinatized by strongly minimal Steiner quasigroups and the Steiner (2,4)-systems are definably coordinatized by strongly minimal SQS-skeins. Further, the Steiner (2,4)-systems admit Stein quasigroups but, depending on the choice of theory, may or may not admit a definable binary function and be definably coordinatized by a Stein quasigroup. We exhibit strongly minimal uniform Steiner triple systems (with respect to the associated graphs G(a,b) of Cameron and Webb) with varying numbers of finite cycles. We show how to vary the theory to obtain 2- or 3-transitivity. This work inaugurates a program of differentiating the many strongly minimal sets whose geometries of algebraically closed sets may be (locally) isomorphic to the original Hrushovski example, but with varying properties in the object language. In particular, can one organize these geometries by studying the associated algebras? This work differs from traditional work in the infinite combinatorics of Steiner systems by considering the relationship among different models of the same first order theory.
10.5446/58304 (DOI)
... dog, sorry, two dogs, three cats and two donkeys, you would be more likely to answer questions and indeed ask them. So there they are, or some of them at least. Really, I do not mind being interrupted; it seems I have to talk better than you do. So, just to give you an idea of what we do, my colleagues and I, for MOOD: I'm, let's say, the covariate person for MOOD, basically. People are meant to tell me what they want in terms of covariates for diseases, and then I provide them. We also do a little bit of other things too; we fiddle around with vector modelling and host modelling and that sort of thing, so I am also a modeller, not just a provider of data. And in this talk I'm going to try and give you some hints as to what we have to do to find the covariates we need for the project, how we find out about them, what we actually do with the covariates, what they are, and what else we have to do to support the sort of covariate data set fiddling and processing that we do on a day-to-day basis. As you know by now, MOOD deals with a load of diseases, not just one or two. There's a list of them there; what they are is not particularly relevant, apart from the fact that there are a lot of them, and that means there are quite a lot of drivers and covariates and factors influencing disease distribution and occurrence that we have to think about. It's not a single disease where we can perhaps limit ourselves to half a dozen covariates; it's a much broader job than that, which is just as well since we've got four years to play around with it, but it gives us a wide range of things to be looking at, and a lot of them we have to invent from scratch, so it's an interesting thing from our perspective. What am I going to talk about today? Why do we need covariate data? Well, you probably know that already, so I'll skip over it fairly quickly. How we find the data sets we need: it's all very well saying "it's temperature", but which data set? It's not necessarily that easy to decide. What sorts of covariate types there are, and what you have to do to make them easy to get at: the whole raison d'être of what we do is to try and make these data sets accessible to people in the project. It's not exactly dumbing them down, but modellers don't want to spend their time scratching around looking for data; they want it on a plate, to use easily in a way that fits with their analysis. That sometimes involves producing new data, and quite often it involves combining parameters, playing around with lots of different covariates to come up with something the modellers might be interested in. It also means finding out what we don't have and trying to look for it, and believe me, there is a fair amount of that. Then we have to think not just about the data sets themselves and the technical side of things, but about how we're going to get them to our users and partners.
It's not just "here they are, stick them online, get on with it"; we have to make a little more effort, particularly since there are end users outside the project who aren't that familiar with the geekier side of data set management. Okay. I think most of you probably know the basics of modelling by now, but just in case: Modelling 101. In theory it's quite a simple process, though modellers do try to make it complicated. You start with known data, the map on the left where it says "known": for a number of locations you know, in this case, incidence, or presence and absence, or something like that. Then you have covariates, the drivers you think explain it, and you take the values of those predictors, covariates, drivers, they all mean roughly the same thing, at those same locations. You then cobble together some sort of statistical relationship between your known disease data and the drivers, and you apply that relationship to all the locations you don't have disease data for, all those other places without samples. So it's essentially an extrapolation process one way or another, and most spatial modelling is based on that; there's a minimal sketch of this step a little further down. So of course you don't just need the disease data, you need the predictors to drive the models, and the term for those here is covariates, so we'll go with that for the moment. Covariate identification: I think we've been into that to some extent already. Francesca yesterday showed you how the literature search side of things works, and how long and complicated and painstaking it can be; I'd say thorough but slow, and I apologise if that sounds dismissive, I don't mean it to. We also tried various other ways of doing it. Sending around questionnaires was a complete waste of time, partly because people didn't bother to answer and weren't interested at that stage, but also because it's quite difficult for them to come up with some of the answers. Expert opinion: we've all been modelling, at least in my case, for quite a long time, so you have a pretty good idea of which drivers you use and which work best for you, and specialists can therefore come up with suggestions, but that requires some sort of validation or confirmation. And then there are the modellers or users themselves, who say: I have a particular interest in this particular driver or this particular area, please go and find me data in some nice form. And throughout this process, particularly for this project, we have to remember that it's not just for modelling geeks or technicians; we have to try and produce it for people who are perhaps less specialist in this field than we are. Key to that process is standardisation, because it's a lot easier if all your drivers and covariate data sets are in a compatible form, so they can all play together. It helps if they're compatible with your disease data sets: the conventions might be the same, the administrative areas might be the same, or the images similar in size. You need standardised outputs for these things if you're going to provide a covariate data archive for a set of users, because then everything just runs more easily.
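Here is the minimal sketch of that Modelling 101 extrapolation step promised above. Everything in it is hypothetical: made-up arrays stand in for the disease records and covariate values, and logistic regression stands in for whatever statistical model a given team actually uses.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Covariate values at the locations where the disease status is known
# (columns could be, say, temperature, rainfall and host density).
X_known = rng.normal(size=(200, 3))
y_known = (X_known[:, 0] + 0.5 * X_known[:, 1] + rng.normal(size=200) > 0).astype(int)

# Step 1: cobble together a statistical relationship between disease and drivers.
model = LogisticRegression().fit(X_known, y_known)

# Step 2: apply that relationship to the covariate values everywhere else.
X_unsampled = rng.normal(size=(10_000, 3))
risk = model.predict_proba(X_unsampled)[:, 1]

print(risk.shape)  # one predicted probability per unsampled location
```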
And if you think about it, it's not just about the way you provide the covariates, it's also about the covariates themselves. Take population, for example, an important denominator for most of the diseases, to give you prevalence and incidence and so on. If I use population from 2015 and you use population from 2018, we're going to get different answers. If I use population derived from the Gridded Population of the World, produced somewhere in America, and you use WorldPop, based in Southampton, we're going to come up with different answers. So you've somehow got to standardise your denominators, try and persuade the different users that there is a denominator data set that's most appropriate, justify that choice, and make sure you know what everyone is using. So standardisation of things like denominators is an important thing, but there are other aspects to it too, simply to make the process of modelling that much easier, and I've listed some more technical qualities here that you might consider: things that affect the actual cell values, not just whether it's GPW or WorldPop, but how you present that as a raster image. Most of the data I talk about comes as imagery rather than administrative-level tables, though it could be either. Resolution is important, because clearly if you've got a pixel five kilometres wide with a population number in it, and you then amalgamate four of those into a ten-kilometre square and average them, you're going to come up with different values; your five-kilometre data will have different values from your one-kilometre or ten-kilometre data, and the same with different admin levels: your admin-3 densities will be different from your admin-2 or admin-4 ones. So you've got to standardise the resolution. If you play with GIS, Geographic Information Systems, a lot, projections become an interesting issue, because maps are drawn in different ways, either so that a metre is indeed a metre, or in latitude and longitude, for example, and each pixel then covers a slightly different area depending on which projection you use. If you convert one projection to another, you again change the size of the grid cells, or of whatever units you measure in, or of your admin zones, and that again affects the values, so you've got to try and standardise that. And then the units: you could have population counts, or you could have density per square kilometre. So you need to think about what units you've got in order to provide a standardised data set for modellers to play with, or for users to download. When you play around with things like population numbers, if you've got different boundaries and you start aggregating them together, or indeed dividing them, you change the values, as I mentioned earlier. It's similar to the resolution issue, except that with boundaries it could be the difference between, say, parishes, or census units, or postcodes. So you need to think about that; it's no good me giving you a postcode-level data set when you're playing around with départements, because that's just going to confuse everybody.
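A tiny numpy illustration of the resolution and units point above, with toy numbers of my own: aggregating a grid of counts and a grid of per-pixel densities to a coarser resolution are different operations (sum versus mean), so mixing resolutions or units changes the values a model sees.

```python
import numpy as np

# Toy 4 x 4 grid of population counts per 5 km pixel.
pop_5km = np.array([
    [10, 20, 30, 40],
    [50, 60, 70, 80],
    [ 5, 15, 25, 35],
    [45, 55, 65, 75],
], dtype=float)

# View the grid as 2 x 2 blocks of 5 km pixels, i.e. 10 km pixels.
blocks = pop_5km.reshape(2, 2, 2, 2).swapaxes(1, 2)

counts_10km = blocks.sum(axis=(2, 3))    # counts must be summed
density_10km = blocks.mean(axis=(2, 3))  # per-pixel densities must be averaged

print(counts_10km)   # totals preserved
print(density_10km)  # same underlying data, different numbers
```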
So you need to think about what boundaries and map units you're using. And then of course there's the extent: how big is your map? Does it go all the way to Russia, or does it stop at Lyon? All of these things you have to try and standardise, and get right, if you're trying to produce a spatial data archive. Just as an example, this is a MOOD standardised extent and set of administrative polygons. You'll notice there are two versions. One has rather smaller units, the NUTS-3 regions of infamous fame. They're all different sizes, based on population actually, which is why Germany, the Netherlands and the Benelux countries have smaller ones. But if you try to map with those, the small ones can disappear, because you really can't see what's going on in them, so we also have a set of mapping polygons which are more or less equal-sized, to make things easy to see. So that's another aspect of standardisation: you have to think about whether you're doing data analysis or actually trying to display it. What sorts of covariates are we talking about? Well, the obvious ones you've heard about already: environmental, demographic, economic, agricultural and so on; there are lots of fairly standard ones. You can find them, but quite often it involves a degree of customisation; it's not just a matter of saying "right, I'll take this". The customisation might involve, as on the right there, turning wind vector data into wind speed: when you download wind data, you download horizontal and vertical vector components, each with a speed, and you have to convert that into something usable. So it might involve a bit of manipulation. But don't think it's only to do with climate and environment and all that sort of stuff, because diseases are also driven by hosts and vectors: mosquitoes for malaria, Aedes albopictus for dengue, hosts like magpies or cattle, for HPAI or West Nile or other things. So it's literally not just a matter of temperature or rainfall; it's things other than that, and maybe you have to model those too, because to turn a scatter of vector or host records into something consistent you may have to do quite a lot of playing around, as I just said. So let's look at the headlines, and I'll give you an example to show how complicated choosing these things can be. For the environmental ones, the headline variables are precipitation, land surface temperature and vegetation indices. I'm talking here largely about satellite remote sensing imagery, but not entirely; this sort of data can also come from weather stations, which is what I'm showing there. There are many, many archives of this stuff. There's something called the Climate Data Store, which holds a lot of them, but the main ones, certainly the ones I've been using, are NASA, Copernicus, which is European, and ECMWF, which is the weather-station-based climate stuff. So which do we choose? You need to think about the data before you choose. You need to think about what the parameters actually are: does it cover the right area, and is it relative humidity or vapour pressure deficit, for example? How do they deal with gaps? Because no data set is complete, so you need to know how they fill the holes. Are you interested in a time series, or just in some sort of synoptic summary?
How often are these data sets produced: daily, monthly, weekly, hourly? How soon can you get hold of the data, if you're looking for near-real-time information; can you download data from this morning, and does that matter to you? Resolution and coverage we've been into. How easy is it to get hold of? Don't underestimate that: if you're doing a lot of this, you really don't want to go for the really complex stuff if it's at all avoidable, or if you do, get somebody else like me to do it and then simplify it for you. So: how much preprocessing is required? Right then, let's have a look at a comparison. The ERA5 data is the classic weather-station-based product, from the European Centre for Medium-Range Weather Forecasts. You wouldn't believe what's on there: temperature at 73 different heights, weather variables I've never heard of, literally hundreds and hundreds of different variables, and some vegetation stuff now. It's based mostly on reported weather station data, interpolated to a native resolution of 0.25 degrees, which is about 25 kilometres, as gridded data sets. There's a huge range of parameters, and the good thing is that it can be easily downloaded; you can downscale it when you download it, to as little as one kilometre, so you can have fine-detail information. It's straightforward and it's clean, with no holes, mostly. There is some technical distrust about interpolating weather station data: the stations can be a long way apart, and when you interpolate the values between them some people say they don't believe it and want more detailed measurements, which is partly where the one-kilometre downscaling I just mentioned comes in. Another slight downside is that it can be two to three weeks before they finally validate the data: it's available very quickly, but they won't take responsibility for it for a couple of weeks. That's one option. Another is MODIS, which is a satellite sensor that has been producing climate-related data, natively at anywhere between 250 and 500 metres, for a long time, since January 2001 in fact, and a lot of people have been using it to provide temperature and vegetation data; that's its thing. It's now very old, but it's produced by NASA, it's user-ready, and I'll explain what that means in a minute. There's a long time series, there's a two-to-four-day time lag, which is quite short compared to ECMWF for example, and there are many, many derived products, all coming from the same data sets, so there's a lot of consistency. There's no interpolation, but there are gaps, because you get clouds, and they have to have clever ways of filling those gaps, usually either by splining or averaging, essentially interpolation, or by compositing imagery, so that if they combine eight days of imagery one of them will have a value, and that's the one they use: maximum value compositing, that's what it's called. So they do have ways of getting round gaps, and they're getting cleverer as time goes on, as Tom was telling us yesterday; he's one of the people doing that. There are issues with the gap filling, and there are issues at high latitudes: they don't go much above Rovaniemi, I think, or just about there.
The other thing with MODIS is that it ends next year, because they're finally retiring it. It is being replaced by VIIRS, which is another of these satellite sensors, but nobody seems to have much faith in that, for reasons I haven't really discovered. The next comparison is Sentinel imagery, the European imagery, and it's brilliant stuff, mostly. Sentinel-3 is the equivalent of the MODIS products; it's designed to provide some sort of continuity and compatibility. Slightly better resolution, but it does similar sorts of things: high-quality imagery, a two-to-four-day time lag, all the same stuff as before, and slightly better at higher latitudes. But it is not user-ready. If you want to work image by image, you can use it easily enough, because they provide nice friendly software for that. But it comes in tiles, and if you want to process it every day at a continental or global level, as we do with the MODIS data, it's just not there; it takes too long, certainly for something of our size. The archives are not yet there to use this stuff that way. There's also, of course, not a very long time series: it started at best in the mid-2010s, really being available from 2017. All of which is a great shame. We're trying to persuade ESA to be a bit more like NASA and learn the lessons NASA learned, but whether we'll manage that, I don't know. Hopefully in the future it will be the thing to go to, but not just yet. The thing about all this data is that there's a lot of it, as you've probably discovered; there can be too much, quite a lot too much. Think about it: you want to model TBE incidence, say, using temperature. Well, you've got 20 years of imagery, and even if you only look at 10-day composites, that's getting on for 900 potential images. There are 365 daily data sets in a year, there are 12 monthly data sets per year, there's 20 years of it, there are nearly 3,000 three-hourly data sets in a year, and that's per variable. You're absolutely flooded with potential data. So which do you use? Either you choose one, if you happen to know you're interested in, say, maximum temperature on the 13th of July 1983, or you do some sort of data reduction to summarise the data in some way; Tom gave us a glimpse of that yesterday, as did Tim. The obvious summaries are things like maxima, minima, ranges, cumulative values, those sorts of things. There are the bioclimatic variables, like the temperature of the warmest month, which a lot of people produce, so you can get summaries with biological relevance. The ones we use are temporal Fourier transforms, which I'll go into briefly at this point, because they're designed to deal with time series. Fortuitously, the basis of it is quite similar to what Tim was mentioning yesterday, sines and cosines anyway. Please don't ask me too much about this, because I'm not enough of a mathematician to understand it fully, but basically, if you take any signal, a series of measurements like the one at the top left, you can decompose it into a series of sine waves, like those at the bottom, and those sine waves have either an annual frequency, the white ones, or a cycle every six months, the green ones, or every four months, the red ones, and so on; you can go up to 20 or 30 harmonics to make it absolutely reproducible.
But it turns out you only really need three of them to come up with a set of outputs that you can recompose back into something very close to the original signal, and that's the red curve. So if you use just three of those sine waves, three periodicities, you decompose the signal into the ones at the bottom, and recomposing them gives you the match you see at the top, which we consider good enough. Now, because they're sine waves, you can describe them very easily; they're all of the same form, and all you need to describe each one is the mean level around which it oscillates, the amplitude, how high it goes, and the timing of its peak. So you get the timing of the peaks, the amplitude, the height of the waves, and the level around which they're based, and if you do that for a 20-year time series you get, for each variable, a set of about 14 parameters: the mean, minimum, maximum and range, and then the parameters describing the sine waves, the amplitude of the annual cycle, when that annual cycle peaks, and its mean level, and the same for the six-monthly and four-monthly cycles. So you come up with a set of essentially biologically interpretable variables that describe the seasonality and level of a 20-year time series: a synoptic summary of a long data set. And those actually turn out to be really quite good for models; we've been using them for years, we keep producing them, and they're good modelling tools; a rough sketch of this kind of summary appears a little further down. So that's just one example of data reduction; there are many others, and Tim mentioned some yesterday. Here is an example, not of the seasonality, actually, but of the level, the annual peak of precipitation, so it just tells you what the maximum level of precipitation is in different areas, and it's nice and easy to play with. So, what else? That gives you an idea of data reduction; let's look at some other sorts of variables. People, obviously, and movement; we all know about that. These are a couple of things we produced for some of the COVID work here in MOOD: old men per square kilometre, people like me, derived from the WorldPop data, just playing around with the data they give you. And then somebody this morning mentioned the Google mobility data. That gives you an idea of where people are, relative to where they were in January 2020, every day. But it's eight million records, so you need to do something to make it accessible, and that is the sort of thing we do here. It's popular too: we've had 78,000 downloads of that data set so far, which is quite surprising. There are other aspects of movement, obviously; it's a vital parameter for any sort of disease work. This is something Moritz Kraemer, a colleague in MOOD, has produced, looking at the distance moved per trip, the average trip length, based on phone records; that's the one on the right. The one on the left is actually the number of animals from infected areas coming into each of these pixels, infected with bovine tuberculosis, which is something we worked on for years, and I think there's a ten-year time series there. So it shows you the movement of infected animals. Movement is key, and it's something that's difficult to prepare, so you have to be quite clever when you go for it. And obviously, it's not just current data.
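Here is the rough sketch of that temporal Fourier summary mentioned above, run on a made-up daily temperature series. It shows the general idea only, mean level plus amplitude and peak timing of the first few annual harmonics; the exact algorithm and the output layers used for the real products will differ.

```python
import numpy as np

# Fake 20-year daily series: mean level + annual and semi-annual cycles + noise.
n_years, per_year = 20, 365
t = np.arange(n_years * per_year) / per_year
rng = np.random.default_rng(1)
series = (12
          + 8 * np.cos(2 * np.pi * (t - 0.55))
          + 2 * np.cos(4 * np.pi * (t - 0.10))
          + rng.normal(0, 1.5, t.size))

def harmonic_summary(y, samples_per_year, n_harmonics=3):
    """Mean level, amplitude and peak timing of the first few annual harmonics."""
    n = y.size
    freqs = np.fft.rfftfreq(n, d=1.0 / samples_per_year)  # cycles per year
    spec = np.fft.rfft(y - y.mean())
    out = {"mean": y.mean()}
    for h in range(1, n_harmonics + 1):
        k = int(np.argmin(np.abs(freqs - h)))  # bin nearest h cycles per year
        out[f"amplitude_{h}"] = 2 * np.abs(spec[k]) / n
        # Peak timing as a fraction of that harmonic's period (conventions vary).
        out[f"peak_{h}"] = (-np.angle(spec[k])) % (2 * np.pi) / (2 * np.pi)
    return out

print(harmonic_summary(series, per_year))
```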
We have to think about projecting it, and I'm just going to show three illustrations of projections, or rather of Fourier summaries, synoptic seasonality summaries, of projected data. This is data from WorldClim, which is one of the weather-station-based data sets, projected forward using an ensemble of 17 different global climate models, so it's not just one of them, not just the Hadley Centre model or the Australian one or whichever, it's an ensemble of 17. And you can see, if you look at the three variables, minimum temperature, relative humidity and rainfall, that for the Fourier mean there are some similarities: it tends to be red in the north, green in the middle, and maybe a bit of red, that is, high values, at the bottom, and you can envisage the links between the three variables. If you look at the annual variability, in other words what the maximum values over the year are, you start seeing differences: rainfall has different patterns from minimum temperature, yet relative humidity is somehow more similar to temperature than to rainfall, which is a surprise. And if you look at seasonality, which is when things happen, in this case when the annual peaks occur, you get even more differences. So you can come up with quite useful data-reduced synoptic summaries of projected imagery, which give you an idea of what's likely to happen in the future; it takes a while to do, but it is worth considering. The other thing we have to do is compile new data, and we have to do a lot of that. Someone comes to us and says, do you have anything on health capacity? No? Okay, will you go and find out? You would be amazed at what's not there. Try to get a health capacity map at a continental scale anywhere in the world; for Europe, we've just done that. It's hard partly because public health agencies don't like releasing this sort of data, and partly because they don't all release it in the same format. So you have to find some way of getting at the information: on the left there are the hospitals and surgeries in Europe. Now, if I gave a modeller that, their response would not be particularly favourable; they would say thank you very much, but can you go away and make it useful. The same with roads, which is another example: that's the road map you get from OpenStreetMap. There is no consistent continental road map at all; different countries produce different geographical data for their roads, at different road levels. So if you want consistency, and want to go down to, say, the French D roads or the level below that, you have to go to a continental database, which is what we've done. But again, if I gave that to a modeller, they would tell me to go away and feed them beer until they went unconscious; they're really not interested in that sort of format of data. We have to produce it in a way they can use. So it's not really about the technique of producing it, it's about imagining the variable they actually want. For that hospital data on the left, we just converted it to the number of hospitals per square kilometre in each NUTS-2 region. It's not rocket science, but it makes it much more accessible, in terms of people using it or indeed visualising it.
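A hedged geopandas sketch of that kind of conversion. The file names, column names and the equal-area EPSG code here are placeholders of mine, not the actual MOOD processing chain; the point is just: count facility points per polygon, then divide by polygon area.

```python
import geopandas as gpd

# Hypothetical inputs: facility points and NUTS-2-style polygons with a region_id column.
facilities = gpd.read_file("hospitals_points.gpkg")
regions = gpd.read_file("nuts2_polygons.gpkg")

# Use an equal-area projection (here the European LAEA) so areas come out in metres.
facilities = facilities.to_crs(epsg=3035)
regions = regions.to_crs(epsg=3035)

# Attach each point to its containing polygon and count facilities per region.
joined = gpd.sjoin(facilities, regions[["region_id", "geometry"]], predicate="within")
counts = joined.groupby("region_id").size().rename("n_facilities").reset_index()

regions = regions.merge(counts, on="region_id", how="left")
regions["n_facilities"] = regions["n_facilities"].fillna(0)
regions["area_km2"] = regions.geometry.area / 1e6
regions["facilities_per_km2"] = regions["n_facilities"] / regions["area_km2"]

print(regions[["region_id", "facilities_per_km2"]].head())
```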
And the roads are very similar: it's not just endless lines saying "here's a road". What we've done is convert it into the length of road per square kilometre, and we can do that for different categories of road, so it gives you an idea of the transport capability in each pixel. So it's a matter of converting things into something useful rather than giving people raw data, and that, actually, is my main message. It could also just be converting things into a readable form: graphs, columns of data, or formulae, as in the case on the left, where NASA gives you a formula to calculate the variable. It's for point locations, and they give you all the point locations, but that's not much use if you want to work at a continental level. So we just apply the formula on a raster. It's not difficult to do, although once you get near the poles it gets complicated, but then it's in a format people can use. And the same, in this case, for the Google mobility data I've already talked about: we convert it, I did it yesterday afternoon actually, we do it twice a week, into a map-friendly format, so the eight million records go down to something much smaller that can actually be mapped, and furthermore, instead of being one long data set, it's arranged in nice daily rows. That makes it accessible, and that's the trick; that's our whole idea, to try and make this stuff easy to use. It's not always so straightforward. Going back to the earth observation data, and bearing in mind that we want everything in more or less the same standardised form as I mentioned at the beginning, it can take a little bit of time, particularly for satellite imagery. I don't know what it is about satellite imagery, or indeed the people who produce it, but they really do make it quite complicated. The raw data from MODIS comes as HDF, which is a multi-dimensional format, meaning lots of images all packed into one file. That's great, but most people wouldn't know what HDF was if it hit them, so we have to extract it to a friendlier format, which is not difficult to do, but if you're dealing with thousands of images it's better that I do it than you. The other thing about a lot of satellite imagery is that it comes in tiles that are about 1200 kilometres across. So if you're thinking of a continental data set, 1200 kilometres doesn't do it, so you have to combine the tiles, and if you're doing it globally you have to do it for the world; I think I counted about 700 MODIS tiles. So you have to mosaic them, join them all together so that they match, and that's always a challenge. And then you window out the bit you want, again to make it easy to access: rather than me giving you 33 tiles for Europe, you get one nicely windowed file. The values are also often scaled, because that's easier for storage, so it may not be temperature but temperature scaled by or offset by a thousand or whatever it is, and you have to convert that back into something the modeller needs, because I can tell you, if you give them a scaled image and you don't tell them, they get hysterical, because their models don't work because the numbers aren't right. So you have to remember to do that.
The other thing is that most of these things are again for some veeky reason now in a really weird projection. They're called good homolosign. Please do not ask me why it's called that, but it's essentially a combination of two different ways of mapping in one file that is nothing to do with that long. So again, if you've got your data in attitude long, you don't want something coming in some weird thing because you've been trying to put them together, they won't work. It's the equal area projection property. Yes. So some analysis you want to fix those same area. But the good homolosign has two projections. It's the northern X sphere. I mean there's good technical reasons you're right, but if you're a model and you're not interested in that, you want to be able to get at something that matches your data. But how do you react to using them because we need them to have the equal area? Sure. No, I'm saying that I agree with you. There are very good technical reasons for it, but then if you're a model looking at TVN in Denmark, it's not something you're interested in. Look at all your data that will be in that long. Or maybe ETRS doesn't have that, but it's certainly not good to know something. I'm not interested in this in Mr Riddall, but they spend three years looking for a projection. They say this is a projection. Jim Tucker. Easy to play for it. They publish a paper and they assure you that you have minimum information loss and you can put it over. No, I mean, as I said, there's some very good technical reasons, but it doesn't make it any easier to access the people who are on the sessions. That's the thing. And as I said, there's lots of machine creation. So, okay, extraction, that's fine. Processing, it's fine. There's good reasons to do this stuff to try and make it easy to get at. We do quite a lot of data derivation as well. I mean, what's... And a lot of this stuff just hasn't been done. I don't know why. If you take religion humidity, for example, which is an important parameter for a lot of insect vectors, you try and get a global relative humidity impact from any way. Doesn't that? Just doesn't do it. But you can derive it from the Moly's land surface temperature really quite easily or indeed from the temperature maps provided by the weather station stuff. It's just a relatively simple formula. So, we do that and we get a nice relative humidity map that the models can use direct. That's the idea. And as I mentioned a bit earlier, the raw data for wind comes from horizontal and vertical vector with each assigned a speed. And of course, that's not going to be terribly useful either, again, if you're a modeler. So, again, it's not difficult to do. It's just a good thing to do it to make it accessible. And that's the key for all of this stuff. At least as far as I'm concerned, is getting the data out to the news. I mean, Tim's very keen, for example, on action on the ground. I'm very keen on getting this stuff, which most people can't get out to people who can then get action on the ground. That's the key. So, vectors. Sorry, I didn't mean to have two different sets of vectors on two different slides, my apologies. So, as I mentioned, it's not just about climates and weather and those sorts of things. It's also about the vectors that carry the disease and the hosts that are involved in the transmission chain. And this, just a couple of examples we've got here, one for a tick and one for a mosquito, they usually need modeling. 
You don't often, as we were showing yesterday by Tom's scuff for our pictures, you generally get a load of spots which are terribly informative very often. And it's used in lots of gaps and holes and it's useful to come up with something, again, that gives you a continental perspective. You need to be careful. This is a lesson in mistakes in my case. We were asked for a mid model early on in a project and we produced the one on the top left there using the red data mostly from France on the right there, which is the one. We had four iterations of that model, gradually adding data from different areas. The initial first in the UK, Denmark, then from Spain and Scandinavia and Italy and then from some field work in the east. And those four different tranchees of data made an enormous difference to the models, as you can see there. So you need to be a little bit careful when you start trying to produce these data for inclusion into, let's say, transmission models, is that they're reasonably reliable. And I can tell you that first one wasn't. The last two are beginning to stabilise if you like, they're beginning to converge and so we have some faith in them. But you need to be very careful how far you extrapolate. Suitability. Well, that's another thing that we produce as a combination, as a derivation from lots of other variables. And why would we do that? Well, there are two reasons for doing it. And one of them is to use as a mask over a model, for example, so you would only show a model where you consider the land to be suitable for that animal. Or as a way of saying, I'm pretty sure that the animal is absent from this and so for modelling purposes you can generate absences in the areas which are unsuitable. Unlike many, I'm afraid I'm not a great believer in pseudo absences, which are largely based on geostatistical methodology. I prefer to come up with something that says, I know, it's a combination process you've built up, I know that a tick doesn't like the top of the alx and a lake and marshes and temperatures over 40 and whatever else. You can come up with a habitat suitability, combining those parameters. And that will give you something that's actually for a mask detail. The green bits they like and the white bits they don't. And so you can stick absences in the white bits and you can use the green bits to mask the outcomes. It's quite a useful thing to produce. Here is just an example, which means that you can actually combine polygon and point data. I don't think I'll go into that actually. It is worth considering though when you're talking about vectors or hosts, whether you do abundance or presence and absence. Presence and absence is easier though you need to generate the absences, which can be complicated. Abundance is easier modelling because you need to be less focused on the absences, but it's a matter of whether the abundance actually means that it doesn't actually matter how many mosquitoes there are. And that's an argument for another time. I'm very happy to consume as many beers as you'll buy me to discuss that. Masking makes a big difference though. There's an unmask model on the left and the masked one on the right. So you can see that unmask model was for a crack model basically, but it's for a host that likes only wetlands. So you have to only show them as presence in the wetlands, which is the bits you can see that aren't light green on the right. So this process can be quite important. Let me give you a little more, a slightly more detailed example. 
Here's a basic model of a tick in Scotland. Green means unlikely, yellow means maybe, and it says orange. I don't mean orange at all. Whatever that other colour is at the bottom, or probably. Sorry about that. Anyway, there you go. Three categories. That's the basic model. Now you know that the tick does well in certain habitats, don't like other habitats, and so you can make a mask and you can overlay that, which will then give you the light green, which we don't think the tick lives in, so you get rid of all that and just leave the brown bits. Luckily in our models, the brown bits didn't overlap with the light green bits, so we'll have a little faith in the models to start off, but that allows you to refine your model output with a mask that gives you more accuracy, hopefully, or more validity anyway. And then you could add other things. In this case, a temperature mask. We know that ticks only happen above seven degrees, and in March you can therefore produce a map that blocks out the ticks in anywhere that's colder than seven degrees. And so you can end up modifying your model outputs with the covariates themselves in order to refine your outputs. So that one is the combination of A, the model, B, the habitat mask, and C, the temperature mask. What you can then do is come up, I was talking about this at lunch, this is a cumulative temperature mask the cattle, I think, for last year. And every 10 days it shows you the tick model underneath and the blue bits are where it's either too hot or too cold. And so you can see it evolving during the year. And then here we are, it gets too hot in the south, July, the hot bits get expanded a bit, and then come August and September it reverses. And so you can start modifying your outputs with covariates, with dynamic covariates, in order to make more sense of them. Hosts. Pretty much the same really, it's the same sort of process. There's lots of hosts involved in the mood projects. We try to do as much as we can with them, but the data on hosts is often more sparse even than, or maybe you can't get hold of it. To give you, we're quite good with the big mammals like deer or cattle. There's global livestock maps that we've been producing for found for 10 years now. So we're quite happy with the livestock stuff, and maybe some of the larger mammals. But do remember, I mean I've been involved in the global livestock in the world, printed livestock in the world since 2006, I think, or 2007. Anyway, a long time. And they, all of these sorts of maps have howlers in them. They all got mistakes. Do not take a pixel and say that is the value. So if you look at this one, if you look at Lake Victoria there, you'll see down the right left hand side, you'll see a brown line, which suggests very high density. It's not, it's because it's an age. And the satellite imagery that produces those models have anomalous values along the coasts. So it's half the water and half on the land, and so you're going to get funny answers. So bear in mind that never, ever look at these things too closely. I mean, people always ask me, well, how much, you know, how big an area should you be dealing with? I reckon 30 pixels square. I mean, something that's all, you know, don't ever take the individual pixel. That's one of the reasons for doing administrative zone stuff, Francesca. I'll talk to you later. Now, we're having trouble at the moment with birds. Birds are important for a number of things, for HPI, for example, for Western alvaro. 
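Going back a step to the habitat-mask and temperature-mask refinement just walked through, here is a minimal sketch of the arithmetic involved. All three grids are synthetic stand-ins on a common grid; the 7 degree threshold follows the tick example above.

```python
# Hedged sketch: keep model output only where the habitat mask allows it AND it is warm enough.
import numpy as np

rng = np.random.default_rng(0)
model = rng.random((100, 100))                                # basic model output, 0-1 suitability
habitat_ok = rng.integers(0, 2, (100, 100)).astype(bool)      # habitat mask (True = suitable)
march_temp = rng.uniform(-5, 15, (100, 100))                  # temperature covariate for March

masked = np.where(habitat_ok, model, np.nan)                  # drop unsuitable habitat
masked = np.where(march_temp >= 7.0, masked, np.nan)          # drop cells below the 7 C threshold

print("Cells retained after masking:", np.count_nonzero(~np.isnan(masked)))
```

The same covariate masks can be re-applied for every 10-day period to get the kind of dynamic, seasonally varying output described above.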
And as for humans, movement is particularly important. They migrate as you want. You try and get hold of bird migration data that you can use in a model easily. You get arrows to swallow flies from Africa to Europe along this line. We're, hey, we're really pleased about that. You try and incorporate that into a model. You can't easily because most models are much more grid based or point location based and translating vectors into this sort of thing. It's very difficult, but there are. So we have to try and find workarounds, if you like. There are various ways we can do it. We can look at what the animals are doing, what the birds are doing. So the top left there, they look at whether they're breeding or whether they're migrating or resident or whatever. The bottom one is they're actually weekly distribution records or distribution. Yes, distribution. Open, sorry, citizens and sons mostly. And so that would be marvelous. You can either watch it develop over the weeks or you could just look at the difference between January and June and say, right, if there's a big difference between January and June, we know there's a big impact. Cool. You try and get the data. It's very difficult. It's ironing negotiations that are currently going on for two and a half years to get density data, weekly density data for 11 species. But people are funny. Sorry. The other thing about it is how do you know if you've got 11 species of host or vector, how do you deal with them? Do you really take them as separate variables? Or do you try and say, well, actually, I'm just interested in vectors or hosts as a thing? You know, I mean, it's only really worth taking them as separate variables if you think they do different things, but if you think they mostly do the same thing, why not try and combine them? So, you know, you've got 10 wildfowl species involved in Aven Influenza. Are they all doing functionally much the same? Well, okay, add them together in some way. And that's what we're playing with at the moment. We're trying to see how to combine multiple species to come up with, if you like, indicators for multiple species preferences. And you can either weight them or rank them. Or what we're playing with at the moment is that's a simple one of just species numbers for bird hosts, wildfowl mostly. So it gives it, I mean, it's quite a lot there to look at. It might be important, but that's down to the models to have a look at, but it's a first style to combine. You might be able to do weighting. So, you know, different species do perhaps do slightly different things. And this is from Francesca, who gave us a list of mammal hosts mostly, I think. And they all do slightly different things. They're either reservoirs or main vector amplifiers, secondary vectors, blah blah blah. You can weight them accordingly and then add all those together. You get species number at the bottom left, then the weighted species number on the right. And then you can model that if you want, which is what I did on the further right, sorry, on the right, from the weighted number in the middle. And you know, you could play around with that sort of thing if you wish. It's very experimental at this point. We're trying to see whether it makes any sense. It may not, we don't know. But you certainly won't find out if you don't try. And it's, you know, it takes the afternoon to do this. It's really not very long to do. Don't you ask them to do 20 of them? The other thing we're trying is using a rather more geospatial. 
Pretending bird distributions are land use categories, essentially. And land use has, it's an approach pioneered by a tick expert in Spain called Gusting. And he's been doing a lot of this sort of stuff. So we're trying to copy him. And so I thought we'd take the West Alvarys hosts of which there were also 11, which are the ones at the bottom, and do some spatial classifications, unsupervised classification. They don't really need to know what that is. But all it does is it tries to, essentially, divide an area or a set of different layers into spatial classes, into areas. So it takes those 11 abundance figures at the bottom to abundance of different birds and says, can I define five or six regions that give you different characteristics of combinations of those substances? Now, I mean, I did this in the play of the way. So I've no idea if it makes any sense. But if you look at the map of the disease on the left, one begins to see some nice overlaps. So if you look at the black bits on my classifications there, you can see they start to overlap where the diseases are, both in the Balkans and in Spain. So maybe we're on the right track. And there are lots and lots and lots of different classification methods. And I'm trying at the moment to do it without pre-defining what I think the regions are, which is called unsupervised classification. We'll see if it works. It may not, but it may. But it gives you an idea of how you can combine these sorts of things to produce, again, model-friendly outputs because the modellers can use these classifications to give us, to zone their analysis, to do different models in different places. Because one of the failings in a lot of spatial models particularly, is that they do one for the whole area. And if you can do it zonal, then it'll improve your accuracy a bit of that. I'm not going to go that because I'm running out of time. There's lots of covariates we don't have. We were talking mostly socioeconomic, actually. And excitingly, one of the things in the Mood Project that I was really excited about, if it happened, is that we were thinking of trying to use text mining to find covariates that we wouldn't otherwise be able to find. Which might be nice. We don't know if it'll work, but it might. But it's worth a try. So we're going to try and do that. But there are some issues, obviously. Socioeconomic data is very difficult to get. And if it's socially economic, very often it's categorical or really collected at household level, and it's really difficult to try and convert into some sort of model friendly output on a geographic basis. And there are also things like rats. How the hell do you model rats? There are no maximum rats out there. There are many bodies of account of that much. So if you try and produce a continental model for rats for leptis beroces, you've got to think really cleverly. And that's one of the great joys of this job, I find, is that you've got to think of inventive ways of cheating, essentially, to come up with proxies. Let me give you an example. This is a trick from our aerial survey days in Nigeria, flying around in low aircraft, counting things. They wanted to know how many farm ducks there were. And you cannot count ducks from a plane. I contend that you can't. So what we can count is houses. And we're quite good at counting houses. And so we go around counting houses. Yay. 
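Before moving on, here is a hedged sketch of the unsupervised classification described earlier in this passage: stack the host abundance layers, treat each pixel as a point in "species space", and let k-means carve the map into a handful of zones. The data here are synthetic; the number of clusters is an arbitrary choice.

```python
# Hedged sketch: k-means zoning of stacked host-abundance layers.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
n_species, height, width = 11, 150, 200
abundance = rng.random((n_species, height, width))     # 11 toy abundance surfaces on one grid

pixels = abundance.reshape(n_species, -1).T            # reshape to (pixels, species)

kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(pixels)
zones = kmeans.labels_.reshape(height, width)          # one zone label per pixel

print("zone sizes (pixels):", np.bincount(zones.ravel()))
```

The resulting zone map can then be handed to modellers as a single categorical covariate, or used to split an analysis into regional models as suggested above.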
We send some poor unfortunate person also around lots of specimen houses in a Toyota and do 50,000 household surveys saying how many ducks do you got in your house. And you end up with a number of ducks per household. And you can do the same for anything. So we did it for beehives and rabbits and chickens and all sorts of things. So combining two different ways of doing it, we're using house rooftops actually as a proxy. And so once we've got the ratios of rooftops to, in this case, ducks, we've got our answer. So, you know, cheat is the answer. Increasingly important, I started by the name of the time, the name of the time. Increasingly important is what we have to do to curate our datasets. The things are changing rapidly. We have to be much, much more careful about reuse and licensing and ethics. And I think that's as much as I will be able to say about that without getting irritated. So we'll go on beyond that. But it is increasingly something you have to be careful of. If you print maps that show people to be disadvantaged in some way, you're in deep trouble. So be careful. Put a caveat, not me, Guff, at the bottom. Then you'll be right. Just finally, I think it's important to reiterate that it's not just for the models, at least this project is doing. It's for anyone. They don't have to know anything about Cobraire. They just need to have the data. They don't have to know anything about spatial analysis or remote sensing or satellite imagery or weird ways of combining data to produce odd outputs. They just want what they want. The surveyors might want something entirely different to the fins. Or they say, just tell us what you think, God say. Don't fiddle about. Tell us what you think is important and give it to us. Thank you. So we have to slightly change the way we do things. To not, to maybe cut a few corners, maybe do proxies, but make it simple, make it easy to use, make it standardised so that they don't find themselves getting different answers by the state. And so we produce data packs. We produce covariate archives and we will eventually use the produced platforms that allow people to extract and visualize. The paradigm is changing. We, as I said, we need to change the way we're doing things. We need to, essentially, there's so much new tech out there. That's what this workshop is about. But don't lose sight of the fact that 99.9% of the people haven't got the first idea what that tech means. And whatever it produces has to be out on the ground usable by people who do not understand these complicated things. And that's where we need to think about. And it's not just users that might be interested in public health. I mean, if there's one lesson that COVID has taught us in the last six months, I'm sorry, two years, it's everybody now knows what an arm number is. Well, that wasn't the case two years ago. Everybody knows what incidence and prevalence is. People are being educated whether we like it or not. And that includes politicians and funders. So we have to be able to provide this information in ways that they understand and that it doesn't confuse them. Finally, this is just a plea, really. Even now, as I illustrated a bit earlier on with the density data, the seasonal data, even now, open data is not a universal thing. I mean, we all think of it as, if it's on my machine, you're welcome to it. Not so. We need to push this open data stuff, all of us. 
And if that means leading by example, by making our own data easy to get hold of in a format that people can use, then that's what we should be doing. We need to promote this open data sharing, as much as we can, in a way that people can use. Thank you. We could either have questions now, I've been told, or move on to Cedric, who's going to do some rather more clever stuff on how to use covariates in modelling. Any thoughts? Do you want a break? I think it's better to have questions now. Okay. Questions? And then we'll have a half-an-hour break. Okay, fine. Question? No questions? Jonny, go. You're very welcome. When working on a new tool, we rely on some data that is published in papers — for example, a paper that says, ah, here is a first estimate of density... [remainder of this question and the reply inaudible]. I'm wondering — you were saying that certain databases are difficult to use if you're not really technical. So how would you see an alternative way to offer datasets that are easier to use by the public health people? Yeah, I mean, it's a good question. My gut feeling would be to offer things that are sort of consensus datasets, and if they need more detail we can always get to that point. So start off relatively simply. The most important thing is exactly the question Eleanor asked: do people believe it? If I give you, let's say, a whole lot of complicated temperature data, I think they may not believe it, in which case I would say it has to be something that's believable and comprehensible, so that they understand what it actually means. That's my view. It's like the disease data with politicians — no, sorry, not the data, the mitigation procedures with politicians. They will not recommend a mitigation procedure that they don't understand, which means they may well have to recommend something that is not the best one. But if they can understand it, then they can defend it, and that's the one that gets used. And that's been proven in Britain, in France, in Belgium, in all sorts of cases. And it's very similar with analysis: I think the point is that, at least to begin with, you've got to provide people with stuff they're familiar with. Who do you see should then govern this — is it a project?
Yeah, I mean, let's take an example. There's a difference, as I mentioned earlier on, between NASA and ESA in the provision of satellite data. NASA has taken the approach for years now — and admittedly it took them a good fifteen years to get there — of saying: we want to produce data in the most accessible way possible. Get rid of all the difficult processing, get rid of all the georegistration, get rid of the clouds, get rid of the dust. Make sure that what you hand over is something people can actually use. The alternative approach is essentially: we are wonderful, here's the data, here's the software, and that's it — and then you have to do the processing, which means that you, and you, and you, and you all have to do it. So it's a waste of repeated effort, and you might get it right, but I wouldn't bet on it. So to me that's the choice: you try and make it simple. And should data providers — do they have a responsibility to do that? If you take the climate data, for example — the Climate Data Store, which is the big repository for all this sort of thing — they have done their level best to make it easy. They've got good metadata, they've catalogued it right, they're very clear about what it means, and they've done the licensing stuff for you. So my feeling is that the answer is to come up with more standardised information that's easier to use. But that approach needs to spread. And to a degree that's what MOOD is: we will produce something that's relevant to our diseases for a number of different users. I was wondering — from the beginning you mentioned that for some variables we have several alternative sources of information, with different characteristics each... [remainder of this question and the reply inaudible].
I think that I, as a modeller, sometimes wanted to use this kind of data, and I was flooded by all the possibilities, all the technical details that I had to take into account, the tiles and the transformations. Sometimes I just don't have the time, and maybe not the skills or the knowledge. I mean, I've had people come into my office in tears saying we've been trying to get this data together. Do you want a USB stick? Here it is. Then they... Nothing else? Hey Cedric, how are you? I'm actually going to get myself a glass of water. Do you want some? No, I don't. So I'll try to speed up a little bit so we don't eat too much into the coffee break. So Willy really just talked about how you can make imagery and covariates — how you can get them, how you can process them. I will talk a bit about, and give some practical examples of, what you then get into once you have these covariates, and where you would use a certain type of covariate and where you would use another type of covariate, for example. I will mostly be talking about spatial modelling because it's mostly what we do in MOOD. You can do the same type of approach with mathematical modelling, and I will give some examples there — maybe not today, in the interest of time, but you can ask in the coffee break. So I'll just skip this. This is, let's say, the most basic way: just taking the covariate data straight. For example, you use weekly or daily data and you correlate that with, in this case, albopictus data. This was based on a multi-year dataset where we have specific collection points and dates, so we know each record corresponds to a particular date of the year, and so we can make activity maps and seasonality maps — in this case for albopictus over Europe, or at least over the extent of that dataset. So you just straight off correlate your data with your covariates. This is quite easy, quite straightforward. It still takes some calculation time, but it's not massive — it's not going to take you weeks on a supercomputer. Another thing that's pretty basic to do, and something that Willy already talked about earlier, is when you create pseudo-absences: you can correlate your different covariates with your presences. So you just do an extraction at your presences, which are point data — precipitation, NDVI, temperature, whatever — and you build that into an environmental-space model in which you can then do cluster analysis and the like, so you know that, in environmental space, this is what my presences look like, and I can put my pseudo-absences outside this environmental space. It's a bit the same as what Willy explained earlier, just going one step further. And that's still really easy to do. If you've done that, then you can go on to actual spatial modelling, because now you have the presences and absences and you can start building your models. One thing to keep in mind is that these are three models built with the exact same covariate set and the exact same input data, but depending on which model you choose, it can give quite significantly different results. So it's something to keep in mind when you start modelling: when you select your covariates and pick your input data, it's not because you did the selection that it stops there.
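A minimal sketch of the environmental-space step just described: extract covariate values at the presence points, define a crude envelope (here simple percentiles, purely as an assumption; hull- or cluster-based envelopes work too), and draw pseudo-absences from cells falling outside it. All grids and points are synthetic.

```python
# Hedged sketch: pseudo-absences drawn from outside an environmental envelope of the presences.
import numpy as np

rng = np.random.default_rng(3)
height, width = 100, 100
temp = rng.uniform(0, 30, (height, width))     # toy covariate grids
ndvi = rng.uniform(0, 1, (height, width))

pres = rng.integers(0, height, (200, 2))       # toy presence cells (row, col)
pres_temp = temp[pres[:, 0], pres[:, 1]]
pres_ndvi = ndvi[pres[:, 0], pres[:, 1]]

# Envelope = 5th-95th percentile of each covariate at the presences (an assumed rule).
t_lo, t_hi = np.percentile(pres_temp, [5, 95])
n_lo, n_hi = np.percentile(pres_ndvi, [5, 95])

outside = ~((temp >= t_lo) & (temp <= t_hi) & (ndvi >= n_lo) & (ndvi <= n_hi))
rows, cols = np.where(outside)
idx = rng.choice(len(rows), size=min(200, len(rows)), replace=False)
pseudo_absences = np.column_stack([rows[idx], cols[idx]])
print("pseudo-absences drawn:", len(pseudo_absences))
```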
The selection of your model is a very important something and it's also correlated or determined by your input data. For example, this is in Finland project we're actually doing where Tim is also involved in. As the input data is admin or polygon based so it's going to be the output. But if you have good covariate data, you don't have to stop at the polygon data. You can then run additional modeling to go from your admin based output to actually a grid based output. The most important form is that you break up your modeling process in different aspects that you first to your data extraction based on your admin levels and then use grid based grid based covariates to actually then build your model so you take your training and your prediction separate. But you would normally go a bit further than that and then use some techniques that we already explained earlier, and do some masking exit in your training of your model. The nice thing about if you drink a model like this is the example of something that we did back well I didn't, I was still very young at the time, but really was involved together with some other people here in 2007. So when the time was critical, I did it on the pictures was only found in mostly Italy at the time. So we built a prediction model for the rest of Europe for easy DC, and then you now so 16 years later, we can let go later back to the current distribution and actually see that the models back then, even though it was only with input data from Italy that we actually do mean something in 2022. Something that's that's maybe something that comes up again, something that comes up in input quite often is you can build a larger model and then refine it afterwards with finer scale input data, something that we do for example here, where we for first a couple of clinical trials we first started off with a global model. And then for specific locations where we have fine scale data, we can then refine the model that we built with the larger data sets to a finer output. And on the left is the global model of Brazil. Yeah, so the left is just the final with the input. Yeah, so with local data provided by in this case Brazilian public health to us. So the left is just a clip we are I made from my global model, and then used the information and the model we learned from global model, but then use finer scale data to refine the model specifically in this case for Brazil but we have done it in other Latin American countries and other Latin American countries in the Southeast Asia. So it's, it's mainly you get much more nuanced and finest. It's not a finer resolution because the resolution of the input data is the same, but you get a much, your output is much more refined, because you, you learn the big patterns from from your global model, but then you can learn finer scale aspects from your local data. So, if you do a machine learning ensemble with the two to then get a more refined output, the more nuanced output. The local model then is a training set for improving the global model. That's in theory possible we didn't do that in this case, because the local data we got was such a high resolution data that, and it was so specific for a fair few countries that we didn't bother to do it in this case because we had no fine scale data from Africa, for example. So, there is a chance if you do that with any only specific countries you would skew your global model. 
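A hedged sketch of the "train on admin units, predict on the grid" split described at the start of this passage: the response is only known per polygon, so the model is fitted on polygon-aggregated covariates and then applied to every grid cell. Everything below is synthetic data; the random forest is just one reasonable model choice, not the one used in the project.

```python
# Hedged sketch: two-stage modelling -- admin-level training, grid-level prediction.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)

# Stage 1: one row per admin unit -- mean covariates plus the observed incidence.
n_units = 80
admin_covs = rng.random((n_units, 4))     # e.g. temperature, rainfall, NDVI, host density
admin_incidence = admin_covs @ np.array([2.0, -1.0, 0.5, 1.5]) + rng.normal(0, 0.1, n_units)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(admin_covs, admin_incidence)

# Stage 2: the same four covariates, but now one value per grid cell.
height, width = 120, 150
grid_covs = rng.random((height * width, 4))
prediction_grid = model.predict(grid_covs).reshape(height, width)
print("grid prediction range:", prediction_grid.min(), prediction_grid.max())
```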
So it's an analysis we could have run and then then doing an else to see if it would skew your model yes or no but to be honest, the funders were not interested in it, we were quite, it's quite a lot of work so you know the work can be adapted for the mood. For example, we have partners in the Indian Ocean. So, yeah, the separation of the game. Lairin yw is something different. So Lairin yw is an island. If you look at island specific they they have a different, especially not talking specifically for dengue. They have a different, a different pattern. So you can do is suitability mapping, but the problem on an island is something we see in Guadalupe for example another French, French islands overseas territory in the Caribbean is the spread of dengue within the island is totally different than on in continental countries, because it's a small scale operation. We can do a mobility matrix for Lairin yw'r two, I think I even have it somewhere. But for Guadalupe for example it's such a small island people literally live on one side of the island and go work on the other side of the island. So, you have no predictability in the spread of dengue on the island, because it's such a, it just goes everywhere there's no like France for example, we know if there's an introduction in Marseille or in East like we had every year since. Quite a while. So we know how that would propagate in France and how that would spread within France. We know there are specific movement patterns within France, especially from the south to the north. So that makes it predictable. In Guadalupe and I expect in Lairin yw'r the same. That will not be possible. So it's about 100 more islands. We will constitute the response to your French territory. Then you can, but then you can do something different and then you're in the realm of introduction modeling and then you can do effective note analysis. So, you know, if you look at different islands as notes, and you know if one island gets affected, you can then try to predict like how that infection would spread between the different islands. I think that's not that difficult to do, but you need quite detailed transportation data, which France normally has available the French School Institute. They keep basically everything. The problem with the French School Institute data is it's a real mess to clean up. It's a real mess in theory that's possible. That's because they're more, they don't have that fine scale range. So the global model looks at the big pattern so big temperature ranges, big precipitation ranges. They have this built with different data so a global model you typically built with outbreak data and not with case data, while a fine scale refinement you usually do with case data instead of outbreak data. It looks a bit at the different aspects and the different dynamics of the disease. I think in mind when you do something like a local refinement. This is a model built by Willie for all the pictures I presume in Europe, which was then further refined to its metal alert data from Spain which you might know this mosquito alert app. This is a science data. It's then further refined and then you really did an analysis to see like where it is improved. And you really see in Spain, that's much better correlates with what we actually know of distribution of albopictus in Spain. And with that is it improved Spain a lot. It made France worse. So that's what I wanted to say earlier. Locally if I'm back into a bigger model can skew your model. 
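As a very rough illustration of the global-plus-local refinement discussed above, the sketch below blends a coarse "global pattern" surface with a locally fitted one. The equal 0.5/0.5 weights are an assumption purely for illustration; in practice the combination would be tuned (for example against held-out local records), and, as noted above, feeding local data back into the global model is a separate decision.

```python
# Hedged sketch: weighted ensemble of a global and a locally refined prediction surface.
import numpy as np

rng = np.random.default_rng(11)
shape = (100, 100)
global_pred = rng.random(shape)   # stand-in for the global model's suitability surface
local_pred = rng.random(shape)    # stand-in for the locally refined model

w_global, w_local = 0.5, 0.5      # assumed weights, to be tuned in practice
ensemble = w_global * global_pred + w_local * local_pred
print("ensemble mean suitability:", ensemble.mean())
```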
So it is something to keep in mind and something you need to look in when you do something like that. Just briefly because we're really running out of time now. We talk a lot about satellite imagery, raster imagery, covariate data doesn't have to be satellite imagery. It can be basically any anything that can be explanatory value. For example, here we look at it was something done by more in scramble who some of you might know is also calling in mood. Look at known distribution areas of albopictus in Spain and how it would spread. And so here we look at model without commuting data. But then if fine scale commuting data for Spain was obtained, and you see how much of a difference such covariate can influence your final model, even though it is in this case not a satellite derived raster image, for example. So this is in this case actually a probably a matrix I assume, looking like movement between municipalities or with admin tree units probably in this case. Very briefly and I will wrap up. We talked about a lot of data and some questions about data access and how to select the right data, and then something that we've been talking about the coffee break early is also do you need to download everything to be able to use it. Tom already talked about it yesterday that now these days you don't actually need to download everything anymore and process it yourself which is a huge improvement compared to previously. So just briefly mentioned some of the big ones. So for modus fears. Google retention is quite good. It's easy to use most people know it. So a bit harder to use in European projects as they push for European technology. There is scope, something that specifically built for Sentinel data, where they are also reprocessing some data in a more user friendly higher level like a level two level treated type of data. So this advantage is much more like time so it takes longer to become available. The biggest advantage of a platform like there's copious. It has their own Jupiter notebook upload system and there is a supercomputer from the Flemish supercomputer center behind it. And actually upload your own Jupiter notebooks, do the calculations on the VSC and just download the result. So you don't actually don't need to first download Sentinel images combine everything run your model, which actually quite nice and says significant amount of time like it's for people that don't do it regularly. The amount of time we lose by simply just downloading satellite images incredible. It's a long process. We already mentioned several times the CDS the climate data store from ECMWF is also putting a lot of effort now into providing actual in dashes directly. So you don't actually need to do the processing yourself anymore so it's standardized something we did for example for them was just a very simple temperature shoot ability of albopictus. In Europe, which is now going to be expanded to the rest of the world if I'm not mistaken, or at least a large part of it, but we developed similar things for them for things for something that is unlikely to be upscaled but at least for certain parts it will be this available. And then of course, really, really old when she mentioned that we will provide a mood platform by the end of mood, where we are. We'll have a specific module on data and covered access and where we will also provide download capabilities in case you wanted to get a bit some event based surveillance and some disease risk specific outputs. So, just really briefly. 
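A minimal sketch of the kind of direct retrieval from the Climate Data Store mentioned above, using the cdsapi client. It assumes a (free) CDS account and a configured ~/.cdsapirc file; the dataset name and request keys follow the published ERA5 examples but should be checked against the current catalogue entry before use.

```python
# Hedged sketch: pull a ready-made indicator (2 m temperature) from the Climate Data Store.
import cdsapi

c = cdsapi.Client()
c.retrieve(
    "reanalysis-era5-single-levels",
    {
        "product_type": "reanalysis",
        "variable": "2m_temperature",
        "year": "2022",
        "month": "07",
        "day": "01",
        "time": "12:00",
        "format": "netcdf",
    },
    "era5_t2m_20220701.nc",
)
```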
So how to select the right cover basically already covered this with with one of the questions. So there are very several different options. Just keep in mind that temperature, the one temperature is not the same as the other temperature. So modeling is very interactive process. So it is sometimes worthwhile to also just try different sets of temperature for example see how that affects your modeling data reduction techniques that really also mentioned and data ensemble techniques that are now coming up with this something quite promising avenue for that and then within mood we will try to also provide like this is data sets you trust one hand just gonna skip this for now. One hand specific for diseases by the specific disease work groups that we have within in mood. So evening influenza TV call with West Nile, Amar, Leptospirosis and the Flavi viruses. Of course also the dashboard that Francesca already presented to you yesterday morning and with that the circle is round and I'll stop.
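On the data-reduction techniques mentioned above, here is a hedged sketch of one common option: collapsing a large, partly redundant covariate stack into a few principal components before modelling. The stack below is synthetic; with real rasters each layer would first be read onto a common grid.

```python
# Hedged sketch: PCA-based reduction of a covariate stack.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_covariates, height, width = 20, 100, 100
stack = rng.random((n_covariates, height, width))

X = stack.reshape(n_covariates, -1).T        # pixels as rows, covariates as columns

pca = PCA(n_components=5)
components = pca.fit_transform(X)            # (pixels, 5) reduced covariates
reduced_stack = components.T.reshape(5, height, width)

print("variance explained:", pca.explained_variance_ratio_.round(2))
```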
This talk first touched on why we use maps at all, then looked at the factors ("covariates") that drive disease occurrence. The session, led by William Wint (E.R.G.O., UK) and Cedric Marsboom (Avia-GIS, Belgium), examines what these covariates might be and identifies the environmental, agricultural, socio-economic, ecological and climatic parameters that can best contribute to spatial modelling. It is also important to know where these data can be found, what the pros and cons of different data sources are for the common covariate variables, and which datasets can be used for different types of models. The available covariate data are not always in a form that is convenient for spatial modellers, and the session provides examples of the processing and selection needed to give modellers what they need. Finally, the use of selected covariates in spatial models is discussed and illustrated with worked examples.
10.5446/59196 (DOI)
which sits inside the Koszul complex. So if you look at the Koszul complex on the generators of the maximal homogeneous ideal, then the first map is simply the multiplication map. And then the kernel of this map is mapped onto, in a minimal way, by a free module with basis given by the e_i wedge e_j, where the e_i form a basis of V. And this map is the second Koszul differential, which just sends a basis element e_i wedge e_j to e_i x_j minus e_j x_i. And delta_1 is simply the multiplication map, if you think of V as linear forms. So the beginning of this complex is just the same as the beginning of the Koszul complex. And this subspace K gives me a free submodule here. So of course if K is all of wedge-two V, then I get no homology here and there's nothing interesting. But once I take K to be a strictly smaller subspace than wedge-two V, then that will automatically produce homology here, because this map minimally generates the kernel of delta_1. So that's the object. And so what are some remarks? The first one I just mentioned, namely that the Koszul module is zero if and only if K is all of wedge-two V. Moreover, these Koszul modules are graded modules, generated in degree zero if I put an appropriate grading. This may not be the one that you're used to: I'm going to put the grading such that K sits in degree zero. I guess if I wanted to I could shift here by one and here by two, but let me not — I want to assume that K is in degree zero. And I don't like that assumption, but it's not my fault that topologists were the first to come up with the definition, and that's why they put K in degree zero. But anyway, that's what we have. And let me be very specific, just to make sure we all know what we're talking about. So: generated in degree zero, where the grading is such that the degree-q piece is the homology where the first term is K tensor polynomials of degree q, this maps to V tensor polynomials of degree q plus one, and then the multiplication map goes into polynomials of degree q plus two. So that fixes the grading. And then W is generated in degree zero simply because W is a quotient of the kernel of delta_1, which is a homomorphic image of this guy that's generated in degree zero. So there's no problem with that. And then remark number three, I guess more philosophical, is that this W is a covariant functor of the pair (V, K), where a morphism simply means a linear map from V to V-prime with the property that the induced map on exterior powers sends K into K-prime. And maybe one thing I want you to note is that if this map phi is surjective — so for instance if V and V-prime are the same — then the induced map between the Koszul modules is surjective. So we get a surjection from the Koszul module of (V, K) onto the Koszul module of (V-prime, K-prime). I guess all I want you to remember from this remark number three is to think of it as follows. First of all, this comment over here just says that a bigger K corresponds to a smaller Koszul module; the biggest possible K corresponds to the zero module. That's one thing. And the other thing, this covariance, just means that I want you to think of the Koszul modules as homological objects as opposed to cohomological. So there will be a lot of duals involved, and it's good to keep track of whether we do homology or cohomology, whether maps pull back or go the right way. Okay? Anyway, so these are some basic facts. So I want to give some examples, but before that, we'll introduce some more notation. So I will write K-perp for the set of two-forms.
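A compact restatement of the complex and the grading just described, written out in standard notation as a sketch (S denotes the symmetric algebra on V, the e_i are a basis of V and the x_i the corresponding linear forms; this follows the verbal description above rather than any particular source).

```latex
\[
K \otimes S \;\xrightarrow{\ \delta_2|_{K\otimes S}\ }\; V \otimes S \;\xrightarrow{\ \delta_1\ }\; S,
\qquad
\delta_2\bigl((e_i \wedge e_j)\otimes f\bigr) = e_i \otimes x_j f - e_j \otimes x_i f,
\qquad
\delta_1(e_i \otimes f) = x_i f .
\]
\[
W(V,K) \;=\; \ker(\delta_1)\big/\delta_2\bigl(K \otimes S\bigr),
\qquad
W(V,K)_q \;=\; \mathrm{H}\Bigl( K \otimes S_q \;\to\; V \otimes S_{q+1} \;\to\; S_{q+2} \Bigr).
\]
```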
I'm already going to the dual space now with a property that these restricted to k are identically zero. So this is the orthogonal complement of k. And the nodes that we have a bijection between subspaces of which to be of dimension m. So this is bijection with subspaces k perp inside which to be dual of co-dimension m. So this is just a familiar identification between the grass-mion of n-dimensional subspaces of which to be and the dual grass-mion of co-dimension m subspaces of which to be. Again, this is good to keep in mind because we're going back and forth between k and k perp. And it's also the reason why I work with V and I don't choose basis. So what are the, somehow, some main examples? Well, the main one is the reason why these causal modules were introduced for studying invariance of groups. So if g is a finitely generated group, then what you could do is you could look at the cup product map. So I'll take coefficients in k. The cup product map goes into h2. Well, it's a natural way of taking a subspace of which two of these vector spaces to look at the kernel. So we look at the kernel of this map and, well, you may be tempted to call that k, but of course it's wrong because these guys are homologic groups. So I'll take it to be k perp. So the right way to think about k is you dualize the cup product map when you look at the image. That's what k is. And it's a subspace of which to V, where V is the first homology group of g with coefficients in k. And so this data gives me a consume module, which is just a consume module of the cup of g. This is what topologies are really interested in understanding. But for algebraic geometry, another source of consume modules comes as follows. So if you take a vector bundle on a variety x, again, you get a map from h2 of the global sections, the global sections of which two of this vector bundle. And look at the kernel of this. Again, this is homology groups, so the kernel better be k perp, not k. And then I can take V to be the dual of the global sections. And then out of this, I get what's called the consume module of the vector bundle. So consume modules are very natural objects they show up all over the place. So we better try to understand. So let me give you some other interpretations, homological interpretations of these consume modules. So you can relate them to the problem you've seen before. So other interpretations for consume modules, well, the first one comes from the Berstein and Gielfang correspondence. So if you look at the exterior algebra on the dual vector space, then you could think of k perp as a vector space of quadrics in E. So k perp generates an ideal. So I could look at the quotient of E by the ideal generated by k perp. So this is an E module, the module over the exterior algebra. And then the graded components of the consume module associated to the pair vk, they're simply the tor modules over the exterior algebra. So this is a degree q plus 2. And of course, I have to take dual vector space if I'm being careful. So a way to think about it is you resolve A as a module over the exterior algebra. And then you look at the linear, first linear strand. And this gives you the graded components of the consume module. And I guess maybe one more interpretation. So this is an analogous to how you compute the causal homology groups. For CZG modules that appear in a geometric context, you can interpret CZG in terms of causal homology. 
And the statement that I want to make is that if you're on a variety x that satisfies this vanishing and you look at the globally generated vector bundle, then this gives rise to an exact sequence. So you take the global sections. And this global generation means you have a surjection onto the vector bundle. And the kernel is the so-called m bundle of Lazarus fell. And you can describe the graded components of the causal module associated to this vector bundle as just the homology group. For I want to say symmetric powers of the m bundle. But if I work in arbitrary characteristics, I need to use divided powers, virtually. So in general, dq of a vector space of u means symmetric tensors in u, tensor, u, tensor, u, q times. So I believe when q is equal to 2, this is the plus. But it makes sense more generally. And another way to do this is to dualize m, take the q plus second symmetric power and dualize again, but that's a funny way of saying it. So anyways, it's a homology calculation on the space x for the Lazarus fell bundle. And you should compare this with the following statement under the same assumption that h1 vanishes, you can compute kp1 of a line bundle l as just h1 of some exterior power of the Lazarus fell bundle ml. So this means you embed x by this line bundle. So let's say it's very ample. And you look at the CZG modules. You look at the first linear strand. You can compute those CZG modules as homology groups for exterior powers as opposed to actual and symmetric powers over there divided. So the analogs for, I guess the homological analogs have been very well studied. And then there should be an interesting theory for the CZG modules as well. All right, so I said that these modules were introduced by Pavardini and Succu. What did they notice right away? Well, you have these modules over the polynomial. The first question is to ask is what is the support of these modules? And what Pavardini and Succu proved, I guess this is also a definition. So they proved that the support of these graded modules over the polynomial ring is given by the resonance variety. And here comes the definition of the resonance variety. It's defined as just the, well, it's a sub-variety inside V-dual because these are functions on V-dual. So there's a set of elements in the dual space with the property that there exists some other B such that B is not proportional to A. So when you take the exterior product, you get something non-zero and this belongs to K-perk. And you have to throw in zero, I guess. It's a graded module, zero is always going to be there in the support. So maybe this looks a little bit complicated, but let's talk about the special case where the resonance is as simple as possible, namely zero. Then the condition is that this vector space K-perk of two forms contains no decomposable form of the type A which B. So let me make this remark. So to say that the resonance is zero, well, what Papadiman's suit says is to say that K-perk contains no non-zero decomposable two forms. And you could interpret this geometrically because to have such a two form corresponds to looking at the two-dimensional subspace of V-dual, namely the span of A and B. So it corresponds to looking at the point on the grass-magnon of two-dimensional subspaces inside V-dual. So this condition is saying that K-perk contains no point on the grass-magnon. So another way to say this is by taking the projectivization of K-perk, this is disjoint from the grass-magnon of two-dimensional subspaces inside V-dual. 
And where does this intersection take place? Well, it takes place inside the projectivization of which two-dimensional. This is naturally a linear subspace here and the grass-magnon this year via the poker embed. So this condition geometrically means that you have a plane that's disjoint from the grass-magnon and I guess algebraically you could go back here and interpret it when the following way, when does a graded module have support only at zero, it's when it's finite dimensional as a vector space. That is when the graded pieces become eventually zero. Okay? So I guess I'll throw in here, this is equivalent to saying that W, the graded components are zero for Q large enough. Okay? So that's what comes out of this observation of how vagmagnon suits you. But this immediately raises a question, namely how big does Q have to be in order to get the vanishing of the graded component? So of course our priority has to depend on, and it will depend in general on V and K, but the question is can we find an explicit uniform bound for when that vanishing kicks in under the assumption that the resonance is. So that's our main theorem. So I'm going to assume that N is at least three for small n, it's not such an interesting problem to study. So now if this prime characteristic is equal to zero, or is bigger than the dimension of the vector space minus two, then we have an equivalence that characterizes when the support of the module is zero when it is finite dimensional, well it's exactly when WQ is equal to zero for Q equal to n minus three. So if you check in degree n minus three, the module is zero, then it's finite dimensional, of course, but the converse is true, if it's non-zero then it has to be non-zero forever. So this is a short bound, and I guess I can put greater than or equal to, because since the module is generated in degrees zero, and n minus three is at least zero, to say that it's zero in degree n minus three is to say that it's zero from that point on. So it's however you want to like it, however you like it, but either put greater or equal or equal to zero here, that characterizes finite dimensionality. Let's do a proof. By example, I'll take n to be three, is the first case. So what happens when n is equal to three, well if I look at W2 v dual, I can, since this is three dimensional, I can think of this as just the space of three by three skew symmetric matrices. So then what do we have? Well, we want to understand when the resonance is zero, and by Papadima and Sutru, this is the same as saying that the subspace k-perp of skew symmetric matrices contains no matrix of the form a which b. Well matrices of this skew symmetric matrices corresponding to the composable forms are exactly the skew symmetric matrices of rank two. But for three by three matrices that are skew symmetric, since the rank is even, it can only be zero or two. And if there are no skew symmetric matrices of rank two, that means that they're all zero. So this is equivalent to saying that k-perp is equal to zero. Well saying that k-perp is equal to zero is saying that k is the whole space, and I made a comment at the beginning that this is equivalent to saying that the Kossum module is zero, and since this is generated in degrees zero, it's the same as saying that w, I guess, zero, I'm proving the equals case. And then you do induction. Or something. I guess if I want to talk about Gris conjecture, maybe I won't prove it. I mean it's not hard. Sometimes that's a surprising part and it's not hard. 
But it has very interesting implications. So let me, just in case somebody takes a picture of this, erase the proof. [Question:] Where can we see n minus three in that proof? [Answer:] It's the requirement for Bott vanishing to hold. Bott vanishing does not carry over from characteristic zero if you don't impose a restriction, and this condition is exactly what is needed for Bott vanishing to pass to positive characteristic. So it might sound like I'm putting this restriction just to make the proof work, but in fact if you work out examples in characteristic p equal to n minus three, then this vanishing fails. I'm not saying that I can produce a counterexample for every p that fails this condition, but for small characteristics that fail the inequality I can produce counterexamples, so this is very close to sharp. All right, so we do a little more. Once you understand this vanishing, you might wonder: what happens with the Hilbert function of this Koszul module? It's concentrated in finitely many degrees; can we understand it, can we find a bound? You can always find trivial bounds, but I'll give you a sharp one. If the resonance variety of (V, K) is zero and the dimension of the subspace K is equal to 2n minus 3, then we can actually compute the Hilbert function on the nose: we have an explicit formula for the dimension of the degree-q graded piece of the Koszul module, and it is given by a binomial coefficient times n minus 2; the exact shape is probably not very important here. This is true for q between zero and n minus four; it is also true when q equals n minus three, because both sides are zero, which is a good check. So the Hilbert function is determined uniquely if you have vanishing resonance and the assumption that the dimension of K is 2n minus 3. Maybe I should explain this 2n minus 3 a little better. Of course, if I make K bigger than that, then bigger K means a smaller Koszul module, so if I put an inequality there I get the reverse inequality on the other side. But the point is that in the case when the dimension is exactly 2n minus 3, I get a precise formula for the Hilbert function. Where does this 2n minus 3 come from? Maybe we should go back: the dimension of the Grassmannian of 2-planes in n-space is 2n minus 4. So if you want a linear space to be disjoint from the Grassmannian, the codimension of K-perp should be strictly bigger than 2n minus 4, that is, at least 2n minus 3. So for these equivalent statements to hold, it is a necessary condition that the dimension of K is at least 2n minus 3. And I'm saying that in the borderline case, where it is equal to 2n minus 3, I can tell you what the Hilbert function is; if the dimension is bigger, the Hilbert function is smaller. So I think this is a fairly satisfactory answer to the question of understanding Koszul modules that are finite dimensional, Koszul modules for which the resonance is zero. That is the quick introduction to the basic theory of Koszul modules, and now I want to talk about syzygies and the Green conjecture; at the end I'll hopefully tell you how you use this theory to get a proof of the Green conjecture. Okay, so I guess this would be part two.
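Before part two, here is the dimension count behind the threshold 2n-3, as I understand the argument above (using that, over an algebraically closed field, a linear space of at least complementary dimension must meet a projective variety):
\[
\dim \operatorname{Gr}_2(V^{\vee}) = 2n-4,\qquad
\mathbf P(K^{\perp})\cap \operatorname{Gr}_2(V^{\vee})=\varnothing
\;\Longrightarrow\;
\Bigl(\tbinom{n}{2}-\dim K-1\Bigr) + (2n-4) \,<\, \tbinom{n}{2}-1,
\]
which forces $\dim K = \operatorname{codim} K^{\perp} \ge 2n-3$.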
And for most of it I'm going to just follow this paper of David Eisenbud, "Green's conjecture: an orientation for algebraists", or really just a small part of it, which proposes several paths for proving Green's conjecture by a commutative algebra argument. The algebraic argument that I want to talk about is about computing syzygies; don't worry if you don't know what the statement of Green's conjecture is, I'll mention it at the end. What I want to talk about for the rest of my talk is the syzygies of the tangential variety of a rational normal curve, and I'll explain how this relates to Green's conjecture. Okay, so what's the problem? I'm going to consider a rational normal curve of degree g: this is just an embedding of P^1 given in coordinates by the explicit monomial formula. So this is a curve in g-dimensional projective space, and I'm going to take the union of the tangent lines to the curve; that's what the tangential variety is, the tangent developable of the rational normal curve. So it's a surface in g-dimensional space, two dimensional, and the degree you can actually compute is 2g minus 2, unless of course you work in characteristic p equal to 2, where the degree is g minus 1. To be very explicit, you could parametrize it on an affine chart: it's two dimensional, so the parameters will be t and u; you take a point on the curve and then you move distance u in the tangent direction, so you get (1, t, t^2, ...) plus u times (0, 1, 2t, ...), the derivatives of the monomials. So that is a parametrization in a local chart. And then the question is to understand the syzygies, the minimal free resolution, of the coordinate ring of the tangential variety tau. Here is a fun fact, so that I can stop talking about p equal to 2: in characteristic 2 the tangential variety has degree g minus 1 and codimension g minus 2, so it is a variety of minimal degree, in fact a scroll of dimension 2, and the resolution is given by an Eagon-Northcott complex. So let me not talk about that case; p is different from 2 from now on. So what can you say about the syzygies of tau? The first thing you can say is that the coordinate ring has regularity 3, so we know the shape of the minimal resolution: the Betti table for the coordinate ring of this tangential variety has only four rows; it is Gorenstein, so the table is symmetric; there is a 1 in the corner, everything else in that row vanishes, and the resolution goes from homological degree 0 up to g minus 2. So really, if you want to understand these Betti numbers, the first thing to understand is how many zeros you have among the entries a_i: once one of the a_i vanishes, all the later ones vanish, and the question is how many do. And the theorem that we proved is as follows: if we work in characteristic 0, or in characteristic at least (g+3)/2, then from the middle point on the Betti numbers vanish. An example: if I look at the curve of degree 7 and take its tangential variety, the statement of the theorem is that we have a vanishing in the middle spot and then from that point on. I think the terminology is that the resolution is natural, so the Hilbert series determines the Betti numbers: once I know the vanishing in the middle, I can deduce all the numbers.
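For concreteness, the curve and its tangent surface described above, in coordinates (a sketch, written on the affine chart $s=1$):
\[
\nu_g\colon \mathbf P^1 \to \mathbf P^g,\quad (s:t)\mapsto (s^g : s^{g-1}t : \dots : t^g),
\qquad
(t,u)\ \mapsto\ (1,t,t^2,\dots,t^g) \,+\, u\,(0,1,2t,\dots,g\,t^{g-1}),
\]
the second map being a local parametrization of the tangent developable $\tau\subset\mathbf P^g$.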
So I'm saying that for g equal to 7, that's what you get, and that's our theorem. And what does Green's conjecture say? Well, one version of the conjecture says that if I take a curve of genus g which is general, then it has a canonical embedding in projective space of dimension g minus 1, and Green's conjecture is about the syzygies of the coordinate ring of that canonical curve. The statement is that the syzygies look like this: it is still going to be Gorenstein of regularity 3, and the statement is really a vanishing statement, which says that the Koszul cohomology group K_{g/2,1} of this curve is equal to zero. What this K-group is, is a vector space whose dimension is the Betti number in the corresponding position. So Green's conjecture says that for a general curve, the syzygies look like what you get from the tangent developable. So what's the relationship, what is the corollary of the theorem we covered right here? Again, the strategy is not new; all of this was laid out, conjecturally, twenty-five years ago: the statement that you could find the syzygies of the tangential variety and then use them to prove Green's conjecture. So I'll tell you how you prove Green's conjecture based on this vanishing. The corollary is that Green's conjecture holds for a general curve C provided that we work in characteristic 0, or that p is bounded below by (g+2)/2, which is essentially the condition we had over here. So how does the proof go? Syzygies are upper semicontinuous, so if you find one example of a curve that has a certain vanishing, like K_{g/2,1} equal to zero, then in an open neighborhood in the moduli space there will be curves satisfying the same vanishing. So it is enough to find one example of a curve satisfying the desired vanishing. The way you do it is to consider a general hyperplane section of this tangential surface: the tangential variety lives in g-dimensional projective space, so the linear section is a curve in P^{g-1}, as it should be, and it is a rational g-cuspidal curve, canonically embedded in P^{g-1}, with the same Betti numbers as the tangential surface, because you took a general linear section. So this is the example we want: it satisfies the desired vanishing because the tangential variety does, and then a general curve will have it too. Let me make two remarks. First, this corollary is not new in characteristic 0: for p equal to 0 it is a theorem of Voisin, part of a series of two papers which do the odd and even genus cases, so in characteristic zero we just give an alternative proof of Voisin's theorem. But the positive characteristic statement is new. The second comment is that Eisenbud and Schreyer have thought about the characteristic p case; they have done a lot of examples, and in fact Schreyer has a paper from 1986 or so where he computes Green's conjecture in small genus, maybe up to seven or eight. Based on their experimentation, it is known that Green's conjecture fails in small characteristic, but the question is what the right bound is, and their conjecture is that Green's conjecture is okay for p greater than or equal to (g-1)/2. So that's the good news, that we're close. The bad news is that we'll never be able to prove their conjecture using this method. So what's the bad news?
So the bad news is that, if p is relatively small with respect to the genus, then the tangential variety is contained in a scroll defined by the 2 by 2 minors of the following matrix of indeterminates: first row z_0, z_1, up to z_{g-p}, second row z_p, z_{p+1}, up to z_g. It is contained in this scroll, which means that the resolution of the scroll sits inside the resolution of the tangential variety. The resolution of the scroll is given by an Eagon-Northcott complex, and what this implies is that K_{g-p,1} of the tangential variety is different from zero. In particular, if p is smaller than or equal to (g+1)/2, then g/2 is at most g minus p, and this implies that K_{g/2,1} is in fact different from zero. So we will not be able to prove, using these methods, their conjecture in characteristic p equal to (g+1)/2, g/2, or (g-1)/2. We think the case p equal to (g+2)/2 that's missing should still be okay, but anyway, there is still a gap. And since I'm almost out of time, or already out of time: how do we get this, how does the proof go? I wasn't going to tell you the proof anyway, but here is the sketch. Let's say we're looking in odd genus, say g equal to 2n plus 3; the even genus case is similar. What you do is look at the following map: there is a multiplication map from the second exterior power of Sym^{n-1} of k^2 into Sym^{2n-4} of k^2. This is just an SL_2-equivariant map, and what it does is: if you think of k^2 as the span of 1 and x, the symmetric powers are polynomials in x of bounded degree, and the map sends x^i wedge x^j to (i-j) times x^{i+j-1}. The point is that this map is surjective in large enough characteristic. You define K-perp to be the kernel of that map, which means K is the image of the corresponding map of divided powers inside the dual vector space, and you take V to be the dual of Sym^{n-1} of k^2, so V is a divided power. Out of this data you get a Koszul module where V has the right dimension, namely n, and K has the right dimension, namely 2n minus 3, and basically what we prove is that, in my notation from earlier, the number a_{g/2} is equal to the dimension of W_{n-3} of the Koszul module you get this way. So all you need to check is that this Koszul module indeed has vanishing resonance: you have to check that no decomposable form a wedge b maps to zero. That is a linear algebra statement that you can check; it requires the characteristic to be not too small, but it's still easy. So that gives the vanishing. For the even genus case there is a gap, and the gap is measured by an appropriate dimension count coming from the Hilbert function that I described earlier. So I'll stop here. [Question:] This is for general curves; is there any way that you could bring the Clifford index into the picture here, or not? [Answer:] Not directly: doing it for an arbitrary curve is not just doing one example. Right, so this does imply the right result for generic curves, but not the statement with the Clifford index; I would need other examples. Yes, it goes through something else. Right, so there are other constructions: the tangential variety doesn't do the job, but there are other constructions, of generic K3 surfaces say, where you take hyperplane sections and try to prove that those satisfy the appropriate Green conjecture where you throw in the Clifford index.
But, for instance, I haven't been able to make those other constructions feed into this picture of Koszul modules. [Question:] But doesn't the Clifford index case in those constructions follow from the generic case? I thought this is strong enough to imply generic K3 surfaces. [Answer:] Oh, from generic K3 surfaces you get it for all Clifford indices, I think. Yes, but that involves more techniques than I explained here. More techniques, yes. So, right, Aprodu showed that based on this you can get Green's conjecture for a generic curve of each Clifford index, but that requires this theorem plus more. If we could produce those examples directly based on this method, then that would give a short argument, but we don't. Are there more questions? If not, let's thank the speaker again. Thank you.
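For reference, the scroll from the small-characteristic discussion and the multiplication map from the proof sketch above are, in my reading of the transcript (the identification of the map as a Wronskian-type, SL_2-equivariant projection is my own gloss):
\[
\begin{pmatrix} z_0 & z_1 & \cdots & z_{g-p}\\ z_p & z_{p+1} & \cdots & z_g \end{pmatrix},
\qquad
\textstyle\bigwedge^2 \operatorname{Sym}^{n-1} k^2 \longrightarrow \operatorname{Sym}^{2n-4} k^2,
\qquad x^i\wedge x^j \mapsto (i-j)\,x^{i+j-1},
\]
the scroll being cut out by the 2 by 2 minors of the matrix, and $K^{\perp}$ being the kernel of the map on the right.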
I will discuss the basic theory of Koszul modules, which were originally introduced by Papadima and Suciu as a tool to study topological invariants of groups. A special instance of Koszul modules had previously appeared in David Eisenbud's "Green's conjecture: an orientation for algebraists", where he proposed several programs for proving the Green conjecture for generic canonical curves. I will explain a vanishing theorem for Koszul modules that completes one of these programs, providing an alternative approach to the original proof of Voisin for the generic Green conjecture. Joint work with M. Aprodu, G. Farkas, S. Papadima, and J. Weyman.
10.5446/59201 (DOI)
I thank the organizers for giving me the possibility to come once again; it's a fantastic place, and the conference is going very well, I'm learning so much mathematics again. Hopefully I can climb a mountain, so it will be very, very good. So I already put up the title and also the collaborators on this project: they are Marc Chardin, Hamid Hassanzadeh, Aron Simis, and Bernd Ulrich. Now, this goes back to a problem of Poincaré from 1891. He asked whether we can bound the degrees of plane curves that are left invariant by algebraic vector fields on P^2. So that is the problem, if you want. First of all, why did he ask that? The reason is that if you could bound the degree of the curve, then, because you actually want to find these curves, you would only have to look inside a finite-dimensional vector space, so it makes it easier to find such curves. And what is the goal in general? It is to relate numerical invariants of the vector field to numerical invariants of the curves that the vector field leaves invariant. So the first step I want to do is a translation to commutative algebra, because this problem has been studied so far mainly by algebraic geometers and foliation people. These are some of the previous players; in the past twenty years there has been quite a bit of progress: Campillo, Carnicer, Cerveau, Cruz, Esteves, Galindo, García de la Fuente, Kleiman, Lins Neto, Pereira, Soares. We started on this problem in 2001 (every project we begin takes at least twenty years to publish); we had some very good progress back in 2001, but then some of what we had proved was proven by Esteves and Kleiman later on, so we had to restart and try to prove more, and hopefully this time we can publish it before somebody else does. So let me fix the setting; I keep it a little simplified for the talk, to avoid technicalities. K is an algebraically closed field, and we assume that the characteristic of K is zero; this we absolutely don't need, but I've listed it. We assume that C in P^{n-1} is a curve which is reduced (this we always need) and irreducible (this I don't really need, but it's a good balance). So I have my curve. Let me denote by R and S the coordinate rings of C and of P^{n-1}, respectively. So now, what is a vector field? A vector field X on P^{n-1} of degree m (we always call m the degree of the vector field) is a homogeneous map from Z to S. And what is Z? Z is defined by the Euler sequence: it is the kernel of the Euler map. You take the module of differentials of S; remember S is a polynomial ring, so the module of differentials is a free module of rank n generated by the dx_i, where the x_i are just the coordinates. The Euler map is the map that sends dx_i to x_i, so it maps onto the maximal ideal of S. This is called the Euler sequence, because this is the Euler map, and Z is just the kernel. Another way to think about Z: if you look at the sheafification of Z, then Z-tilde is the sheaf of differential forms of P^{n-1}.
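In symbols, the setup just described is (a sketch; the grading conventions are as I understood them from the talk):
\[
0 \to Z \to \Omega_{S/k} \xrightarrow{\ dx_i\,\mapsto\, x_i\ } \mathfrak m \to 0,
\qquad S=k[x_1,\dots,x_n],\ \ \mathfrak m=(x_1,\dots,x_n),\ \ \widetilde Z \cong \Omega_{\mathbf P^{n-1}},
\]
and a vector field of degree m on $\mathbf P^{n-1}$ is a homogeneous map $X\colon Z\to S$ as above.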
Now, "degree m" means that this is a homogeneous map of degree m minus 1, and any such map is the restriction of a map from the module of differentials to S. Such a map is just given by a row vector, say (a_1, ..., a_n), where the a_i are in S and are forms of degree m. Now, first of all, how can you think about a vector field this way? Never mind what a vector field is: the a_1 up to a_n are just forms, so they define a rational map from P^{n-1} to P^{n-1}. You take a point in P^{n-1} and look at its image, which is another point. If the two points are distinct, the line between them gives you the direction of the vector field at the point; if the two points are the same, then you say the field is zero there; and where the map is not defined, which is exactly where the vector field vanishes identically in this sense, those are exactly the singular points of the vector field. Another way you can write the vector field, since the module of differentials is free, is in terms of the dual basis: it is just the sum of the a_i times the partial derivatives with respect to x_i. Okay, so that is another way to write the vector field. Now, I talk here about vector fields that leave a curve invariant. So what does it mean? I have a curve there; what does it mean that the vector field leaves it invariant? You just write the same sequence now at the level of R, and you get a commutative diagram. (If I speak loudly my earpiece doesn't work, because of the altitude. My kids have that problem too.) So I look at the analogous sequence at this level: again you have the Euler map, and now on the left I take the image of the outside map, and obviously there is a commutative diagram. Be very careful here: usually this map onto the kernel is not surjective, and what one really should care about is the image; but in my case we assume characteristic zero (that's the reason I assumed it), so it actually is surjective. And what is L? Another way you can think about L is as the image of the second differential of this complex, the second differential of the complex built on the Euler map, and that will be very useful. What is that complex? Just the Koszul complex built on this map, on the Euler map. All right, so what does it mean that the curve is left invariant? First I want to draw a picture, and then I'll go back to the diagram. So let me erase this; for the moment I keep the Poincaré problem, but after I erase the rest. So C is left invariant by X, or, as the geometers say, C is a leaf; I will not use this terminology much. What is the picture in your head? You have your curve, and you want that at every point of the curve the vector field is tangent to the curve. You have your vector field, which is defined everywhere, but along the curve you want it tangent to the curve. That's the first condition: at each point of the curve, the vector points in the tangent direction. The other condition you require is that at almost all points of the curve the vector field is nonzero; it is not singular at almost all points. So there are two different conditions. How do we write them algebraically?
The first condition algebraically just says that X (remember, X was a map at the level of S: you had Z mapping to S, and down here you have your L) should induce a map mu at the level of R; so X induces mu from L to R. And the second condition is that mu should be nonzero; that just means that the vector field is nonzero at almost all points of the curve. Obviously this mu is homogeneous of degree m minus 1, exactly the same degree as X. All right, now we are ready for the translation of the problem to commutative algebra, so let me start; maybe you are familiar with this, so I'll try to go slowly. Instead of looking at the set of curves left invariant by some X, I look at the set of mu's fixing my curve C. So what are such mu? They are homogeneous, nonzero R-linear maps from L to R, and it is easy to see, by chasing the diagram, that they can necessarily be extended to the module of differentials Omega_R. And what are homogeneous maps from the module of differentials to R? They are derivations. So what we are talking about are just derivations; that is the translation: our objects of interest are derivations. But be careful: you don't want all derivations, you want the ones that are not zero when restricted to L. Which derivations are zero when restricted to L? L is, if you remember, the kernel in the Euler sequence, so if I dualize the Euler sequence (remember, I am interested in maps from Omega to R), I see immediately that what I have to mod out by is exactly the module m-inverse times epsilon. Sorry, epsilon is the Euler derivation, the derivation corresponding to the Euler map; it will appear throughout the talk, and it is going to be very important; you have to mod it out. So the module of interest is the module of derivations modulo m-inverse epsilon, and notice that this is a torsion-free rank one module. So now, go back to Poincaré; that's why I left his question on the board. What did Poincaré ask? He asked if you can bound the degree of a plane curve left invariant by a vector field, using the degree of the vector field (that's why I defined the degree of the vector field). And already the answer is no, due to Carnicer, one of the players I wrote up there before; there is an easy example, and I don't know if it is his original example, but this one is easy. You take the family of vector fields of degree one given by the derivations, written this way: minus d times z times the partial derivative with respect to z, plus one minus d times x times the partial derivative with respect to x, for every natural number d. Each of these vector fields leaves invariant a curve C_d (it depends on d) of degree d.
So the degree becomes as big as you want while the vector field has degree one, and the curve is just given by one equation, so it is a hypersurface, defined by x^d minus y z^{d-1}. It is easy to check, it takes a second if you're bored, that applying the derivation to this you get zero modulo the curve. All right, so we cannot expect to bound the degree of the curve just using the degree of the vector field; other invariants must play a role. And now we look at the opposite question, which is equivalent, and that's why I wanted to look at it this way, because in commutative algebra we have a beautiful module that encodes it. Instead of looking at upper bounds on the degree of the curve, I want to look at lower bounds on the degree of the vector fields. If I want vector fields of smallest possible degree, I just have to look inside this module for the first nonzero element of smallest possible degree. So it is about bounding the initial degree of this module; this concretely encodes all the information I want. So now I can really restate my goal, and we have a precise module. The only reason I present this case now, as the guiding theorem, is because here we can give a complete, clean answer. So the theorem says the following. Remember the standing assumptions: C is reduced and irreducible, the characteristic of k is zero, and k is algebraically closed. In addition, in this theorem, I assume that C is arithmetically Gorenstein (so the ideal of the curve is Gorenstein, if you want) and has at most ordinary nodes as singularities; this is the kind of assumption on singularities that is always present in the previous work. Then the initial degree of the object of interest, which in this case is the derivations modulo R times epsilon, because m-inverse is R since R is Gorenstein, is equal, on the nose, to the a-invariant of R plus 1; if you want, the Castelnuovo-Mumford regularity of R minus 1. In general, what can we do? This will guide the rest of the talk. In general, we can find upper bounds for this initial degree, which is not Poincaré's question; for the upper bounds we need no restriction on the singularities at all, but we need arithmetically Cohen-Macaulay. We can find lower bounds, which is Poincaré's question; for those we don't need Gorenstein or Cohen-Macaulayness at all, but we need assumptions on the singularities, not exactly this assumption, and we will see which assumptions we need in general: we need to control the singularities, or some invariants of them will enter the formula. And in our work we can actually handle not just curves but higher-dimensional varieties, but for this talk I really wanted to focus on Poincaré's question. So I'm going to divide the rest of the talk, and I have one hour, into two pieces: the first piece will be the upper bounds and the other piece the lower bounds, and I'll try at least to give you an idea of the techniques that we use.
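Going back to the example for a moment, here is a quick check under my reading of the transcript (the normalization on the board may differ by a multiple of the Euler derivation $\varepsilon = x\partial_x+y\partial_y+z\partial_z$):
\[
\delta_d=-d\,z\,\partial_z+(1-d)\,x\,\partial_x,\quad f_d=x^d-y\,z^{d-1}
\ \Longrightarrow\
\delta_d(f_d)=d(1-d)\,x^d+d(d-1)\,y\,z^{d-1}=d(1-d)\,f_d\in (f_d),
\]
so the degree-$d$ curve $C_d=\{f_d=0\}$ is invariant under the degree-one field $\delta_d$; equivalently, $\delta_d+(d-1)\varepsilon=(d-1)\,y\,\partial_y-z\,\partial_z$ kills $f_d$ on the nose.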
So in the time I have we will prove this theorem; I'll essentially prove it completely, but I'll try to do it by giving you an overall view of the techniques, which I think is the most interesting part. I had promised examples, and we do have a lot, but most probably I'll just quickly tell you at the end where we are going with them. Okay, so let me move on; part one was the translation, so this part is the first upper bound. I'll talk about one upper bound and three maps, and the three maps are the crucial bit. First let me define one object: J is the R-ideal generated by the image in R of the Jacobian ideal of a general complete intersection, let me call it c.i., whose coordinate ring maps onto R. So what are the three maps? There exist three homogeneous maps of degree zero. The first one is the one I talked about: I said my L was the image of the second differential of that complex, the Koszul complex on the Euler map, so the first map is just that second differential. [Question:] That's confusing; when you defined L, I thought you defined it before as a kernel. [Answer:] It is a kernel, but it happens to be equal to that image. [Q:] It usually is not equal to the image. [A:] But in this case it is the image, because we have characteristic zero. You have to prove it, but it is the image; it requires a proof. Yes, it does. That's why I wanted to simplify and assume characteristic zero, because we have that in this case; otherwise you have to assume something about the characteristic not dividing the degrees of the forms, or you work with the image, like we do in our paper, so there is not so much to assume. Okay, so the second map is the map (and that's why I needed J) that goes again from this module, the top exterior power of the differentials, onto J, epimorphically, shifted by delta minus two; I'll tell you what delta is. This map does not depend on the fact that this is the module of differentials; it depends only on the fact that this module has rank two, and on the definition of J. So let me write down why we have that map; once I write it, it will be completely clear. Remember that the rank of the module of differentials is two and its minimal number of generators is n, and J is generated by the maximal minors of a matrix consisting of n minus 2 columns of a presentation matrix of Omega_R, a presentation matrix of my module. And what is the map from the top exterior power to J? You just take dx_i wedge dx_j and send it to the minor where you delete the row i and the row j. And why the shift? The forms defining the c.i. have degrees, say, delta_1 up to delta_{n-2} (that's why you take it general), and delta is their sum. That is exactly what you get when you apply the map, because dx_i wedge dx_j has degree two and it is sent to something of degree delta. So what is the third map?
The third map, which Bernd talked about the other day, goes from the module of top differential forms, in this case the second exterior power of Omega_R, to the canonical module, which is the module of regular differential forms. Exactly. So we know who this guy is, and the map is called the fundamental class. Now the point is that the first two maps are epimorphisms and all three maps are isomorphisms generically. So, once you kill the torsion, you get three beautiful conclusions from these three maps. Let me start: L modulo torsion is isomorphic to J (this is an ideal), shifted, and this is isomorphic to the module of top differential forms modulo torsion; and this common module, sorry, embeds into the canonical module, and if I dualize I see it the other way. So if you dualize the picture, you see that the dual L-star is isomorphic to the inverse of the fractional ideal J, up to the twist by delta, which is isomorphic to the dual of the top differentials; and the dual of the canonical module sits inside all of these. And then you get immediately that the module you are interested in, this module, sits inside this picture too. So it is already clear from here why this is powerful: if you want lower bounds on the initial degree of our module, you just have to find lower bounds on the initial degrees of these; I am not saying that this is easy, but you can find lower bounds. So this part is important for the lower bounds, and we use it for them; and the other part we use for the upper bounds. In fact, the upper bounds come almost immediately now that we have this picture, so I can write the theorem directly. This is the theorem; we have more theorems than just this one, but this is maybe the cleanest version. As I said, there is no hypothesis on the singularities here at all; you assume just that C is arithmetically Cohen-Macaulay, and really you can do without Gorenstein, but then you don't get exactly this. Then the initial degree of the object of interest is always less than or equal to the a-invariant of R plus one, or, if you want, the regularity minus one. And this is completely false if you drop the Cohen-Macaulayness; it is easy to show, just take a non-perfect ideal, that you cannot find an upper bound using only the regularity as the invariant. You need to involve other invariants: for example, we have a bound using the type, and others involving further invariants, but other things have to come into play. We will see where Cohen-Macaulayness is used here and what you would need to do in general. So the proof becomes actually quite easy. As always, you start from the Euler sequence; that is our starting point. If you dualize it, we already saw that our module sits inside, let me call it, L-star, and the cokernel of that inclusion sits inside Ext-one of the maximal ideal against R, which is the same as Ext-two of the residue field k into R. Why do I want to pass to that? Because this is a socle: it is the socle of R modulo x_1, x_2, shifted, where x_1 and x_2 are general linear forms. And what is the point? For this you just use Cohen-Macaulayness, nothing more. The point is that this module is concentrated, and this is the really important thing,
in degrees less than or equal to roughly the a-invariant of R. Whether R is Gorenstein is not really important here; this step needs only Cohen-Macaulayness, I don't use Gorenstein. So the cokernel is concentrated up to that degree, and past that degree the two modules are the same. Hence the initial degree of the module of interest is less than or equal to the maximum of the initial degree of L-star and a(R) plus one. And now, I didn't want to erase that sequence, so let me do something with what is left. If you look at the picture from before, the dual of omega sits inside L-star. So if I want to bound the initial degree of L-star from above, and something sits inside it, I'd better just bound the initial degree of what sits inside it. Now, in the Gorenstein case the dual of omega is just R shifted, so it starts in degree a(R). So the initial degree of L-star is at most a(R), and that's it. Now, if you didn't have Gorenstein, you would have to bound that initial degree differently, and that's where the type comes in, and so on. Okay. Let's do the last part of the talk, and hopefully I don't go too much over time. I want to talk about the question Poincaré was actually interested in, and where the work of the other people lies, which is the lower bounds. Here we have to use some ideas from geometry, and we will use general projection. You have your beautiful curve in P^{n-1}, and what you want to do is project it down to the plane, so you just have a hypersurface. Usually the problem, and that is the problem with the geometry, is: it is not so hard to bound the initial degree, in terms of invariants, for the plane curve; the problem is how you translate the bound that you obtain in the plane to a bound for your curve sitting in P^{n-1}. There are lots of difficulties in doing that, and in the previous work this comes across in a very complicated way. We get a clean translation instead, and we don't need Cohen-Macaulayness for it; it becomes very pleasant to go back and forth, and that is why you can transfer both upper and lower bounds this way. So let me write our statement, and then the problem is just to compute the module of interest for a plane curve, for a hypersurface. This is the last chapter of the talk, and for this we actually have to thank Bernd, because some of these results came together only in the last three weeks. So, lower bounds and general projections. Here is the general projection. You have your ring R; remember R is the coordinate ring of C, which is a curve in P^{n-1}. What do you do with the general projection? You just consider three general linear forms in R, and you consider the algebra generated by them, sitting inside R; we call it A. This is a finite, birational, homogeneous extension, and what you have is that A is the coordinate ring of a curve in P^2. So A is Gorenstein, a hypersurface ring, and the curve has the same degree as your curve C; the map is birational, so it has the same degree as the curve we started with. So what is the connection between the two modules of interest: the module of derivations of A modulo A times its Euler derivation, and the module of derivations of R modulo m-inverse epsilon?
There is a sandwich; let me try to state it. The initial degree of our module of interest (let me put a bar on it; the bar is needed) is bigger than or equal to the initial degree of the module of derivations of A modulo its Euler derivation (and here I can write A times epsilon rather than m-inverse epsilon, because for A it is the same thing), minus the a-invariant of A plus the a-invariant of R; and it is also less than or equal to the initial degree of that same module plus a(A) minus a(R). So you get this beautiful two-sided estimate, and you can move the terms around: you can transfer bounds up and down, upper and lower, you can do both. Correct. So here is the sketch of the proof. You consider the conductor of R into A; this is the colon ideal, and, remembering that A is Gorenstein, this is nothing other than the canonical module of R, shifted by the a-invariant of A. So what is the initial degree of this conductor? The initial degree of the canonical module of R is minus a(R), so the initial degree of the conductor is a(A) minus a(R), exactly the quantity you see here and here. Now, why does that quantity appear? Because you pass through a third module in the middle: you take the module of derivations of R, and you have a restriction map, you just restrict each derivation to A, so you get the module of derivations from A to R; that is the restriction map. And then you can also embed, obviously, the module of derivations of A into it; those are just particular derivations from A to R. Now the point is the following: remember, our object of interest is a quotient, and the Euler derivation goes to the Euler derivation under these maps (call this map psi); and what's more important is that once I mod out, the quotients are torsion free, so what you get are still inclusions: the derivations of R modulo m-inverse epsilon embed into the derivations from A to R modulo the Euler part, and so do the derivations of A modulo A epsilon. Furthermore, these are all torsion-free modules now, and you can easily show that the conductor kills both cokernels; that is not difficult to see. And this implies immediately that if I take an element of the conductor and multiply it by an element of the big module, it lands in the small one. So: the initial degree of one module plus the initial degree of the conductor, because I take an element of the conductor and multiply it by an element of minimal degree here and I land there, and it doesn't kill the element because everything is torsion free, so this is greater than or equal to the initial degree of the other module. Then you play the same game with the other inclusion, and you get the bounds you want. Now we are left, and I still want to do this, with the case, as I said, of plane curves; since the talk is an hour I can do it. It is very interesting, and I would like to describe it also because I can show you how the results of the previous people are generalized, and that's really what we want to get to. So, this is really the plane curve case: now our curve C is a plane curve of degree d.
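The sandwich from the general projection, written out (my reconstruction; $D(-)$ is shorthand for the derivation module modulo the relevant multiple of the Euler derivation, as in the talk):
\[
\operatorname{indeg} D(A)\;-\;\bigl(a(A)-a(R)\bigr)\ \le\ \operatorname{indeg} D(R)\ \le\ \operatorname{indeg} D(A)\;+\;\bigl(a(A)-a(R)\bigr),
\qquad
\operatorname{indeg}\mathfrak c_{R/A}\;=\;a(A)-a(R),
\]
where $A=k[\ell_1,\ell_2,\ell_3]\subseteq R$ is the coordinate ring of a general plane projection of C and $\mathfrak c_{R/A}$ is the conductor.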
Okay. And the initial degree of our module of interest, which in this case is just the derivations modulo R times the Euler derivation, since C is a hypersurface, is bigger than or equal to the minimum of d minus 2 and 2d minus 2 minus the a-invariant of R modulo J, where J is the Jacobian ideal of R, the full Jacobian ideal in this case. And this in turn is bigger than or equal to, if you estimate that a-invariant, d minus 2 (notice that this is what we had before, the a-invariant plus one, the regularity minus one) plus, and here it comes, the number of singular points of C minus the Loewy multiplicity of R modulo the Jacobian ideal. Now I'm going to define what the Loewy multiplicity is. If you have any module M, you can consider the associativity formula for multiplicity: the multiplicity is the sum, over the primes p in the support of M of maximal dimension, of the multiplicity of R/p times the length of M localized at p. The Loewy multiplicity is bigger than or equal to the analogous sum where, in place of the length, you put the Loewy length of M localized at p; remember this is a module of finite length, so it is the smallest power of the maximal ideal that kills it. This quantity is what we call, and I think Vasconcelos called it like this, the ell-multiplicity, or Loewy multiplicity, of the module; in our case the module is R modulo the Jacobian ideal. So let me give you a sketch of the proof; the corollary then comes immediately, and it generalizes the previous work in the literature, and then you use the general projection to put the curve inside P^{n-1}. So, the proof. The first step, if you want, was really inspired by work of Eisenbud and Ulrich. Let F be the form defining C (remember, C is now a hypersurface), and consider J, the ideal in S generated by the three partial derivatives of F; this is an ideal in the polynomial ring S in three variables. The point, and this is really what happens because we have a hypersurface, is that the module of syzygies of the partial derivatives, the syzygy module of this ideal J, maps onto the module of interest. So if I want to bound the initial degree of the module of interest, I just have to bound the initial degree of the syzygies. Obviously, even though this is a syzygy conference, bounding syzygies is not easy at all in general; however, in this case we have a hypersurface, so you can bound it with linkage, because you can realize each syzygy through a link: this is the initial degree of the link of the first two partials with respect to J, shifted by one, and because it is a link you can relate it to a canonical module, and that's how the a-invariant comes in. But now, remember, we want to bound the a-invariant of R modulo J to get to the second formula, and again this is inspired by work of Eisenbud and Ulrich on a-invariants, and that is how the information about the singularities comes in.
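The chain of lower bounds for a plane curve of degree d, as I read it off the board (the exact shifts should be taken with a grain of salt):
\[
\operatorname{indeg}\bigl(\operatorname{Der}_k(R)/R\varepsilon\bigr)\ \ge\ \min\bigl(d-2,\ 2d-2-a(R/J)\bigr)\ \ge\ (d-2)\;+\;\#\operatorname{Sing}(C)\;-\;e_{\ell\ell}(R/J),
\qquad
e_{\ell\ell}(M):=\sum_{p} e(R/p)\cdot \ell\ell(M_p),
\]
where $J$ is the Jacobian ideal and $\ell\ell$ denotes the Loewy length.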
Let me at least indicate this part here, because I don't want to erase what is on the other board. So we want to bound this a-invariant of R modulo J, and for that we use work of Sidman. To bound the a-invariant, we can replace J by something smaller contained in it, and for this we can look at the intersection of the p_i to the power s_i, where the p_i are the primes corresponding to the singular points of C and the s_i are exactly the Loewy lengths of R modulo the Jacobian ideal of our curve, localized at the p_i. To bound this further, you can consider the radical of the Jacobian, the intersection of the p_i, together with the intersection where you subtract one from each exponent, and you can throw in the conductor; that's what I want, I want the conductor. The advantage now is that you end up with a product of ideals, and not just any ideals: these are ideals of dimension one, so you can estimate their regularity, and there is this beautiful formula, due to Sidman, where the regularity of the product is estimated by the sum of the regularities. So, and I don't do all the steps here, you pass through the regularity: you get the regularity of R modulo those intersections and the regularity of R modulo the conductor, plus some terms. And what do you get? You pick up the sum of the s_i minus one over the singular points, which is exactly minus the quantity: t, the cardinality of the singular locus, minus the Loewy multiplicity of R modulo J, since that multiplicity is the sum of the s_i. For the remaining term you can use the integral closure, and again this was inspired by the work of Eisenbud and Ulrich, to estimate it: you can see that this term is controlled by an initial degree; first you pass from R modulo the conductor to the dual, the appropriate Ext into omega, and then to estimate that you go to the integral closure R-bar, plus d minus 3. And the point is, and here we need irreducibility, otherwise you have to do something else, that since the field is algebraically closed and R is a domain, the integral closure R-bar is non-negatively graded and its degree-zero piece is just the field; so the relevant contribution is less than or equal to minus one, and now you get your formula. So, I really have three minutes according to my clock, to stay within the sixty minutes that were established, and I can finally state the main theorem that I want to state; hopefully you can follow, I may go too fast. You just put together the general projection and that formula. So what do we assume? The main theorem, and I call it the main theorem because this is really in the spirit of Poincaré: you see, we don't have any Cohen-Macaulay or Gorenstein assumption, nothing, only that the curve is reduced and irreducible. But we assume that C, which lives in P^{n-1}, has only planar singularities, because you want to do a general projection and keep track of the singularities.
So: only planar singularities, and this is really what we should, in the future, get rid of. And J, which I now call J_R to distinguish it from before, is the full Jacobian ideal, no projection, that we had in the first part of the talk. Then the initial degree of the module of interest is bigger than or equal to the a-invariant plus one, as I promised, plus the cardinality of the singular locus minus the Loewy multiplicity of R modulo J_R. And notice that this last part is equal to zero if and only if C has only ordinary nodes as singularities, because an ordinary node contributes exactly one to each of the two terms. This generalizes the work in the literature, because there it was done only for ordinary nodes, not for arbitrary planar singularities, and with extra hypotheses. I think the first result is really due to Cerveau and Lins Neto; I put the names up on the board before; this is from the 90s, and it was the case of plane curves. The complete intersection case was done by Carnicer, Campillo, and García de la Fuente, around 2000, and then Esteves did the arithmetically Cohen-Macaulay case with the general projection. And all of this with ordinary nodes as the singularities; we get rid of all that, and we allow planar singularities. The point is that, you see, when you do the general projection, because you assume only planar singularities, you can keep track of the singularities: you may get more, but the new singularities you get are all ordinary nodes, so the difference here doesn't change; you have control over that, and then you get the formula. Thank you. [In reply to a question:] I think that case is okay, because that's a hypersurface; we did check that, and it's an easy case. The interesting next thing one should do is really to look, at least for plane curves, starting with genus zero, because in that case we have a very good idea of the singularities just by looking at the Hilbert-Burch matrix of the parametrization of the curve; that's the work we did with Andy Kustin and David Cox. We can relate it to the present setting, and we have done many examples; it looks like in that case you get an exact equality, precisely in terms of the singularities you get. So that's the first thing we should really pursue, to get the exact formula in terms of the singularities, and then try to go to higher genus; it is really about finding the exact equality for plane curves first, and then we do the general projection. Obviously, if we want to get rid of the planar-singularities assumption, we need to think of something other than a general projection. [Question:] Did Poincaré ask this for real vector fields, or over the complex numbers? [Answer:] I think he asked it over the complex numbers; in his original question he talks about these fluids, about the curve keeping the fluid inside once it is invariant; that's how he talks about it, because that's what you want the flow to preserve.
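The main theorem, in symbols (my transcription of what was just stated):
\[
C\subset\mathbf P^{n-1}\ \text{reduced, irreducible, with only planar singularities}
\ \Longrightarrow\
\operatorname{indeg}\bigl(\operatorname{Der}_k(R)/\mathfrak m^{-1}\varepsilon\bigr)\ \ge\ a(R)+1\;+\;\#\operatorname{Sing}(C)\;-\;e_{\ell\ell}\bigl(R/J_R\bigr),
\]
and the correction term $\#\operatorname{Sing}(C)-e_{\ell\ell}(R/J_R)$ vanishes exactly when C has only ordinary nodes.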
[Question:] So a lot of this ties into hyperplane arrangements; in the case of your plane curve, and actually in all dimensions, if you were working with hypersurfaces that are hyperplane arrangements, I'm curious whether this says anything new in that specific context. It would be interesting, because a lot of these things are very well studied in that community, and there are lots of conjectures: the module of derivations of a hypersurface is a whole industry for the folks who work on that, and Simis has certainly worked on it for a while. The other question I wanted to ask: in the case where you had an arithmetically Gorenstein curve, suppose the curve is smooth and canonical; this must be some sort of classical question: can you say what the vector fields are that leave a canonical curve invariant? We think that maybe your results say something there, and it should relate to something known. [Answer:] In fact, the first thing we did back in 2001 was to look at the smooth case, but for higher-dimensional varieties, because for curves it is all known; we could assume smoothness, or complete intersection and smooth, and so on, with some assumptions, and you get this kind of bound. [Comment:] And the other thing, not just for hyperplane arrangements but for hypersurfaces in general, there is the theory of logarithmic vector fields: you think of the hypersurface as a divisor in projective space, and then the algebraic de Rham theorem relates the topology of the complement to the logarithmic de Rham complex, or to the corresponding logarithmic tangent bundle, so there is, I think, a lot of classical algebraic geometry that lives there, and it would connect nicely with the smooth case. All right, so many questions; thank you.
In 1891, Poincaré asked if it is possible to bound the degree of a projective plane curve that is left invariant by a vector field in terms of the degree of the vector field. In joint work with Chardin, Hassenzadeh, Simis, and Ulrich we address this question. The question can be restated as a problem about the initial degree of the module of derivations of the coordinate ring R of the curve modulo the Euler derivation in terms of invariants of R. We exhibit lower and upper bounds for this initial degree and in several instances we are able to determine the initial degree. Examples will be given to illustrate the situation.
10.5446/59203 (DOI)
Let me remark, although I will focus on maps from P^2 to P^3, that many things extend to more general settings. And, as I should say, this is joint work over the years with many people: Laurent Busé, Nicolás Botbol, Aron Simis, Tran Quang Hoa, and others; it is also inspired by the work of many people, Claudia, Kustin, Cox, and so on. So there is a lot of history there, but anyway, the setting is like this. I was always concerned with the P^2 case: you take a rational map from P^2 over k to P^3 over k, with k a field. Giving the rational map means you give a collection of polynomials f_0, f_1, f_2, f_3 in a polynomial ring in three variables, say x, y, z, of the same degree d, and I want to assume that the gcd of the f_i is 1. Let me set up some notation: I will be the ideal generated by the f_i, and X will be the scheme defined by I; so X is either empty or a set of points in this case, and it is the base locus of the map. And S will be the image, that is, the closure of phi of P^2 minus X. We will assume this is a surface; in other words, phi is generically finite onto its image. In most cases I also assume it is not a hyperplane; I don't really need this to be independent, and I guess most statements are okay without it. Okay, so what are the goals? I have several, but I will speak about these two. One is to determine the equation of S, which is a hypersurface, given by F_S equal to zero. The other, which I will speak a bit more about here, is to give information on the parametrization: for instance, which points of S are parametrized many times, corresponding to several values, or infinitely many values, of the parameters, and things like that. So let's start. Let Gamma_0 be the graph of phi; this sits inside (P^2 minus X) cross P^3, and we take the closure Gamma in P^2 cross P^3. We have a projection of this to P^3, and of course the image of Gamma is also S, and we are looking at the fibers of this projection restricted to Gamma. Okay, so let me give you a bit of the algebraic setting behind this. The very important object corresponding to the graph on the algebraic side is the Rees algebra: Gamma is the Biproj of the Rees algebra of I. Looking at this embedding corresponds to writing the Rees algebra as a homomorphic image of a polynomial ring, S equal to R adjoin T_0, T_1, T_2, T_3, mapping onto the Rees algebra; at least once I will write the grading: there is a shift by d in the grading, and I map T_i to f_i, that is, to f_i times t, and I put the standard bigrading on this, so that the degree of T_i is (0,1) and the degree of x, y, z is (1,0). So now my Rees algebra, which is a nice domain, is written in the form S modulo P, where P is a bihomogeneous prime ideal, the defining ideal of the Rees algebra. Of course it contains all the information we want, because once you intersect P with k adjoin T_0, T_1, T_2, T_3, you get exactly the ideal of the surface, which is a principal ideal, given by one element. However, it is pretty hard to fully understand the Rees algebra, all the equations here, the resolution, and so on.
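In symbols, the algebraic side of the graph is the following (a sketch of the setup; the bidegree conventions are as I understood them from the talk):
\[
\mathcal S := R[T_0,\dots,T_3]\ \twoheadrightarrow\ \operatorname{Rees}(I)=R[It]\subset R[t],\qquad T_i\mapsto f_i\,t,\qquad
\deg x_j=(1,0),\ \ \deg T_i=(0,1),
\]
so that $P=\ker$ is a bihomogeneous prime with $\Gamma=\operatorname{BiProj}(\mathcal S/P)$, and $P\cap k[T_0,\dots,T_3]=(F_S)$ gives the equation of the image surface.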
So, to compute this, or to understand the fibers (this also contains a lot of information on the fibers, because specializing at a point gives you the fiber), the Rees algebra is not so easy to put your hands on, so one passes, as usual, to the symmetric algebra; that is really the basic thing one does. One notices that P contains the elements of bidegree (*,1), the elements linear in the T's. What are these? They are exactly the syzygies: an element sum a_i T_i, with the a_i in R, lies in P exactly when sum a_i f_i = 0, so these are the syzygies written as linear forms in the T's. Now, if the base locus X is locally a complete intersection, then the Biproj of the symmetric algebra, or even a little better than that, is also Gamma; so in that case the graph is already defined by the syzygies. In general it is not so difficult to describe what the Biproj of the symmetric algebra is; let me write it, because I will use it: it is Gamma together with, for each point x of X which is not a complete intersection point, a piece {x} x W_x, where W_x is a linear subspace of P3. What is W_x? It is cut out by the specializations at x of the syzygies, which are linear forms in the T's; if the ideal needs more generators locally at x than a complete intersection would, then too few of the specialized syzygies survive at x, and you really do get a positive-dimensional linear space over such a point. For a point x which is not a complete intersection point I will write H_x for a linear form in the T's attached to x in this way, so that its zero locus is a hyperplane in P3. Now what one does is elimination: one uses the equations of the symmetric algebra, assuming the syzygies are known, to compute the image, and one can do that. So let me state the theorem. This has a long history; the statement below is from joint work with Botbol and Busé, but there is a long history behind it. Set nu_0 = 2(d - 1) minus the initial degree of I_X, where I_X is the saturation of I, and assume X is locally an almost complete intersection, that is, locally defined by at most three equations. Then for any nu at least nu_0, the graded piece of the symmetric algebra of degree nu in the x-variables is a torsion module over k[T], and the divisor associated to it, which is the pushforward under pi of the divisor it defines, equals deg(phi) times the class of the surface S, plus the sum, over the points x of X which are not complete intersection points, of a multiplicity, namely the difference between the Hilbert-Samuel multiplicity e_x and the degree d_x of X at x (which is nonzero exactly at the non-complete-intersection points), times the class of the hyperplane defined by H_x. So we have a multiplicity computation.
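To record the statement in symbols (this is my transcription of the formula as stated, with F_S the implicit equation, e_x the Hilbert-Samuel multiplicity and d_x the degree of X at x):

```latex
\[
\nu_0 \;=\; 2(d-1) \;-\; \operatorname{indeg}(I_X), \qquad I_X = I^{\mathrm{sat}},
\]
\[
\operatorname{div}\!\bigl[\operatorname{Sym}(I)_{(\nu,\ast)}\bigr]
\;=\; \deg(\varphi)\,[S] \;+\; \sum_{\substack{x\in X\\ \text{not l.c.i.}}} (e_x - d_x)\,[V(H_x)]
\qquad\text{for all } \nu \ge \nu_0 .
\]
```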
This enables one to compute the image explicitly; I will not give too much detail on that, and it is also used to understand the fibers. For the explicit computation one uses the fact that a resolution of the symmetric algebra is known; in fact this is also what is used to prove the theorem. There is an explicit resolution of the symmetric algebra by the so-called approximation complex. I will not explain it in full detail, but let me give a quick idea. You consider the Koszul complex of the forms f0, ..., f3 over R, and you can extend it over R[T], since that just contains R, so the homology stays the same with extended coefficients. And you have a second complex on the same modules, which is the Koszul complex on the T's over R[T]; the two differentials anticommute: call them delta_f and delta_T, then composing one after the other in either order gives the same map up to sign. You are using S for too many things: the polynomial ring, the symmetric algebra, and the surface. Yes, that is right; here the polynomial ring, there the symmetric algebra, and S the surface; it is not such a good choice of notation. Anyway, this anticommutation shows the following. The approximation complex is written like this: at the end you have R[T]; you have a map into it from Z_1 tensored with R[T], where Z_1 denotes the cycles of the Koszul complex of the f's over R, and the map is delta_T. The fact that the two differentials anticommute shows that if you continue with the higher cycle modules Z_i and keep using delta_T, this is indeed a complex; it is called the approximation complex, or Z-complex. The maps are easy to describe: a syzygy (a_0, ..., a_3) in Z_1 is simply sent to sum a_i T_i. In particular the H_0 of this complex is exactly the symmetric algebra in the presentation above, and the other homology modules are modules that depend only on I, not on the chosen generators. There is a lot one can say about this, but here I essentially need one important point: in our situation this complex is acyclic. So, under the hypotheses of the theorem, which imply that the Z-complex is acyclic, our divisor is just the determinant of this complex, in the classical sense of the determinant of a complex. Concretely, you take the graded pieces in some degree nu at least nu_0 in the x-variables: each term becomes a direct sum of copies of the polynomial ring k[T], the maps are matrices of linear forms in the T's, and an alternating product of determinants of suitable minors gives you the equation. What is maybe more interesting is to look at the first map of this complex in degree nu.
The first map is like this. You take the graded piece of Z_1 in the appropriate degree (the grading is such that a syzygy sum a_i T_i with the a_i of degree nu sits there), tensored with k[T], and you have a map to R_nu tensored with k[T], free modules over k[T]. The map is pretty simple to describe. Take generators of Z_1 in that degree; each one corresponds to a linear form in the T's with polynomial coefficients, and you can rewrite it as a sum, over the monomials m of degree nu in x, y, z, of m times a linear form L_m(T). The entries of the matrix are these linear forms, with one index running over the chosen syzygies and the other over the monomials of R_nu; I call this matrix M_nu. One way to see the result is to say that the implicit equation F is a gcd of the maximal minors of M_nu. In practice this is how people compute the image, and people in geometric modeling use exactly this kind of matrix representation of the surface. What is important for them is that the points of the surface are the values of the parameter T where this matrix, specialized at T, has its rank drop by at least one; and the point now is that you can also understand the fibers via these matrices, which is what I will explain next. So now we pass to the fibers. Maybe I should first state a very simple lemma of a geometric nature. It says, first, that if X is empty then the projection pi is just phi itself, and that, in any case, pi has at most finitely many one-dimensional fibers. One thing I will address is how many such fibers one can have, how to compute them, and how to see what the possibilities are. Again we use the syzygies. Why? Well, you have the scheme defined by the syzygies, the Biproj of the symmetric algebra, and we know how its fibers compare with those of Gamma: if X is locally a complete intersection they are the same, and in any case, if you project the Biproj of the symmetric algebra to P3, its fibers coincide with those of Gamma over every point, except possibly over points lying on one of the hyperplanes H_x = 0, the ones coming from specializing syzygies at the bad base points, and even there we know exactly what the difference is. So if you understand the fibers of the symmetric algebra, you understand the fibers of Gamma. And so now, what do I want to say next?
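To record the matrix construction just described (my notation; rows are indexed by the monomials of degree nu in x, y, z, columns by a chosen generating set of syzygies):

```latex
\[
\text{for a syzygy } s_j=\sum_i a_{ij}\,T_i,\qquad
\sum_i a_{ij}(x,y,z)\,T_i \;=\; \sum_{|m|=\nu} m\cdot L_{m,j}(T),
\qquad
M_\nu \;=\; \bigl(L_{m,j}(T)\bigr)_{m,j},
\]
\[
\gcd\bigl(\text{maximal minors of } M_\nu\bigr)
\;=\; F_S^{\ \deg\varphi}\cdot\!\!\prod_{x\ \text{not l.c.i.}}\!\! H_x^{\,e_x-d_x}
\qquad(\nu\ge\nu_0).
\]
```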
Maybe I will first speak a little bit about finite fibers, and then pass to one-dimensional fibers. Let me first give a theorem; the essential ingredient is the same as in the previous one, but let me state it this way. If X is locally an almost complete intersection, as before (one can be a bit more general), then, first, the Z-complex is acyclic, as I mentioned, and the regularity of the symmetric algebra is at most nu_0 = 2(d-1) minus the initial degree of I_X. Here you view the symmetric algebra as a quotient of B[x, y, z], with B = k[T0, ..., T3], and you take the regularity with respect to x, y, z, forgetting the degree in the T's. That is the essential point behind the following corollary. Let P be a prime of k[T], write k(P) for its residue field, and assume the fiber is finite, that is, the dimension of Sym(I) tensored with k(P) is at most one. Then, and this is quite a general fact, the regularity of Sym(I) tensored with k(P) is at most the regularity of Sym(I), hence at most nu_0. So the regularity of all the finite fibers is bounded by the same number, which is expected. But the same works for one-dimensional fibers: a one-dimensional fiber is a subscheme of P2, so it is not too hard to understand, and one has the following proposition: in the same framework, if the dimension of Sym(I) tensored with k(P) is two, that is, the fiber is one-dimensional, then the regularity of this fiber is at most nu_0 minus 1; and there are more precise results, depending on the fiber, and so on. As a consequence of all that, if you take a closed point t of the surface S, then for all nu at least nu_0 the Hilbert function of the fiber at t, evaluated at nu, equals the Hilbert polynomial of the fiber at nu, and this equals the corank of the matrix M_nu specialized at t. So this says in particular, at least when pi is finite, that the Fitting ideals of M_{nu_0}, or of M_nu for any nu at least nu_0, stratify things: the maximal minors give you the surface S, the next Fitting ideal gives you the points whose fiber has degree at least two, the next one the points with fiber of degree at least three, and so on. One can also detect the points with a one-dimensional fiber this way, by the fact that the coranks at nu_0 and at nu_0 + 1 are different: the Hilbert polynomial of a one-dimensional fiber is not constant, and since the regularity of the fiber is at most nu_0, the Hilbert function at nu_0 and nu_0 + 1 already shows it. Why is it the corank? Because you are just computing the dimension of a graded piece of the fiber: you have a presentation matrix, so the dimension of that piece as a vector space over k(P) is the corank of the specialized matrix.
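In symbols, the criterion just described reads as follows (again my transcription, for a closed point t of S and nu at least nu_0, as I understand the statement):

```latex
\[
\operatorname{HF}_{\pi^{-1}(t)}(\nu)
\;=\; \operatorname{HP}_{\pi^{-1}(t)}(\nu)
\;=\; \operatorname{corank}\, M_\nu(t)
\qquad (\nu\ge\nu_0),
\]
\[
\dim \pi^{-1}(t) = 1
\;\Longleftrightarrow\;
\operatorname{corank} M_{\nu_0}(t) \neq \operatorname{corank} M_{\nu_0+1}(t).
\]
```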
So now I will concentrate on the fibers of dimension one. The problems are how to detect them, how to get their equations, and, the question I will address, how many one can have and how to bound them. If I have a point t in the image with a one-dimensional fiber, we associate to it a form h_t, the defining equation of the unmixed part of the fiber: the one-dimensional part of the fiber is a curve in P2, given by one equation, and h_t is that equation. The fiber may have embedded components or extra points, that is fine, it need not be unmixed, but h_t captures the divisorial part. Now there is a lemma, not very hard to prove, which tells you that one-dimensional fibers correspond to very special decompositions of your ideal. Let me state it, and maybe underline it a little. Let t_1, ..., t_s be the points of the surface with one-dimensional fibers, with corresponding forms h_1, ..., h_s, and choose a linear form L in the T's that does not vanish at any of them, so L(t_i) is nonzero for every i. Then for every i one can find forms g_{1i}, g_{2i}, g_{3i} such that the ideal decomposes as I = (L(f)) + h_i (g_{1i}, g_{2i}, g_{3i}), where L(f) is the corresponding linear combination of the f's. So a general combination of your generators works for L, and each one-dimensional fiber corresponds to such a decomposition; the more fibers, the more decompositions. The question then is how many such decompositions one can have, and how to bound this. It looks pretty simple at first, but it took some time to understand, and also to make examples; it is easy to find some examples, but at first we did not really know how to make good ones. I will show you one in a moment. The first thing that comes out of the lemma, and it is pretty simple, is that the ideal I is contained in the complete intersection (L(f), h_i) for every i, with the first element L(f) fixed once and for all. In particular, since a complete intersection is saturated, this ideal also contains the saturation of I, which is I_X. So take any F in I_X with degree less than d: modulo h_i, F lies in the ideal generated by L(f), which has degree d, so for degree reasons h_i must divide F. And the h_i never have a common factor: a common factor would be a curve contracted to two different points of the surface, which is impossible since we are taking two different fibers. So in fact the product h_1 ... h_s divides every F in I_X of degree less than d. The first thing you notice, then, is that if the initial degree of I_X is smaller than d, which happens in many cases, then the sum of the degrees of the h_i, the total degree of the one-dimensional fibers, is at most this initial degree.
One might hope to bound this sum by the initial degree of I_X in general, but that is false as stated, so let me give the first result, and then an example. The first way to attack this, something done by Tran Quang Hoa in his thesis, is to remark that you can take powers of this complete intersection: its p-th power contains I to the p, hence also the saturation, and in fact the symbolic power. So you get: if the initial degree of the symbolic power I_X^(p) is smaller than p times d, then the sum of the degrees of the h_i is at most this initial degree. It is not hard to see that there is always some p for which this holds; the point is to estimate such a p. The proposition is: if X is locally a complete intersection, then the initial degree of I_X^(p) is smaller than p times d for all p at least d/2. So now you have something explicit, and as a corollary, essentially a corollary, the sum of the degrees of the h_i is bounded above by roughly the integer part of d/2 times (d - 1); one can do slightly better without much extra work. But this is not very satisfactory compared with the examples we have, because there we expect a bound that is linear in d. So let me now give the example. It comes from a Hilbert-Burch matrix. Take the 3 x 4 matrix with rows (-z, 0, y, 0), (0, -z, 0, x) and (g, -f, 0, 0), where f and g are two forms of degree d - 2; you can make many choices, I will take f = x^(d-2) - z^(d-2) and g = y^(d-2) - z^(d-2). The maximal minors of this matrix give you the ideal, which is I = (xyf, xyg, xzf, yzg), four forms of degree d. The ideal of 2 x 2 minors of the matrix is primary for (x, y, z), and from this you see that X is locally a complete intersection in this case. The degree of X can be computed from the resolution, and you find that it is d^2 - 2d + 3, which implies that the degree of the image is d^2 minus this, that is, 2d - 3. The syzygies are very simple, let me write them: T2 y - T0 z, T3 x - T1 z, and T0 g - T1 f. Are these the generators of the syzygy module? Yes: two of bidegree (1,1) and one of bidegree (d-2, 1); f and g have the same degree, and the four polynomials all have the same degree. And careful with the entries on the board: f and g should perhaps be switched in the last row of the matrix, and maybe there is a minus sign somewhere; anyway, up to signs this is it.
Anyway, you have that the 2 x 3 matrix with rows (0, T2, -T0) and (T3, 0, -T1) times the column (x, y, z) is zero; that is just the first two syzygies. From that you get an inverse: the map from S back to P2 that sends T to the 2 x 2 minors of this matrix, namely (T1 T2 : T0 T3 : T2 T3). In particular you see that phi is birational onto its image; this is not so important for what I will say afterwards, but it describes the geometry. And now you have many one-dimensional fibers, essentially because f depends only on x and z and g only on y and z. You see them by specializing the syzygy equations at points of P3. For instance, if you take t0 = t1 = t2 = 0, that is the point (0:0:0:1), the specialized equations reduce to x, so the fiber is the line x = 0. Similarly the point (0:0:1:0) gives the fiber y = 0. Now look, for instance, at fibers over points of the form (0 : t1 : 0 : 1): the first equation becomes zero, the second becomes x - t1 z, and the third becomes t1 (x^(d-2) - z^(d-2)); these are the equations of the fiber. Substituting x = t1 z, you are left with t1 (t1^(d-2) - 1) z^(d-2), so you get a one-dimensional fiber exactly when t1 is a (d-2)-nd root of unity, which gives d - 2 fibers, one for each such value of t1; they correspond to the linear factors of f. In fact, if you take for f any polynomial in x and z, you get fibers from its linear factors in the same way. Similarly, and it is symmetric in the roles of (x, f) and (y, g), exchanging the corresponding rows, the points (t0 : 0 : 1 : 0) with t0^(d-2) = 1 give another d - 2 one-dimensional fibers, corresponding to the factors of g; you can look in the paper for the details. So in total you have twice (d - 2) plus the two coordinate lines, that is, 2(d - 1) one-dimensional fibers.
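A quick computational sanity check of the example, with d = 5 (so f = x^3 - z^3 and g = y^3 - z^3). This is my own verification sketch: it checks the three syzygies of (xyf, xyg, xzf, yzg) and confirms that over a point (0 : t1 : 0 : 1) the specialized syzygy forms cut out a one-dimensional fiber exactly when t1^3 = 1.

```python
from sympy import symbols, simplify, expand

x, y, z, t1 = symbols('x y z t1')
d = 5
f = x**(d-2) - z**(d-2)
g = y**(d-2) - z**(d-2)
F = [x*y*f, x*y*g, x*z*f, y*z*g]          # f0, f1, f2, f3

# the three syzygies:  T2*y - T0*z,  T3*x - T1*z,  T0*g - T1*f
print(simplify(y*F[2] - z*F[0]))           # 0
print(simplify(x*F[3] - z*F[1]))           # 0
print(simplify(g*F[0] - f*F[1]))           # 0

# specialize the syzygy forms at the point (T0:T1:T2:T3) = (0 : t1 : 0 : 1)
forms = [0*y - 0*z, 1*x - t1*z, 0*g - t1*f]
# substituting x = t1*z, the remaining equation is -t1*(t1**(d-2) - 1)*z**(d-2),
# so the fiber is the line x = t1*z exactly when t1**3 = 1
residual = expand(forms[2].subs(x, t1*z))
print(residual.equals(-t1*(t1**(d-2) - 1)*z**(d-2)))   # True
```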
So that is the number of fibers: you have 2(d - 1) one-dimensional fibers in this example. Now, going back, you know from the previous result that the initial degree of the symbolic square I_X^(2) must be at least 2(d - 1), since the sum of the degrees of the fibers is already 2(d - 1). And in fact, if you take the square, you can check that xyfg belongs to the symbolic square; in fact I think a bit more is inside, but anyway xyfg is in there, and it has degree 2(d - 1), so you have equality: the initial degree of the symbolic square is exactly 2(d - 1). So the bound is sharp, at least in terms of the degree of the first element in this case. It remains to see whether there can be more fibers than that and how one can bound them, and for this we use the Jacobian matrix, which is the most efficient technique we have so far. The main point is that if the characteristic of the field does not divide d, then the Jacobian matrix controls the rank of the differential of phi, from the tangent space of the source to the tangent space of the target. Using this, one shows the following proposition; let me state it for a rational map from Pm to Pn, so that one sees the numerics, even though we will use it for P2 to P3. Take a rational map phi from Pm to Pn defined by forms of degree d, let V be a subvariety of Pm not contained in the base locus of phi, assume that the characteristic of k does not divide d, and let r be the dimension of V minus the dimension of phi(V), the image being taken outside the base locus. Then V must be contained in the zero locus of the ideal of (m - r + 2)-minors of the Jacobian matrix of the defining forms.
In our case we can be totally explicit; we do not need this general result, but it indicates what makes things work in higher dimension, and here we can do a direct computation. In our case the Jacobian matrix of (f0, ..., f3) with respect to x, y, z is a 4 x 3 matrix. We know that if there is a one-dimensional fiber with equation h, then after a change of generators the ideal has the form I = (p, h g1, h g2, h g3), where p is a general combination of the f's. Write down the Jacobian of these generators: if h factors as p1^(e1) ... pr^(er), then a direct computation shows that p1^(2e1 - 1) ... pr^(2er - 1) divides every 3 x 3 minor of the Jacobian matrix J(f); you just write the Jacobian in that form, differentiate, and look at which power of each p_i divides each minor. It is a moderate computation. So you get the following corollary. Let h_1, ..., h_s be the equations of the one-dimensional fibers, in the sense of before, the unmixed parts, and let h be their product; decompose h = p1^(e1) ... pu^(eu) into pairwise coprime irreducible factors. Then, if the ideal I_3(J(f)) of 3 x 3 minors is not zero (I will comment on that in a second), the degree of h is bounded above by the sum over i of (2e_i - 1) times the degree of p_i, which is bounded above by the degree of F, where F is the gcd of the 3 x 3 minors of J(f); and this is of course at most 3(d - 1). So we have a bound which is not exactly what the examples suggest, but it is not too bad.
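The chain of bounds from the corollary, written out (my transcription; F denotes the gcd of the 3 x 3 minors of the Jacobian matrix of f0, ..., f3):

```latex
\[
h=h_1\cdots h_s=p_1^{e_1}\cdots p_u^{e_u},\qquad
\prod_{i=1}^{u} p_i^{\,2e_i-1}\ \Big|\ \text{every } 3\times 3 \text{ minor of } J(f),
\]
\[
\deg h\;\le\;\sum_{i=1}^{u}(2e_i-1)\deg p_i\;\le\;\deg F\;\le\;3(d-1)
\qquad\text{provided } I_3\bigl(J(f)\bigr)\neq 0 .
\]
```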
Now a remark on the hypothesis: I_3(J(f)) being nonzero implies that the function field extension defined by phi is separable, that is, phi is generically étale, and the converse holds if the characteristic of k does not divide d. So in characteristic zero there is never a problem, and there is no problem for the applications people have in mind; but in the inseparable case there is still something to do, and there we do not know how to prove this type of bound. Let me end with a final remark: one can improve the bound a little. It comes from the following proposition. Assume again that the characteristic of k does not divide d. If the degree of F, the gcd of the 3 x 3 minors of the Jacobian, equals 3(d - 1) - delta, so if the gcd has high degree, then there is a syzygy of the f's of degree delta. The proof is very simple. If D0, D1, D2, D3 denote the signed maximal minors of the Jacobian matrix, then for each variable x_j you have the identity sum over i of (-1)^i D_i times the partial of f_i with respect to x_j equals zero; it is the expansion of a determinant with a repeated column. Now use the Euler relations, sum over j of x_j times the partial of f_i equals d times f_i: multiplying the previous identities by x_j and summing over j, you get d times the sum of (-1)^i D_i f_i equals zero, and since the characteristic does not divide d, the vector of signed minors is a syzygy of the f's. But each D_i is divisible by F, say D_i = F a_i, so the a_i give a syzygy of degree 3(d - 1) minus the degree of F, which is delta. So, as a consequence, the degree of F, which bounds the sum of the degrees of the h_i, or even the weighted sum with the multiplicities, is at most 3(d - 1) minus the initial degree of the syzygy module. This is an improvement, but it is still not sharp in our examples, because there the initial degree of the syzygies is 1, so this gives 3(d - 1) - 1, while the example only has 2(d - 1). Maybe one can do better with more than that; we do not know what happens between 2(d - 1) and 3(d - 1) - 1, whether one can prove something in general. We do not know. OK, so that is it; thank you.
We study rational maps from a projective space of dimension two to another of dimension three, both over the same field. We will start by giving the general framework and first results obtained on this question by Botbol, Busé and myself. Then I will turn to questions concerning the fibers of dimension one that such a map can have and present two ways to address this question, the first by Tran Quang Hoa, and the second by the same author together with Dale Cutkosky and myself. Examples show that our estimates are pretty sharp, but leave possibilities for improvement.
10.5446/59209 (DOI)
[The opening minutes of this talk are garbled in the transcription; only fragments are recoverable: "This is a sort of innovation ... in the original paper, the definition was a bit different ... when we have homogeneous ..."]
So the theorem is this: if I is a homogeneous ideal and in(I) is a square-free monomial ideal, then S/I and S/in(I) have the same depth and the same Castelnuovo-Mumford regularity; in fact the graded pieces of their local cohomology modules have the same dimensions. In general one only has an inequality: the dimension of a graded piece of the local cohomology of S/I is less than or equal to the corresponding one for S/in(I), and the theorem says that under the square-free hypothesis they are equal. For this we use the standard Gröbner degeneration. We adjoin a variable t, which we use to homogenize: let P = S[t] and let A be the quotient of P by the homogenization of I with respect to a weight realizing the initial ideal. So we have that the map from k[t] to A is flat, with special fiber S/in(I) and generic fiber S/I; by special fiber I mean the fiber at the maximal ideal (t), and by generic the fiber away from t = 0. And if one looks at the proof, the difference between these local cohomology modules is measured by the torsion of the Ext modules of A over P. So in particular one can say that they are all equal if and only if these Ext modules have no torsion, no t-torsion.
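To fix notation for the degeneration just described (a standard construction; here w is a weight vector with in_w(I) = in(I)):

```latex
\[
P = S[t], \qquad A = P/\operatorname{hom}_w(I), \qquad k[t] \longrightarrow A \ \text{flat},
\]
\[
A/tA \;\cong\; S/\operatorname{in}(I) \quad\text{(special fiber)},
\qquad
A\otimes_{k[t]}k[t,t^{-1}] \;\cong\; (S/I)[t,t^{-1}] \quad\text{(generic fiber)} .
\]
```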
OK, so now, as written on the slide, A_n denotes the reduction modulo t^(n+1): A_n = A/(t^(n+1)), and similarly P_n = P/(t^(n+1)) and R_n = k[t]/(t^(n+1)). In particular A_0 is isomorphic to S/in(I). Now one can prove, and it is not difficult, that the following are equivalent: (1) the Ext modules of A over P have no t-torsion; (2) these Ext modules are flat as k[t]-modules; (3) for every n, the Ext modules of A_n over P_n are flat as R_n-modules. The equivalence of (1) and (2) holds simply because R = k[t] is a principal ideal domain. The equivalence with (3) can be seen because, to compute these Ext modules, you take a free resolution of A as a P-module; since t^(n+1) is a nonzerodivisor on P and on A (A is flat over k[t]), reducing the resolution modulo t^(n+1) still gives a free resolution of A_n over P_n, and from this you get the equivalence. OK. So now we want to show (3), by induction on n, under the assumption that in(I) is square-free. The case n = 0 is clear, since R_0 is the field k. How does the proof go? The idea comes from a recent work of Kollár and Kovács on deformations of singularities; they prove something much more general than what I am showing today, but in our setting the argument comes down more easily. So assume in(I) is square-free. Since the reduced ring of A_n is A_0 (A_n is P modulo a homogeneous ideal plus t^(n+1), and taking the radical just adds t, because in(I) is a radical ideal), the natural maps on local cohomology from A_n to A_0 are surjective; this is where the square-free hypothesis enters, through the cohomologically full property of Stanley-Reisner rings that I will come back to. Now look at the surjection from A_n to A_0: its kernel is t A_n, which is isomorphic to A_{n-1}, so we have a short exact sequence 0 -> A_{n-1} -> A_n -> A_0 -> 0. Take the long exact sequence of local cohomology: since all the maps to the local cohomology of A_0 are surjective, the long exact sequence breaks into short exact sequences. By graded local duality we then get short exact sequences of Ext modules, 0 -> E(A_0) -> E(A_n) -> E(A_{n-1}) -> 0, where E denotes the relevant Ext. Now one identifies the image of E(A_0) inside E(A_n) with t^n times E(A_n). Why? Because the surjection from A_n to A_0, composed with the inclusion of A_0 as t^n A_n inside A_n, is just multiplication by t^n; applying the contravariant, linear Ext functor, the induced composition is again multiplication by t^n, and this lets you identify that submodule with t^n E(A_n). So from the exact sequence above, E(A_{n-1}) is isomorphic to E(A_n) modulo t^n E(A_n). And then, with some more work, one shows that this isomorphism is the natural base-change map, so E(A_{n-1}) is isomorphic to E(A_n) tensored over R_n with R_{n-1}. These two things together, since E(A_{n-1}) is flat over R_{n-1} by induction, let you conclude that E(A_n) is flat as an R_n-module; this is a standard criterion for flat modules over such rings. So this is the proof. It is not one line, one has to do several steps, but they are not very difficult.
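Schematically, the inductive step just described (my rendering of the argument; E abbreviates the relevant Ext, or dual of local cohomology, module):

```latex
\[
0 \to A_{n-1} \xrightarrow{\ \cdot t\ } A_n \to A_0 \to 0
\;\;\leadsto\;\;
0 \to E(A_0) \to E(A_n) \to E(A_{n-1}) \to 0 ,
\]
\[
E(A_0) \simeq t^{\,n} E(A_n), \qquad
E(A_{n-1}) \simeq E(A_n)\otimes_{R_n} R_{n-1}
\;\;\Longrightarrow\;\;
E(A_n)\ \text{flat over}\ R_n = k[t]/(t^{n+1}).
\]
```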
And the crucial point is what is called the cohomologically full property of Stanley-Reisner rings. OK. So now, let me just say that when we uploaded this paper on the arXiv, we concluded it with five questions. We received several comments, and it turned out that only one is still open. So let me go through these questions, and also the answers in most of the cases. The first question was this: if P is a prime ideal with a square-free initial ideal, is it true that S/P satisfies Serre's condition (S2)? This was the question. One can prove a weaker statement in this direction, so we wondered whether it could be pushed further, but it turns out that it is not true: the answer is negative. Jenna Rajchgot told us that there is an example of a prime ideal with square-free initial ideal not satisfying the condition, and this example was actually one we already knew: we were at MSRI in 2012, and Aldo gave her this example, an example of a prime ideal with square-free initial ideal which is not Cohen-Macaulay. This same example also provides a negative answer to the second question: if P is a Knutson ideal, is S/P Cohen-Macaulay? Actually, Jenna had asked for an example with this in mind, and we, that is Aldo, provided her this example; we were not aware at the time that this prime ideal would be Knutson, and I will explain in a moment what that means, but she told us that this P is indeed a Knutson prime ideal. Knutson ideals are a particular class of ideals, all with square-free initial ideals, motivated in positive characteristic by Frobenius splittings: you start from a polynomial with square-free leading term, and you repeatedly take irreducible components, sums and intersections; all the ideals you produce this way have square-free initial ideals. If you start from the polynomial f which is just the product of the variables, you get exactly the square-free monomial ideals, but you can start from other f's, and the most interesting ideals of this type include, for example, the ideals of Schubert varieties and the like. OK, so those are questions one and two; and yes, the prime ideal of question one is Knutson. Question three was this: in positive characteristic p, if I is an ideal such that in(I) is square-free with respect to degrevlex, is it true that S/I is F-pure? Why degrevlex? Because there is a famous example showing that, with a suitable term order, one can have an ideal that is not F-pure although its initial ideal is square-free; but the order that works in that case is lex. So for degrevlex the problem is more delicate, but it is not true either. In fact, we discovered a result of Ohtani from 2013 showing that the binomial edge ideal of the five-cycle defines a ring which is not F-pure in characteristic two. A binomial edge ideal, recall, is attached to a graph: you take a 2 x n matrix of variables, where n is the number of vertices of the graph, and you take the 2 x 2 minors corresponding to the edges.
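As a small illustration of the object just defined, here is a check, in SymPy (my own sketch), that the degrevlex initial ideal of the binomial edge ideal of the five-cycle is generated by square-free monomials, consistent with the Cartwright-Sturmfels property mentioned next.

```python
from sympy import symbols, groebner, Poly

xs = symbols('x1:6')
ys = symbols('y1:6')
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # the 5-cycle

# binomial edge ideal: the minors x_i*y_j - x_j*y_i for each edge {i, j}
gens = [xs[i]*ys[j] - xs[j]*ys[i] for (i, j) in edges]

G = groebner(gens, *xs, *ys, order='grevlex')       # reduced Groebner basis

def is_squarefree(m):
    return all(e <= 1 for e in Poly(m, *xs, *ys).monoms()[0])

lead_monomials = [Poly(g, *xs, *ys).LM(order='grevlex') for g in G.exprs]
print(all(is_squarefree(m) for m in lead_monomials))   # expected: True
```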
Now, by recent work of Conca, De Negri and Gorla, binomial edge ideals are Cartwright-Sturmfels ideals, an interesting notion that they introduced and studied. In particular, Cartwright-Sturmfels ideals have the property that every initial ideal, with respect to any monomial order, is square-free. So if you take degrevlex and compute the initial ideal of this binomial edge ideal, it is square-free, and this provides a negative answer to question three. Then question four, maybe the most embarrassing one. We asked: take a prime ideal P such that the initial ideal with respect to degrevlex is square-free; is S/P normal? Why degrevlex? Because for lex it is very easy to produce a counterexample, like xyz + y^3 + z^3: it is a prime ideal, it is not normal, and if you take x > y > z and the lex order, the initial term is xyz, which is square-free. However, it turns out that already in '95 there is an example of a standard graded Hodge algebra, an algebra with straightening law, which is a domain and is not normal; and since Hodge algebras admit a Gröbner degeneration to a square-free monomial ideal, this provides a negative answer to question four as well. OK, the last question, the only one that is still open. We asked: if P is a homogeneous ideal with a square-free initial ideal and the projective variety defined by P is nonsingular, is it true that S/P is Cohen-Macaulay with negative a-invariant? So far we are not aware of a counterexample. I should also say that recently, working with Alexandru Constantinescu and Emanuela De Negri, we proved that this is true when the term order is degrevlex; for other orders we do not know so far. OK, so I finish well before time, this was the last slide, so you can ask questions. Thank you. So, anyway, thanks to Giulio for such a wonderful talk; are there any questions? Question from the audience: do you know anything about the Lefschetz property? If the initial ideal has the Lefschetz property, can you transfer it to the ideal? I think you mean the Lefschetz property of the Artinian reduction? Yes. So you might ask whether the generic Artinian reduction has the Lefschetz property, whether you can transfer it from the initial ideal: if the initial ideal is square-free and has the weak Lefschetz property, does the ideal have it? I have no idea; I do not know. Another question: do you know some nice classes of ideals for which the questions do hold, say for question two? Yes: by a result of Brion, whenever you have a prime ideal which is multiplicity-free with respect to the relevant multigrading, the quotient must be normal and Cohen-Macaulay, and Cartwright-Sturmfels ideals are of this type. So for questions two and four, that is a class where we know the answer is positive.
And by the way, I should say also that, for question two, I had asked Allen Knutson whether he thought it could be true; he told me it would be very nice, but he was not aware of the example either. Another question: it would be interesting to see if the same theorem could be true in the local case. You take a local ring, you take a local order, and you assume that the initial ideal is square-free; can you conclude? Yes, there you would look at the associated graded ring. I think it would be interesting, because if you start from a local ring, for interesting ideals the relevant initial objects are often square-free; I do not know whether you land in the standard graded situation, but the same kind of theorem may well be true, perhaps with some extra care, provided the special fiber of the degeneration is cohomologically full. That is why it would be important to provide more classes of cohomologically full rings. For example, I showed it for square-free initial ideals, but the statement is even true if you compute a SAGBI basis and the resulting toric ring is cohomologically full. There is no characterization of cohomological fullness, but, for example, if the toric ring is seminormal, it is cohomologically full. Any other questions? Question: there is the F-purity question again; for example, is it true that for Cartwright-Sturmfels ideals those rings are F-pure? Yes, thanks. About this question: in fact, I should say that the example of Anurag Singh gives a ring that is not F-pure in all but finitely many characteristics, whereas the example above, the binomial edge ideal of the five-cycle, fails to be F-pure only in characteristic two; we checked that experimentally on the computer. So one could ask whether, for Cartwright-Sturmfels ideals, S/I is F-pure in all large characteristics, that is, of dense F-pure type. Question: can you say anything about the Betti numbers, maybe how they compare with those of the gin, the generic initial ideal? For square-free initial ideals, as you said, the extremal Betti numbers are the same, but the other ones, how much do they change? We cannot say anything. For example, a binomial edge ideal is generated in degree two, but if you take a square-free initial ideal of it, it need not be generated in degree two, so already beta zero changes. And this comparison argument works only for the Ext modules; we did not try to do something for Tor, and for sure in general it is not possible to get the same result, though maybe one can do something for some other special Betti numbers. Could it be true that the Betti numbers are less than or equal to those of the generic initial ideal? The gin has the same regularity and projective dimension; are its Betti numbers necessarily bigger than for these square-free degenerations? What we have checked is that if you have an ideal with a square-free initial ideal, the gin of the ideal and the gin of the square-free initial ideal may not be the same. If they had the same gin, that would explain why the invariants are the same, but they do not have the same gin. For instance, if you take the ideal of 2-minors of a 3 x 3 matrix and you compute the gin of the ideal and the gin of its square-free initial ideal, they are different.
In fact this was our hope some time ago, to prove the conjecture that way, by comparing with the gin, because then it would follow; but the way we ended up proving it, that turned out not to be the case. Any other question? Thank you.
Let S be a polynomial ring, I a homogeneous ideal and denote by in(I) the initial ideal of I w.r.t. some term order on S. It is well-known that depth(S/I) >= depth(S/in(I)) and reg(S/I) <= reg(S/in(I)), and it is easy to produce examples for which these inequalities are strict. On the other hand, in generic coordinates equalities hold for a degrevlex term order, by a celebrated result of Bayer and Stillman. In a joint paper with Aldo Conca, we prove that the equalities hold as well under the assumption that in(I) is a square-free monomial ideal (for any term order), solving a conjecture of Herzog. In this talk, after discussing where this conjecture came from, I will sketch the proof of its solution.
10.5446/59212 (DOI)
Hopefully it will become clear why it is good to have a topologist on board in a couple of minutes, but first I want to give a shout-out to Shri, who showed a very beautiful application of the Lefschetz properties on Monday. So hopefully everyone is convinced that these are good properties to have in your toolkit; we are going to explore them a little today. By way of further motivation, let me start on the geometric side by considering a smooth complex projective variety. For such a variety, the cohomology ring has two very nice properties. One is Poincaré duality, which in our language, and I will prefer to use this, corresponds to the Artinian and Gorenstein properties; I will primarily look at graded Artinian Gorenstein algebras today and abbreviate these properties by AG. The second important property is given by the Hard Lefschetz theorem, which concerns the multiplication map by the cohomology class of a hyperplane on this ring; algebraically this translates into what is nowadays called the strong Lefschetz property for graded Artinian algebras. So I want to consider these two algebraic properties, motivated by these two nice properties of cohomology rings, and I want to consider them in the context of a construction that comes from topology, called a connected sum. Let me give the obligatory picture. Suppose we have some variety X, let me draw a cartoon picture of X, and remove a small disc from it; then take a different variety Y, remove a little disc from it as well, and identify the boundaries of the two discs, as in this picture. The result of this gluing construction is the connected sum of X and Y. This topological construction leads to an algebraic notion of connected sum: there is an algebraic construction that takes the cohomology rings of the two varieties and puts them together to give the cohomology ring of the new variety. I will present that construction and then talk about its algebraic properties. All right, so let us get going with fiber products and connected sums. The fiber product of two rings A and B is the pullback in a diagram that looks as follows: A and B both map to another ring T, and the fiber product fits into the diagram as the pullback. Formally, the fiber product of A and B over T is defined to be the subring of the product A x B consisting of pairs (a, b) such that pi_A(a) = pi_B(b), that is, the images of a and b in T agree. I should say I am presenting this following a really nice paper by Ananthnarayan, Avramov and Moore. The other notion I want to present is the connected sum. Let me specialize to the connected sum of two Gorenstein algebras A and B over a Cohen-Macaulay ring T such that, say, the dimensions of the three rings coincide. The connected sum results from a commutative diagram of the following form; it looks very much like the previous diagram, with A and B mapping to T, but this time I want the ring homomorphisms to be surjective. In this case, because A and B are Gorenstein, there is a way to identify the canonical module of T with an ideal in A and with an ideal in B.
I will call those identifications, or those inclusions, i_A and i_B. Given such a diagram, the connected sum of A and B is defined to be the quotient of the fiber product of A and B by the diagonal image of the canonical module, that is, by the elements of the form (i_A(x), i_B(x)) where x ranges over the canonical module of T. To begin with, I want to present a quick example, just so we are all on the same page regarding these constructions. For A let us take k[x]/(x^5), for B let us take k[y]/(y^5), and let both map to T = k[z]/(z^2) by the most natural maps you could imagine: x goes to z and y goes to z. Pictorially, at the level of vector spaces, a basis of B is given by the monomials 1, y, y^2, y^3, y^4, and a basis of T by 1 and z, with the first two basis vectors of B mapping to those of T. Now look at the canonical module of T, which is isomorphic to T itself, so I write it again as k[z]/(z^2). Under the identifications, its generator 1 is mapped to y^3 in B and to x^3 in A: these are the elements dual, via the pairing on these Gorenstein algebras, to the images of the basis of T. Now let us construct the fiber product and the connected sum. It is not hard to see that (x, y) is an element of the fiber product, because both coordinates map to z; it is also easy to see that (x^2, 0) is in the fiber product. And it turns out these two elements generate the fiber product of A and B, so it can be written as a quotient of a polynomial ring in variables u and v, with u corresponding to (x, y) and v to (x^2, 0). Some relations are apparent: u^5 is obviously zero, and, let me copy the others, v^3 = 0, u^3 v = 0, and here is one you might not see immediately: u^2 v = v^2. That last one looks weird because it is not homogeneous, but we can fix it by assigning degree one to u and degree two to v. So these constructions give graded algebras, but not necessarily standard graded ones. For the connected sum we get a similar presentation, except that we additionally have to kill the pair (x^3, y^3), which is u^3; so we mod out u^3, v^3 and u^2 v - v^2. All right, let me list a few properties of these constructions. First, assume that A, B and T are graded Artinian Gorenstein and that A and B have the same socle degree; everything will be graded in my talk, and this is the typical setting. Then the fiber product and the connected sum are also graded. The fiber product is Cohen-Macaulay, here Artinian, of type two; the connected sum is Artinian Gorenstein, given that A and B are. The Hilbert functions of the fiber product and of the connected sum are completely determined by those of A, B and T. For the fiber product, the Hilbert function is the sum of the Hilbert functions of A and B minus the Hilbert function of T.
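A quick check of the relations claimed for this example, representing elements of the fiber product as pairs of polynomials reduced modulo x^5 and y^5 (my own verification sketch):

```python
from sympy import symbols, expand, div

x, y = symbols('x y')

def in_A(p):   # reduce modulo x**5 in A = k[x]/(x^5)
    return div(expand(p), x**5, x)[1]

def in_B(p):   # reduce modulo y**5 in B = k[y]/(y^5)
    return div(expand(p), y**5, y)[1]

def mul(p, q):           # componentwise product of pairs (a, b)
    return (in_A(p[0]*q[0]), in_B(p[1]*q[1]))

def power(p, n):
    r = (1, 1)
    for _ in range(n):
        r = mul(r, p)
    return r

u = (x, y)        # both coordinates map to z, so this pair lies in the fiber product
v = (x**2, 0)     # maps to (z^2, 0) = (0, 0), so this pair does too

print(power(u, 5))                       # (0, 0): u^5 = 0
print(power(v, 3))                       # (0, 0): v^3 = 0
print(mul(power(u, 3), v))               # (0, 0): u^3 v = 0
print(mul(power(u, 2), v), power(v, 2))  # both (x**4, 0): u^2 v = v^2
```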
So the Hilbert function of the connected sum is that of A, plus that of B, minus that of T, minus, once again, the Hilbert function of T, but shifted in degree by the socle degree of A minus the socle degree of T. All right. Let me also say a word about inverse systems.

[Audience question, roughly: the connected sum on the board is codimension-two Gorenstein, so it should be a complete intersection, yet three relations are listed.] Right, one generator is not needed: v^3 is redundant. Thank you. [Another question:] Does this construction depend on the chosen embeddings of the canonical module? In principle, yes: it can depend on the field and on how you pick the embeddings, for example you could tweak them by a constant in k. But I always fix the diagram to begin with, so I assume the maps are given.

OK, so back to inverse systems. Assume that my two starting algebras have Macaulay dual polynomials F_A and F_B, and that they are quotients of polynomial rings R and R' in disjoint sets of variables. Then the inverse system of the fiber product over k is given by the two Macaulay dual polynomials of A and B, and there actually exist choices of such dual polynomials so that the dual polynomial of the connected sum is just the difference of the two. One consequence is that connected sums over the residue field can be recognized by the fact that their Macaulay inverse polynomial can be written as a difference of two polynomials in disjoint sets of variables, and that will be very important for the next thing I am going to say.

So now, finally, let's get to the Lefschetz properties, and I want to discuss them for these two constructions. Let me define them first. A graded Artinian algebra A has the weak Lefschetz property if there is a linear form such that multiplication by that linear form has maximal rank in every degree, and it has the strong Lefschetz property if there is a linear form such that multiplication by any power of that linear form has maximal rank on every graded component. I will write L for the set of such linear forms; if one of these properties holds, this is a nonempty Zariski-open set in the space of all linear forms.

The jumping-off point for this project is the following theorem of Maeno and Watanabe from 2009; I am paraphrasing a little, you will not find it written exactly like this by them. If A and B are Artinian Gorenstein algebras of the same socle degree, then the connected sum of A and B over the residue field satisfies the strong Lefschetz property if and only if A and B individually satisfy the strong Lefschetz property. The key here is exactly the fact that the Macaulay dual polynomial of the connected sum over k is the difference of the two respective Macaulay dual polynomials; that is what goes into the proof. As a small addendum, the proof shows that the set of Lefschetz elements for the connected sum is essentially the Cartesian product of the two sets of Lefschetz elements,
in the sense that if one adds a Lefschetz form for A and a Lefschetz form for B, one gets a Lefschetz form for the connected sum. Inspired by this, we set out to think about Lefschetz properties for more general connected sums and fiber products, and here is our main result, under the same hypotheses as before: A and B Artinian Gorenstein of the same socle degree. Essentially we have the same statement with the fiber product instead of the connected sum: the fiber product of A and B over the residue field satisfies the strong Lefschetz property if and only if A and B individually satisfy it, and furthermore the Lefschetz locus of the fiber product is the product of the respective Lefschetz loci. This is perhaps not so surprising. What I find more interesting is what happens when we abandon the residue field and take the construction over a more general T. What we can prove is the following. The special feature of the residue field is that its socle sits at the very bottom, so one can think about imposing restrictions on the socle degree of T. If the socle degree of T is roughly less than half the common socle degree of A and B, by which I mean at most the socle degree of A minus one, divided by two, and A and B satisfy the strong Lefschetz property, then the fiber product and the connected sum satisfy the weak Lefschetz property, but in general they need not satisfy the strong one.

I want to illustrate this last part a little, because it may be counterintuitive, by going back to the earlier example and generalizing it slightly: take arbitrary exponents, say x^d and y^d over z^t. We get similar presentations; let me skip those and just record the properties of the resulting algebras. What happens is this: if t is exactly half of d, then the fiber product and the connected sum fail to have the weak Lefschetz property, and consequently they also fail the strong Lefschetz property. Otherwise, the connected sum and the fiber product do satisfy the weak Lefschetz property, but not the strong one. Let me give a brief pictorial justification. In some sense the tricky part is the non-standard-graded aspect of these objects. In the presentation I showed you before, as a quotient of the polynomial ring in u and v, only one of the two variables had degree one, namely u. Similarly here, the space of degree-one forms in either the connected sum or the fiber product is one-dimensional; that is the key point: in degree one we only have multiples of u, and this imposes serious restrictions. Furthermore, a k-basis for the connected sum is given by the monomials u^i and u^i v, where i ranges from 0 to d-t-1. So all we need in order to understand these properties is the action of the degree-one form u on this basis, and pictorially the action looks like this: u sends 1 to u, u to u^2, and so on up to u^(d-t-1); then u^(d-t) is zero, because it is the image of the canonical module; and there is a similar strand that starts with v and goes uv, u^2 v, up to u^(d-t-1) v.
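Here is a small Python sketch, my own illustration rather than something from the slides, that builds exactly these two strands for the connected sum (using the k-basis u^i, u^i v just described; everything beyond that basis and the u-action is my reconstruction) and checks degree by degree whether multiplication by u has maximal rank. The printout shows the failure precisely when t = d/2.

```python
import numpy as np

def multiplication_by_u_has_maximal_rank(d, t):
    """Rank check for the connected sum of k[x]/(x^d) and k[y]/(y^d) over
    T = k[z]/(z^t), using the basis u^i and u^i*v (i = 0..d-t-1),
    with deg u = 1 and deg v = t, as described in the talk."""
    m = d - t                                   # each strand has m basis elements
    basis = {}                                  # degree -> list of basis labels
    for i in range(m):
        basis.setdefault(i, []).append(('u', i))        # u^i   sits in degree i
        basis.setdefault(i + t, []).append(('uv', i))   # u^i v sits in degree i + t
    ok = True
    for j in range(max(basis)):
        dom, cod = basis.get(j, []), basis.get(j + 1, [])
        M = np.zeros((len(cod), len(dom)))      # matrix of multiplication by u
        for col, (kind, i) in enumerate(dom):
            if i + 1 < m:                       # u * u^i (v) = u^{i+1} (v); zero past the strand end
                M[cod.index((kind, i + 1)), col] = 1.0
        if np.linalg.matrix_rank(M) < min(len(dom), len(cod)):
            ok = False
            print(f"  d={d}, t={t}: maximal rank fails from degree {j} to {j+1}")
    return ok

for d, t in [(5, 2), (6, 3), (8, 3), (8, 4), (9, 2)]:
    print(d, t, multiplication_by_u_has_maximal_rank(d, t))
```

Since the only degree-one forms are scalar multiples of u, checking u alone is enough for the weak Lefschetz property in this setting.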
And the problem is now the following: if t is exactly half of d, then u^(d-t-1) sits in degree d/2 - 1, while v sits in degree t = d/2, and there is no way to multiply u^(d-t-1) by a scalar multiple of u and land on v; the image is simply zero. That shows why the weak Lefschetz property breaks in this case. In all the other cases the strands are either far enough apart or close enough together that things still work out nicely. So I will stop here. Thank you very much.
The Lefschetz properties are desirable algebraic properties of graded artinian algebras inspired by the Hard Lefschetz Theorem for cohomology rings of complex projective varieties. A standard way to create new varieties from old is by forming connected sums. This corresponds at the level of their cohomology rings to an algebraic operation also termed a connected sum, which has recently started to be investigated in commutative algebra by Ananthnarayan-Avramov-Moore. It is natural to ask whether abstract algebraic connected sums of graded Gorenstein artinian algebras enjoy the Lefschetz properties in the absence of any underlying topological information. We investigate this question as well as the analogous question concerning a closely related construction, the fibered product.
10.5446/59214 (DOI)
Jason and Juryu really did a great job organizing a very nice conference, so thank you very much. OK. I would like to talk about some of my results on the h-vector of simplicial complexes and the number of generators of fundamental groups. This is joint work with Isabella Novik. Let me first explain the motivation of the study. I consider simplicial complexes; whenever I write Delta it is a simplicial complex, and I always assume that it is (d-1)-dimensional and connected. I also assume that it is pure, meaning that all the facets have the same dimension. I am considering the simplicial complex as a combinatorial object, but this is of course a commutative algebra meeting, so perhaps you prefer not to think combinatorially: considering a simplicial complex is equivalent to considering its Stanley-Reisner ideal, which is just a squarefree monomial ideal, so you may simply think that we are considering Stanley-Reisner rings. I will not define the Stanley-Reisner ring carefully; it is the polynomial ring modulo a squarefree monomial ideal. As you can see from my title, what I want to consider is the h-vector h_0, ..., h_d of the simplicial complex, or of the Stanley-Reisner ring. The h-vector appears in the Hilbert series of the Stanley-Reisner ring: the Hilbert series is a rational function, a polynomial divided by (1-t)^d, and the h-vector is the vector of coefficients of that polynomial. So questions about the h-vector are questions about the Hilbert series of the Stanley-Reisner ring.

The other thing I need to say something about is the fundamental group. I will not be careful here; let me just quickly recall it. If I write |Delta|, this means the geometric realization, and I write pi_1(Delta) for the fundamental group of the simplicial complex. I am sure everybody learned about the fundamental group at some point, but some people forget, so let me recall it. Basically, it is the group of loops considered up to homotopy: more precisely, the group of loops in the geometric realization based at a fixed base point, divided by homotopy. Let me draw a very simple example. Take a two-dimensional disc in the plane and fix a base point; a loop is a continuous map from an interval whose beginning and end are the base point. Two loops are homotopic if one can be continuously deformed into the other, and we identify homotopic loops. Of course, as you know, on a disc every loop is homotopic to the constant loop, so the fundamental group is trivial. But if the disc has a hole, a loop around the hole and a contractible loop are not homotopic, so we get a nontrivial group. Actually, I am not interested in the general structure of the fundamental group.
In this talk I only consider one number: the minimal number of generators of the fundamental group. Since it is a group, we can present it by generators and relations, and of course there are many ways to do that; I consider a presentation with the smallest possible number of generators, and I denote this number by m(Delta). Let me also explain the intuitive meaning of m(Delta). Consider again the two-dimensional disc, but now with three holes. In this case m(Delta) equals 3: this is not a precise explanation, but if you take the three loops around the three holes, none of them can be generated by the other two, and these three loops suffice to present the group. So, roughly speaking, m(Delta) is a count of the holes of the space; that is my intuitive understanding of this number.

The motivating question of this study is the following conjecture of Kalai: if the simplicial complex is a closed topological manifold of dimension at least 3, then h_2(Delta) - h_1(Delta) is at least (d+1 choose 2) times m(Delta). This is the motivating problem. Let me say something about the combinatorial meaning of the conjecture. Combinatorially, h_2 is essentially the number of edges of the simplicial complex and h_1 is essentially the number of vertices, and, as I said, m(Delta) is a count of the holes. So the inequality says that if the simplicial complex has many holes, then it must have many edges; a very intuitive statement. For precision, the conjecture and the combinatorial translation are displayed below.
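In symbols, and using the standard translation between h-numbers and face numbers (a well-known fact, not something taken from the slides), the conjecture reads as follows.

```latex
\[
h_2(\Delta) - h_1(\Delta) \;\ge\; \binom{d+1}{2}\, m(\Delta)
\qquad\text{for closed manifolds of dimension } d-1 \ge 3,
\]
\[
h_1 = f_0 - d,\qquad h_2 = f_1 - (d-1)\,f_0 + \binom{d}{2},\qquad
h_2 - h_1 = f_1 - d\,f_0 + \binom{d+1}{2},
\]
```

where f_0 and f_1 denote the numbers of vertices and edges of Delta.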
Before going on, let me make a few remarks. First, I should explain what "closed manifold" means here, since not everyone in commutative algebra is familiar with it. Being a closed manifold means that the link of every nonempty face is a sphere; recall that the link of a face F consists of all faces G of the complex such that G union F is a face and G is disjoint from F, so it records the local structure around F. For commutative algebraists, you may simply think that the Stanley-Reisner ring is locally Gorenstein: it need not be Gorenstein itself, but it is locally Gorenstein. Second, I said that m(Delta) is a count of holes, but for counting holes you would probably first think of another invariant, namely the first homology: let beta_i(Delta) be the dimension of the i-th homology group with coefficients in the field k; then beta_1 is also a count of holes.

But I want to remark on the relation between m(Delta) and beta_1: m(Delta) is always the stronger invariant, that is, m(Delta) is at least beta_1. The reason is that the abelianization of the fundamental group is the first homology group, so the number of generators of the fundamental group always bounds the first Betti number. I am not a topologist, so I do not know much about how different the two can be, but they can indeed differ: for example, a homology sphere which is not a sphere has vanishing first homology but a nontrivial fundamental group, so there beta_1 = 0 while m(Delta) is nonzero. Third remark: if you replace m(Delta) by beta_1 in the conjecture, then the answer is known. This version was also conjectured by Kalai and was proved by Novik and Swartz in 2009: under the same assumptions, h_2 - h_1 is at least (d+1 choose 2) times beta_1(Delta). So in some sense the question is whether we can replace the homology information by fundamental group information. As I said, I am not a topologist; for me both numbers count holes, so they do not look so different. But from the structural point of view they are very different. The h-vector, that is, the Stanley-Reisner ring, has very nice relations to homology: by Hochster's formulas, reduced homology of the complex appears in the local cohomology and in the computation of the graded Betti numbers of the Stanley-Reisner ring. For the fundamental group we have nothing like that: the fundamental group is a non-commutative object, while we are doing commutative algebra, and as far as I know there is no known relation between Stanley-Reisner rings and the number of generators of the fundamental group. That is the real difficulty. In particular, when I first heard about this conjecture, I really thought it would be impossible to prove it using commutative algebra. But at some point Isabella and I realized that we can prove the conjecture, and we do use commutative algebra. So let me now state our results. Our first result resolves the conjecture, in fact in a somewhat more general situation: if Delta is a normal pseudomanifold, then h_2 - h_1 is at least (d+1 choose 2) times m(Delta). Let me explain what a pseudomanifold is. It is a combinatorial abstraction of being a manifold: it means that Delta is pure, strongly connected (connected in codimension one), and every codimension-one face is contained in exactly two facets. I am sure many of you have seen complexes satisfying these three conditions, which are called pseudomanifolds. I also need to explain what "normal" means.
Normal means that every link is connected. Of course, if you take a face of codimension one, its link is just two points, so it is disconnected; but apart from that exceptional case, every link is required to be connected. That is the meaning of normal. I should also say that this condition, all those links being connected, is equivalent to saying that the Stanley-Reisner ring satisfies Serre's condition S_2. Maybe you prefer to understand the situation through this equivalence; then it may feel strange to call the complex "normal", since the ring is of course not normal, but this is just the standard topological terminology. Anyway, our first result is that normal pseudomanifolds, equivalently pseudomanifolds whose Stanley-Reisner ring satisfies S_2, satisfy the conjectured inequality; and every closed manifold is automatically a normal pseudomanifold, because there the links are spheres, hence Gorenstein, so of course S_2 holds. Now look at the two hypotheses, pseudomanifold and normal: as I said, normal is the S_2 condition and pseudomanifold is a combinatorial condition. I think it is very natural for combinatorialists to ask what happens if we drop the combinatorial condition: if we only assume S_2 and look at the h-vector, what can we say? That is our other result: if the Stanley-Reisner ring satisfies Serre's condition S_2, then we do not get the same bound, but we get a somewhat weaker one, namely h_2 is at least (d choose 2) times the number of generators of the fundamental group. So the pseudomanifold hypothesis gives a stronger conclusion, but with S_2 alone one can still say something. I should add that this is new even for Cohen-Macaulay complexes, because previously we had no relation to the fundamental group at all, so even that case should be interesting.

These are the results, and in the rest of the talk I want to explain how we prove them, or at least the idea of the proof. It would of course be very nice to find some direct relation between the Stanley-Reisner ring and the fundamental group; any such relation would be wonderful, but unfortunately that is not what we did, and I still do not know any direct relation between these two objects. Instead we do something like the following. Recently we have a new tool for studying h-vectors of simplicial complexes, called polyhedral Morse theory. This polyhedral Morse theory has a very nice relation to the Stanley-Reisner ring, and it also has a nice relation to the number of generators of the fundamental group, so we prove the statement by going through polyhedral Morse theory. I will explain what it is, but maybe I should first quickly say what one can do with it. As many of you heard in an earlier talk this week, in commutative algebra one sometimes studies upper bounds for graded Betti numbers.
Some people, like me, study such upper bounds for graded Betti numbers for a fixed Hilbert function. Basically, what polyhedral Morse theory lets you do is the following: if we have an upper bound for graded Betti numbers, then this implies a lower bound for the h-vector. This is a very rough statement, so let me be a little more precise. I said we consider upper bounds for graded Betti numbers, but what we really need is an upper bound for a certain alternating sum of graded Betti numbers, and in fact we need it not for Delta itself but for the links of the simplicial complex. If we have an upper bound for this alternating sum of graded Betti numbers of the links, then we get a lower bound for the h-vector of Delta. That is a very rough explanation of how polyhedral Morse theory is used; I will explain it in more detail later, but both results are proved along these lines. For the first result, about pseudomanifolds: the pseudomanifold case is a bit complicated, but if you just think of a closed manifold, its links are spheres, so their Stanley-Reisner rings are Gorenstein, and what we use is an upper bound for graded Betti numbers of Gorenstein simplicial complexes, which we then turn into the lower bound for the h-vector of the manifold. For the second result, about the S_2 condition, which is there as a technical assumption, the idea is similar: roughly, in the Cohen-Macaulay case the links of a Cohen-Macaulay complex are again Cohen-Macaulay, and we have an upper bound for graded Betti numbers of Cohen-Macaulay complexes, which we apply. So that is the rough idea, not the proof. Let me also say one more thing: I worked very hard on upper bounds for graded Betti numbers for a fixed Hilbert function, and while studying that I did not really expect it to be useful for proving anything else; so I am quite happy that one can prove nice results about the combinatorics of manifolds using upper bounds for graded Betti numbers.

OK, in the rest of my talk I want to explain polyhedral Morse theory in more detail. It is a purely combinatorial technique, introduced by Brehm and Kühnel in 1987, I believe. Actually, I have spoken about this topic many times over the last three or four years, so I know some people here have already heard it two, three, or four times; I apologize for that. Anyway, it is completely combinatorial. We have a simplicial complex Delta; let V be its vertex set; and for a subset W of V, let Delta_W denote the induced subcomplex, that is, the restriction of Delta to the vertices in W. For a technical reason, I also write beta-tilde_i for the dimension of the i-th reduced homology group over the field k.
As you know, the reduced homology group and the homology group are essentially the same thing, but for a technical reason I want to distinguish them. Now let me define a slightly technical notion. Let tau be an ordering of the vertices, say the vertex set listed as w_1, ..., w_n in some order. Define mu_i(Delta, tau) to be the following sum of reduced Betti numbers: for each vertex w_k we take the link of w_k, restrict it to the vertices w_1, ..., w_{k-1} that come earlier in the ordering, take the reduced Betti number beta-tilde_{i-1} of this restricted link, and sum over k. Before giving an example, let me say what one can do with these numbers. We have the following inequality, called the Morse inequality: beta_i(Delta) is always at most mu_i(Delta, tau), and the same holds for the corresponding alternating sums.

Let me explain these numbers with an example. Consider the one-dimensional simplicial complex which is a cycle of length four, with vertices w_1, w_2, w_3, w_4, where w_1 is adjacent to w_2 and w_3, and w_4 is adjacent to w_2 and w_3. Let us compute mu_1 for this ordering. According to the definition, it is a sum of four reduced Betti numbers beta-tilde_0: for each vertex we take its link, restricted to the vertices that come before it. For w_1 the restricted link is empty. For w_2 the link is {w_1, w_4}, but only w_1 comes earlier, so the restricted link is the single vertex w_1; similarly for w_3 the restricted link is just w_1. Finally, for w_4 the link is {w_2, w_3}, and both come earlier, so the restricted link consists of the two points w_2 and w_3. Now beta-tilde_0 of the empty complex is zero and beta-tilde_0 of a single point is zero, while beta-tilde_0 of two points is one, so mu_1(Delta, tau) = 1, which matches beta_1 = 1, because this is a cycle. I should mention that these numbers depend very much on the ordering of the vertices: if I relabel the same four-cycle so that the two last vertices each see two earlier, non-adjacent vertices, then the first two restricted links are empty while the links of the third and fourth vertices each consist of two points, so mu_1 is 2 for that ordering, which is strictly bigger than the homology. So the mu-numbers can be much larger than the Betti numbers. That was just an example of how these mu-numbers are computed.
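For one-dimensional complexes (graphs) this computation is easy to automate, since the restricted links are just sets of points and beta-tilde_0 of m points is m - 1 (and 0 for the empty set). Here is a minimal sketch of my own, reproducing the two computations for the four-cycle above; the simplification to graphs is mine.

```python
import itertools

def mu1(edges, order):
    """mu_1 of a graph for a given vertex ordering: for each vertex w_k, take
    its neighbours among the earlier vertices w_1..w_{k-1} (the restricted
    link, which is 0-dimensional) and add its reduced 0th Betti number."""
    nbrs = {v: set() for v in order}
    for a, b in edges:
        nbrs[a].add(b)
        nbrs[b].add(a)
    total = 0
    for k, w in enumerate(order):
        restricted_link = nbrs[w] & set(order[:k])   # just a set of points
        total += max(len(restricted_link) - 1, 0)    # reduced beta_0 of m points
    return total

# the four-cycle from the board: 1 adjacent to 2 and 3, 4 adjacent to 2 and 3
cycle = [(1, 2), (2, 4), (4, 3), (3, 1)]
print(mu1(cycle, [1, 2, 3, 4]))   # 1, matching beta_1 of the cycle
print(mu1(cycle, [1, 4, 2, 3]))   # 2, the second (worse) labelling
print(min(mu1(cycle, list(p)) for p in itertools.permutations([1, 2, 3, 4])))  # 1
```

Minimizing over all orderings gives the best bound this toy version can offer; the talk will instead average over the orderings, which is what comes next.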
Now, where does this inequality come from? It has a geometric origin. Start with the same simplicial complex and consider the following sweep: pick a hyperplane and move it from the bottom to the top, and at each stage consider the part of the space below the hyperplane. You get a sequence of spaces: first the part below the first vertex, then the part below the second, and so on, until eventually you recover the whole simplicial complex. The inequality estimates how the Betti numbers can change at each step. The topology only changes when the hyperplane passes a vertex, so it is enough to consider what happens at those moments, and the reduced homology groups of the restricted links give a worst-case estimate for the change of the Betti numbers in each step. If you know Morse theory, this is exactly what Morse theory does, and these mu-numbers play the role of counts of critical points; it is a very simplified, combinatorial version of Morse theory for simplicial complexes. If you do not know Morse theory, that is fine too.

One more nice point: the same estimate controls not only the Betti numbers but also the number of generators of the fundamental group, so m(Delta) is bounded by these mu-numbers for every ordering of the vertices. This comes from the same Morse-theoretic observation; it is really just an easy exercise with the Seifert-van Kampen theorem, the kind of exercise you could give to your students, but it took us some time to realize that it is true.

So far I have said nothing about commutative algebra; now I want to bring it in. This kind of Morse theory is combinatorially nice, but one bad point is that it depends heavily on the ordering: if you change the ordering of the vertices, the estimate can be very good or very bad, which is not a satisfactory situation. The point is that if we consider all orderings and take the average, something nice happens. So define mu-tilde_i(Delta) to be the average of the mu_i(Delta, tau) over all n! orderings tau of the vertices. To state the nice thing, let me introduce one more, slightly technical, piece of notation: for a complex on n vertices, let sigma_i be the number obtained by summing, over all subsets W of the vertex set, the reduced Betti number beta-tilde_i of the induced subcomplex Delta_W, each term weighted by one over the binomial coefficient (n choose |W|).
The nice point about taking this average is the following result of Bagchi and Datta from 2014. It is really just a computation, but they showed that the averaged mu-numbers have a closed form: mu-tilde_i can be written by summing, over all vertices, these sigma-numbers of their links. At first sight this is a crazy formula, because it involves all induced subcomplexes and all their Betti numbers, but it is actually very convenient for commutative algebra, and the reason is Hochster's formula. Let beta_{ij} denote the graded Betti numbers of the Stanley-Reisner ring. Using Hochster's formula, and it needs a small computation but is quite easy, one sees that this quantity can be written as a sum of graded Betti numbers, coming from a linear strand of the resolution of the Stanley-Reisner ring. So although I started from an average of Morse-theoretic mu-numbers, by taking the average we land on graded Betti numbers: studying mu-tilde_i is the same as studying graded Betti numbers of Stanley-Reisner rings.

Using this machinery, the first thing we prove is the following: if Delta is a normal pseudomanifold, then h_2 - h_1 is at least (d+1 choose 2) times (mu-tilde_1 - mu-tilde_0 + 1). I hope everybody remembers, but let me recall that this last quantity is at least the number of generators of the fundamental group: it is an average of the corresponding single-ordering quantities, and already for a single ordering the Morse-theoretic bound I mentioned bounds m(Delta) by this number. So the theorem indeed implies the conjecture for normal pseudomanifolds. Why can such a statement be proved? Look again at the Morse inequality: it bounds Betti numbers from above by mu-numbers; what we want is a lower bound for the h-numbers in terms of the averaged mu-numbers, in other words in terms of graded Betti numbers. Proving a lower bound for the h-vector in terms of graded Betti numbers is essentially the same as proving an upper bound for graded Betti numbers in terms of the h-vector. One has to be careful, because the Betti numbers in question are those of the links while the h-vector is that of the original simplicial complex; sorting this out is really just a technical complication, and I will not give the details. That is the idea of the proof of the first result, and essentially the same argument, the same idea, works for the second one.
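Since Hochster's formula was invoked a moment ago, let me record its standard statement here for reference (this is textbook material, not something read off the slides): for the Stanley-Reisner ring k[Delta] of a complex on vertex set V,

```latex
\[
\beta_{i,j}\bigl(k[\Delta]\bigr) \;=\; \sum_{\substack{W \subseteq V \\ |W| = j}} \dim_k \widetilde{H}_{\,j-i-1}\bigl(\Delta_W;\, k\bigr),
\]
```

which is exactly what turns sums of reduced Betti numbers of induced subcomplexes, like the sigma-numbers above, into sums of graded Betti numbers.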
Now the second result, about the Serre condition. We prove that if the simplicial complex is pure and locally S_r, meaning that every vertex link satisfies Serre's condition S_r, then we get a nontrivial lower bound for h_r: roughly, h_r(Delta) is bounded below by (d choose r) times a certain alternating sum of the averaged mu-numbers, with the signs alternating; I will not write the general form carefully here. If you specialize this theorem to the case r = 2, you get that h_2 is at least (d choose 2) times (mu-tilde_1 - mu-tilde_0 + 1), and then, as before, you get the bound in terms of the number of generators of the fundamental group. This is what we proved. I will finish soon. Everything we did is very combinatorial; we use Hochster's formula and graded Betti numbers, but the theorem is really specific to Stanley-Reisner rings. So I want to ask the commutative algebra people whether there is any generalization of this kind of statement. I have been asked this at least twice myself, and I can only say that I have no idea. Our theorem says that if the ring satisfies the S_2 condition, then h_2 is bounded below by (d choose 2) times the number of generators of the fundamental group; I do not even know what the analogue of the fundamental group should be for a general homogeneous ideal, but maybe there is some generalization and maybe you can do something. Anyway, I stop here. Thank you very much.

Questions for the speaker? Question: what about manifolds with boundary; is there a relative version? Answer: yes, you can replace everything by relative simplicial complexes, replacing Delta by the pair of Delta and its boundary, and discuss the same things; that is in the paper with Isabella, I just specialized everything in this talk. Question (to a topologist in the audience, partly inaudible, about a possible connection to another theory): it sounds like an interesting connection, but I would have to think about it more seriously; I do not know. Question: do you have situations where there is equality in the formula? Answer: yes; equality holds when the links involved are complexes whose Stanley-Reisner rings have 2-linear resolutions, and for h_2 I think it is just equivalent to saying that every link has a 2-linear resolution; for the higher values one has to be more careful. Question: in the inequality with the number of generators, when is it an equality, and is it then equal to beta_1? Answer: if the dimension is at least 4 and the two sides are equal, then this number equals beta_1, and there is a characterization; equality here and equality there amount to the same thing in that case. That is for dimension at least 4; in dimension 3 we do not know the answer, dimension 3 is just too difficult. Question: a possible application of this Kalai-type inequality: some people are interested in finding triangulations of manifolds with a small number of vertices; if a manifold needs many generators for its fundamental group, could one use this to rule out triangulations with few vertices? Answer: that is exactly the right idea. This is a lower bound on the number of edges, and there is also the trivial upper bound given by the number of vertices choose two; comparing the lower and the upper bound, the number of vertices choose two must be at least the bound, and so you get a lower bound on the number of vertices needed to triangulate the manifold.
Question (follow-up): I am asking whether, if you want to triangulate some manifold with n vertices and the minimal number of generators of pi_1 is very big, then for some n it is simply not possible to triangulate it with n vertices; you can use the inequality for that. Answer: yes, exactly. Thank you.
Hochster's results tell us that homology groups of a simplicial complex have a nice relation to algebraic properties of its Stanley-Reisner ring. On the other hand, it is unknown how fundamental groups affect Stanley-Reisner rings. In this talk, we present lower bounds for the second h-number of simplicial complexes in terms of the number of generators of their fundamental groups. Our proof is based on recent results about the PL Morse inequality and graded Betti numbers.
10.5446/59169 (DOI)
It is a great pleasure to be at this interesting conference. The theme of my talk will be kinetic equations that appear, for example, as models in gas dynamics, and I should mention from the beginning that this is joint work with Franz Achleitner and Eric Carlen. To start on a very simple level, I will talk about the linear ODE x' = -Cx with a constant matrix C, and to make it interesting the matrix will be non-symmetric. The goal of what I want to discuss and construct is Lyapunov functionals from which one can derive the exponential decay behaviour of such an equation, possibly with the sharp rate. I call a matrix C coercive if its symmetric part is positive definite. As an example, look at a simple matrix with a pair of complex conjugate eigenvalues whose real part is one half; this means the exponential decay rate of the equation above is one half. However, since this matrix is not coercive, you cannot see the decay behaviour by the trivial energy method: just multiplying the equation by x gives no information. To see the problem, plot the Euclidean norm of the solution as a function of time: it decays, but in an oscillating way, and the bad parts are the horizontal plateaus, which are exactly the points where you have trouble establishing the decay estimate. A simple way out is to change the norm and introduce a problem-adapted norm, realized by a positive definite matrix P. For the example at hand there is a best such matrix, and in the plot the corresponding curve shows the nice exponential behaviour; so if you use this P as a Lyapunov functional, you recover the exponential decay.

Let me fix some terminology for matrices; the terminology is a little exaggerated. I call such a matrix C hypocoercive if there is a positive constant mu such that all real parts of its eigenvalues are at least mu; in standard terminology, the matrix is positive stable. In that case, and if I further assume that all eigenvalues are non-defective, meaning there are no nontrivial Jordan blocks, then the solution decays exponentially with exactly this rate mu, which is the spectral gap. So the question, for the example I showed you, is: how do you find this matrix P? This is provided by the following simple lemma, which will be the key starting point for the rest of the discussion, so let me state it carefully. Picture the spectrum of C in the complex plane and draw the vertical line through the spectral gap mu; there may be several complex conjugate eigenvalues on or to the right of this line. The lemma says: if all eigenvalues whose real part is exactly mu, that is, the eigenvalues on this vertical line, are non-defective, meaning their algebraic and geometric multiplicities coincide, then there exists a positive definite matrix P such that the matrix inequality C^T P + P C >= 2 mu P holds. Let me repeat the essence: the spectral gap mu is the minimum of the real parts of the eigenvalues of C, which should be positive, and then there exists a positive definite matrix P such that this matrix inequality holds with that mu.
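The example matrix from the slides is not in the transcript, so here is a hypothetical stand-in of my own with the same spectral data (eigenvalues with real part 1/2, indefinite symmetric part), together with a numerical check of the lemma and of the eigenvector recipe mentioned below; this is an editorial sketch, not the speaker's example.

```python
import numpy as np

# Hypothetical non-coercive matrix: eigenvalues (1 +/- i*sqrt(3))/2, spectral gap mu = 1/2
C = np.array([[0.0, 1.0],
              [-1.0, 1.0]])
mu = min(np.linalg.eigvals(C).real)                      # spectral gap, here 0.5
print("coercive?", np.all(np.linalg.eigvalsh(0.5 * (C + C.T)) > 0))   # False

# Recipe from the lemma: P as a sum of outer products of the eigenvectors of C^T
_, W = np.linalg.eig(C.T)
P = sum(np.outer(W[:, i], W[:, i].conj()) for i in range(W.shape[1])).real
print("P (proportional to [[2, -1], [-1, 2]]):\n", P)
print("P positive definite?", np.all(np.linalg.eigvalsh(P) > 0))

# Check the matrix inequality  C^T P + P C >= 2 mu P
gap_matrix = C.T @ P + P @ C - 2 * mu * P
print("inequality holds?", np.all(np.linalg.eigvalsh(gap_matrix) > -1e-12))

# Along x' = -C x one then has d/dt (x^T P x) = -x^T (C^T P + P C) x <= -2 mu x^T P x,
# so the adapted norm decays monotonically at the spectral-gap rate.
```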
The idea of how to use this simple lemma is that the constant mu appearing in the inequality becomes your decay rate. In the second case, when there are defective eigenvalues on the critical line with real part mu, you lose an epsilon in the matrix inequality, and translated to the dynamics this means you lose an epsilon in the decay rate. A simple illustration of how to get P: if C is diagonalizable, you take the sum of the tensor products of the eigenvectors of the transpose of C. Let me point out that P is not unique; nevertheless the decay rate mu, or mu minus epsilon, is independent of the choice. Everything can also be lifted to complex matrices with exactly the same idea. Let me illustrate how to use the lemma on this ODE: the Lyapunov functional to look at is the adapted norm x^T P x; differentiating it along a trajectory produces a matrix combination which is exactly the one bounded by the lemma, and you immediately get the exponential decay with rate mu.

What I want to discuss in the rest of the talk is: what are nontrivial applications of this lemma to PDEs? But first let me illustrate once more, for the example I showed you, the reason for the wavy decay. In the phase plane, with x_1 horizontal and x_2 vertical, the blue spiral is a trajectory of the solution, and the red circle is a level curve of the Euclidean norm. The problem is that whenever the trajectory crosses the vertical axis, the spiral is tangent to the Euclidean level circle, so there you do not see a strict decay. The idea of introducing the problem-adapted norm with the matrix P is to change the level curves: with the matrix P the level curves become ellipses, and you can check that the spirals always intersect these ellipses at a nontrivial angle, so along the flow you always have strict decay rather than the wavy behaviour.

Now I make a bold jump to PDEs and look at the kinetic Fokker-Planck equation, with the Hamiltonian part in green on the left-hand side and the dissipative part on the right-hand side. The analogy with the previous ODE is this: take a weighted L2 norm of the phase-space density, with position x and velocity p, and plot it as a function of time; you see exactly the same wavy behaviour again, and the reason is pretty much the same as for the ODE. The goal is then again to find the problem-adapted norm, or Lyapunov functional, that captures the exponential decay. In the kinetic Fokker-Planck equation the dissipative right-hand side drives a local equilibration, local in position, towards a Maxwellian in the velocity variable, and the Hamiltonian part introduces a mixing in phase space which mixes the different positions, so that in the end a global equilibration towards a global Maxwellian, the same for every point x, becomes possible.

The real equations I want to discuss in this talk are BGK equations, which are again kinetic equations. Since I work on the torus, I do not use a confining potential; there is just the transport term on the left-hand side, and on the right-hand side there is a relaxation term.
The relaxation is towards a Maxwellian M_f that depends on the probability density at time t: a local Maxwellian, local in x, chosen so that it has the same hydrodynamic moments of order 0, 1, and 2 as the probability density, namely the local position density, the mean velocity as first moment, and the temperature, which is more or less the second moment. This Maxwellian is therefore highly nonlinear in f. Let me just recall that questions like existence and uniqueness for such BGK equations have been known for 25 or 30 years; what I want to discuss here is exponential convergence to the global equilibrium, in some simple cases with sharp rates, but for most of the examples without sharp rates. So here, finally, is the outline of my talk. First I will define what hypocoercivity means; then I will look at various BGK equations of increasing complexity: in the beginning linear BGK models, first with discrete and then with continuous velocities, whose main feature is that there is only one conserved quantity, namely the mass; then I switch to more complicated models, linearized nonlinear BGK equations, which have two or even three conserved quantities.

Let me start with the following very simple linear BGK equation in one dimension, posed on the torus in x, with velocity on the real line. The relaxation term on the right-hand side relaxes the solution towards the Maxwellian M_T, where T refers to a temperature that is fixed in this model, multiplied by the local density of the solution; so this is a linear equation. The whole generator L appearing in the equation is composed of two terms: the transport term, and Q, which stands for the relaxation term. The relaxation term is self-adjoint on the weighted L2 space whose weight is the inverse of the Maxwellian, and, as I discussed before for the Fokker-Planck equation, it drives the solution to the local-in-x Maxwellian, that is, the Maxwellian in velocity multiplied by the local density. The transport operator then produces uniformity in the position variable, because it mixes the different points of the torus, and the interplay of these two phenomena, local equilibration plus mixing, is what gives rise to hypocoercivity. One thing we can check easily is that the operator L composed of these two parts is not coercive: you cannot bound the corresponding quadratic form from below by a positive constant lambda times the norm squared, and therefore simple energy methods cannot give exponential decay.

So let me recall the definition of hypocoercivity that Cédric Villani introduced. Hypocoercivity is a statement about the exponential decay behaviour of a semigroup: applying the semigroup e^{tL} to the initial condition, you have exponential decay with a rate lambda and a multiplicative constant C that is typically larger than one. Two Hilbert spaces appear: first you consider the generator on a larger Hilbert space H, with K the kernel of this operator, and then a smaller Hilbert space which is embedded into the orthogonal complement of the kernel. To give an idea of the interplay of the two spaces: in many applications the larger one is a weighted L2 space and the smaller one a weighted H1 space.
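Written out, a paraphrase of Villani's definition in the notation just described, with the smaller space embedded into the orthogonal complement of the kernel of L:

```latex
\[
\exists\, \lambda > 0,\; C \ge 1:\qquad
\bigl\| e^{tL} h \bigr\|_{\widetilde{\mathcal H}} \;\le\; C\, e^{-\lambda t}\, \| h \|_{\widetilde{\mathcal H}}
\qquad \text{for all } h \in \widetilde{\mathcal H},\ t \ge 0 .
\]
```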
To start off, I look at a very simple model with just two velocities, +1 and -1. So f is the vector with components f+ and f-, the evolution equation has a very simple relaxation term on the right-hand side, and everything is periodic in x. The steady state of this equation is the constant-in-x vector with both components equal to one half. Because of the periodicity in x, a simple strategy is to look at the Fourier modes in x; introducing, for example, the velocity basis with the two components f+ + f- and f+ - f-, each Fourier mode with index k becomes a complex two-vector. Let me mention two references: some years ago, in a paper of Jean Dolbeault, Clément Mouhot, and Christian Schmeiser, exactly this model was analyzed and exponential decay was shown; this model has just one conserved quantity. Slightly afterwards, with Eric and Franz, we improved the decay rate and obtained the sharp one.

Each Fourier mode of the model evolves according to a simple ODE with a constant-in-time matrix C_k, so for each mode we can use the concepts I explained at the beginning of the talk. For the mode k = 0, the matrix has eigenvalues 0 and 1; for all other Fourier modes k you get a complex conjugate pair of eigenvalues with real part one half. The corresponding steady states are the vector (1, 0) for the zero mode and the origin for the other modes. Therefore, mode by mode, you get the following decay estimates: the zero mode converges to its steady state with exponential rate 1, coming from the eigenvalue 1, and the higher modes with k nonzero converge to zero with rate one half, coming from that real part. The essential question is: what are the adapted norms for these ODEs? You have to find the matrices P_k, and they are obtained exactly from the matrix inequality of the lemma. The key point is that each spatial Fourier mode needs a different norm, with a different matrix P_k, so you now have an infinite sequence of adapted norms; in the limit of high Fourier modes, P_k converges to the identity matrix. The natural Lyapunov functional for this problem sums up all the Fourier modes, each weighted with its matrix P_k. By Plancherel, and because these matrices converge to the identity, this norm is equivalent to the standard L2 norm on the phase space, with x in (0, 2 pi) and v on the real line. With this construction we have the following theorem: the solution converges to the steady state with the sharp exponential rate one half and the sharp multiplicative constant square root of three.
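The exact normalization of the two-velocity model is on the slides rather than in the transcript, so the following editorial sketch uses one common normalization of a Goldstein-Taylor-type model, chosen as an assumption because it reproduces the spectral facts just quoted (mode 0 has eigenvalues 0 and 1, every other mode a conjugate pair with real part 1/2), and checks the adapted matrices P_k numerically.

```python
import numpy as np

def C_k(k):
    # Assumed Fourier-mode matrix of a two-velocity (Goldstein-Taylor type) model
    # in the macroscopic variables (f+ + f-, f+ - f-); the normalization is mine.
    return np.array([[0.0, 1j * k],
                     [1j * k, 1.0]])

def P_k(k):
    """Adapted norm for mode k from the eigenvector recipe of the lemma."""
    _, W = np.linalg.eig(C_k(k).conj().T)
    P = sum(np.outer(W[:, i], W[:, i].conj()) for i in range(2))
    return 2 * P / np.trace(P).real              # normalized so that P_k -> identity

for k in [1, 2, 5, 20]:
    C, P = C_k(k), P_k(k)
    gap = C.conj().T @ P + P @ C - 2 * 0.5 * P   # check C^H P + P C >= 2*(1/2)*P
    print(k,
          "eigenvalues:", np.round(np.linalg.eigvals(C), 3),
          "inequality:", np.all(np.linalg.eigvalsh(gap) > -1e-9),
          "||P_k - I|| =", round(float(np.linalg.norm(P - np.eye(2))), 3))
```

The printout shows the real part 1/2 for every mode, the matrix inequality with mu = 1/2 holding, and P_k approaching the identity as k grows, which is the structure used in the Lyapunov functional above.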
Now let me switch to the next, more interesting model: still one-dimensional and periodic in x, but with continuous velocities. Again I take Fourier modes in x, and in the velocity direction I use a Hermite basis; since there are now infinitely many velocity modes, for each spatial mode k one has to follow the time evolution of an infinite vector of Hermite coefficients, and this vector evolves according to an infinite ODE system. In this infinite ODE the transport part appears as an infinite tridiagonal matrix of a fixed structure, and the dissipative part is a matrix with -1 in all diagonal entries except the first one, which corresponds to the conserved mass. So the zero-order mode does not decay by itself, but because of a 2 by 2 block the zero mode and the first mode are coupled, and this coupling makes the zero-order mode decay exponentially as well. For this matrix what you would like to have is the spectral gap, but for such an infinite matrix we do not yet know how to compute it. So we make a compromise: we give up the quest for the exact spectral gap and instead construct a simpler matrix P for which we are able to actually prove exponential decay. The key part is the 2 by 2 block responsible for the coupling of the zero and first modes, so we take exactly this 2 by 2 block for the nontrivial part of our ansatz for P, and the identity matrix for the rest, for simplicity. Plugging this ansatz into the matrix inequality, optimizing over the ansatz constant and simplifying to get the largest possible mu, and then proceeding exactly as before, we establish exponential decay for this BGK model; the decay rate we obtain is off by roughly a factor 2.5 compared to the true value.

So far I have discussed linear BGK models; now I go to linearized nonlinear BGK models, whose feature is that they have more than one conserved quantity. Let me start with the following nonlinear BGK equation in one dimension: I change the relaxation term so that the Maxwellian has the same mass and, more or less, the same temperature or second moment as the probability density, but I intentionally fix the mean velocity to be zero; so the local Maxwellian we relax towards has mean velocity zero. The procedure is the same as before: Fourier transform in x, and, since the velocity is continuous, a Hermite basis in velocity, so for each Fourier mode one follows the evolution of the infinite vector of Hermite coefficients, with the Fourier-transformed transport operator and the dissipative part; the modes decouple in k. The dissipative operator now has two zeros, because we have two conserved quantities: the zeroth moment, the mass, and, more or less, the temperature. How do these two modes acquire exponential decay? The zero mode is coupled by a blue 2 by 2 sub-matrix to the first mode, and the second zero is coupled by a green 2 by 2 sub-matrix to the next non-vanishing, decaying mode. Corresponding to these 2 by 2 sub-matrices we construct our matrix P, so that we can again apply the lemma from before: the ansatz uses the coupling between the zero and the first mode, and the coupling between the second and the third mode.
I divide by 1 over k in order to compensate for this factor k that appears from the transport. Then one way would be to optimize here for alpha and beta. We did not do that. We just set them equal. And we chose a factor 1 third to get our exponential decay result, which has the following structure. So again, the Lyapunov functional that we have here sums up all the spatial Fourier modes. For the time being, just assume that this exponent gamma is equal to zero. So forget about this first factor. Since all these norms Pk converge for k to infinity to the identity matrix, this Lyapunov functional is again equivalent to the L2 norm. There are two difference between your solution and the max Fourier. So the result is this Lyapunov functional here converges exponentially to zero with a rate which is 1 over 25. Since we did not optimize here, the rate is not as good as one could make it, but this was just for simplifications. So now if this gamma appears here, then you just have here the Fourier representation of a gamma-sobolaive norm. And in all sobolaive spaces with respect to x, you have exactly the same exponential decay. And I will need that for the following extension of the result. So what I've discussed on the previous slide was a linearized Pgk equation. Now I look at the nonlinear equation. And here we have exactly the same decay result provided that you choose the sobolaive norm larger than 1 half in order to have a sobolaive embedding into continuous fractions. I will just make one remark about the proof. So for the nonlinear Pgk equation, you want to rewrite it like the linearized Pgk equation and write the remaining terms or realize that the remaining terms of higher order, such that they can be absorbed by giving up a little bit of your decay rate. So last model that I want to show you here is somehow the true Pgk model, at least in the linearized sense. So the true Pgk model has three conserved quantities, mass, momentum, and energy. So going through the same procedure that I've explained to you for the last 25 minutes, here just in one dimension, you Fourier transform in position. You introduce a Hamid basis in velocity. You have this evolution equation for the infinite vector both H. L1 is the Fourier transform transport operator and L4 is the Fourier transform dissipative operator. You have three conserved quantities, mass, momentum, and temperature. So therefore, three zeros in your dissipation matrix, in the infinite matrix. And now it's this transport matrix that couples them to the first decaying mold. So in this model, you have an iterated hyper-coercive structure. The mass mode is coupled to the momentum mode or velocity mode, which still doesn't decay. The momentum mode is coupled to the energy mode, which still doesn't decay. The energy mode is coupled to the third order mode, which finally decays. And therefore, all the lower modes inherit an exponential decay. So by this coupling that I've explained to you over and over again, your useful Hansel's matrix P has here a 2 by 2 block for the first coupling, here a 2 by 2 block for the second coupling, here a 2 by 2 block for the third coupling. Then you could again optimize on alpha-deta gamma and use this simple lemma to squeeze out an exponential decay. So it's not optimal because you have just used a simple decay. Let me just mention, without going through details, what I've shown you here in one dimension also goes through in dimensions 2 and 3. 
The matrices that you have to consider there are up to the order 21 by 21, which is then tedious analysis. But it allows you to prove exponential decay. So let me just conclude. What I've started with is a simple algebraic lemma for ODEs, for non-symmetric ODEs. And I've tried to show you how to apply that to PDEs that allow for a model decomposition in space. So I've shown you then for various BGK models, kinetic equations, how to get exponential decay both in discrete and continuous velocity decay situations, and both for linearized and even the nonlinear setting. There, I forgot to mention, it is just a local decay result. And to finish off, what I've reported here is from these two papers. The first paper is on situations where you just have one conserved quantity. And here, this is the extension to more than one conserved quantities. Thank you. Thank you. Questions, please. I would say that you're basically in fact for of 18 of the optimal result. So when you start with this unsuspected, you make from the model that there's three conserved quantities. You use the same conserved model that there's two conserved quantities. Can you prove this straight? So here you surprise that we are so much off. The reason we are so much off is that we just, that we were lazy here. So this is the answer. And we just, so what you have to do then really on the algebraic level, you take these unsults, you put it into this matrix inequality, which now is in fact an infinite matrix. But after some point, you can cut it off. And you only have to look at the upper left block, which is probably a 6 by 6 matrix. And you have to show that this matrix when bringing this term to the left-hand side is positive. Same indefinite, which is tedious. So here, we just realize when we choose iPhone beta equal to 1.3, it works. And we did not even attempt to optimize. So if you would optimize here, I'm sure the factor 18 can be improved a lot. But here we didn't do it. When you go to, this is only the one-dimensional case. When you go to dimensions two and three, I mentioned we had the matrices of dimensions 17 by 17 and 21 by 21. So to really prove their positive semi-definiteness is tedious. And you cannot do it automatically because in all of these matrices, you have this parameter k in there. And you have to prove that for all integers k. So you have to do it by hand for each line of this matrix. And so this is tedious, but apparently doable.
BGK equations are kinetic transport equations with a relaxation operator that drives the phase space distribution towards the spatially local equilibrium, a Gaussian with the same macroscopic parameters. Due to the absence of dissipation w.r.t. the spatial direction, convergence to the global equilibrium is only possible thanks to the transport term that mixes various positions. Hence, such models are hypocoercive. We shall prove exponential convergence towards the equilibrium with explicit rates for several linear, space periodic BGK-models in dimension 1 and 2. Their BGK-operators differ by the number of conserved macroscopic quantities (like mass, momentum, energy), and hence their hypocoercivity index. Our discussion includes also discrete velocity models, and the local exponential stability of a nonlinear BGK-model. The proof is based, first, on a Fourier decomposition in space and Hermite function decomposition in velocity. Then, the crucial step is to construct a problem adapted Lyapunov functional, by introducing equivalent norms for each mode.
10.5446/59164 (DOI)
Thank you very much for that introduction and thank you also to the organizers for organizing this wonderful workshop and beautiful band. So like I said today I'll be talking about minimizers and gradient flows and the slow diffusion limit and this is based on some joint work with East and Topologyloo at Vinn Virginia Commonwealth University. So as we've seen already several times in talks today, non-local pairwise interactions arise in systems throughout the natural world. We heard about chemotaxis twice today, we also heard about non-local pairwise interactions in models of neuron simulation. So here I have some, a little movie of chemotaxis in which case the cells experience non-local pairwise attraction mediated by the Newtonian potential like Yau was talking about in the previous talk. You might also see in models of vortex motion and superconductors Newtonian repulsion between the vortices or in models of biological swarming you could see maybe attraction and repulsion at different length scales. So for example in this flock of birds you might think of the fact that the birds don't collide as a type of short range repulsion and then the fact that the birds stay in sort of a cohesive flock with short boundaries on the edge is a type of long range attraction. So these types of non-local interactions show up in a variety of systems and maybe the simplest mathematical model to code these types of interactions is just the aggregation equation which is exactly the same as the Keller-Siegel that Yau was talking about but for now I don't have any diffusion here. Just to establish a bit of notation throughout today's talk I'm going to suppose that rho is a non-negative density and all the problems that I'm going to consider can serve mass so without loss of generality we'll just say that the mass of rho is something positive in. Okay so on the right hand side we have the aggregation equation. This is a continuity equation for rho that evolves according to some velocity field grad k convolved with rho and it turns out that this equation has a gradient flow structure with respect to the Wasserstein metric. So this is the gradient flow of the interaction energy and I'll say a little bit more about that gradient flow structure in a second. So as I illustrated with some of the videos at the beginning and as we've heard in several of the talks today these types of non-local interactions show up in a lot of different contexts for different choices of interaction potential k. So for example if you were thinking about these models of vortex motion and superconductors or bacterial chemotaxis then you might consider a plus or minus neutronium potential. In models of granular media you might see a cubic potential and then in models of biological swarming you might see one of these repulsive attractive power law potentials. So here I'll think of A this exponent tells us how big the long range attraction is as compared to B which is telling us about the size of the short range repulsion. And I'll always use this just rotational convention that you know in two dimensions this is just the logarithm which is you know the case that Yau was speaking about. So one thing that I want to highlight just with these examples of interaction potentials is that a lot of the potentials that we see showing up in these applied models have singularity or more generally are just not good connectivity properties. 
So on one hand that's kind of a pain when you want to start setting things using a gradient cell structure you know conducts to such an important tool but we somehow kind of have to do with that and sometimes for some of these applications. So over the past 10 years there's been tons of interest in characterizing minimizers of these interaction energies. We were just hearing an Angelus talk about you know wouldn't it be fascinating if you knew what sort of properties of an interaction kernel K would lead to different patterns of the minimizers of an interaction energy. There's been an incredible amount of work done on this so I'll just briefly highlight a few examples. There's been a lot of numerical work so you know studying the competing effects of attraction and repulsion just coming from this interaction potential K and they can lead to rich structures in the states. Here's a numerical simulation done by Klikolnikov-Sanjumanski and Bertosi where they just took this type of interaction potential and perturbed these parameters A and B and ended up with this whole zoo of minimizers of the interaction energy. More on the kind of analytical side there's this organized result by Ballaget-Curriol-Loronton-Rohl showing that if you have an interaction potential that is repulsive at short distances the more singular the interaction that type of repulsion is the more singular the interaction potential is of the origin the higher dimensional support minimizers of the interaction energy will have. So for example they fixed one of these interaction potentials that was repulsive at short distances and attracted at longer distances. At first they started off with just a mild repulsion in which case minimizers of the energy were concentrated on sets of zero dimensional support and as they increased the singularity of the repulsion it went to one dimensional and two dimensional support. And then more recently there's been work kind of characterizing what types of interaction potentials K guaranteed existence of minimizers of the interaction energy. So this is just a very small snapshot of what has been a very active area. And recently there's been some interest in a specific type of minimizers of the interaction energy which is set value minimizers. So in other words what we want to do is we want to find a row that minimizes the interaction energy up to the constraint that row is the characteristic function on some set omega and then where omega has volume in. So there's our mass constraint. Part of the motivation for studying these specific type of minimizers of the interaction energy comes from shape optimization problems. And I think the connection is really clear if I take row equals to characteristic function on omega and substitute that in to the interaction energy. So in the particular case where this interaction kernel was just a Newtonian potential this becomes exactly a Poincare's problem so which is trying to find the minimum potential energy of a fluid body under the exception of vanishing total angular momentum. And it was shown by Leeve that the unique minimizer is a ball using rearrangement inequalities. Of course we also have a similar shape optimization problem, isoparametric problem, even more classical which is just trying to minimize the perimeter over all sets of omega of a given volume. And again we know classically that the unique minimizer of this problem is the ball. 
And then more recently these two problems have been combined to give what's known as the non-local isoparametric problem. So now we're trying to minimize the difference of these two energies. So we're trying to minimize the perimeter plus this kind of Newtonian potential term. And so on one hand balls this first term, minimizing this first term, this first term is smallest when omega is the ball, but that's exactly when the second term is largest. So we see these sort of competing effects just like I mentioned the competing effects of attraction and repulsion with the interaction energy. There's been a lot of recent work on the non-local isoparametric problem and in particular it's known that there's several values of critical, several critical values in the mass that determine the minimizers of this problem. So for example it's known that if the mass is sufficiently small then the minimizer is definitely a ball. If the mass is a bit larger it's at least known that minimizers exist. And then if the mass is sufficiently large it's known that minimizers don't exist. But we don't for example know whether or not M1 is equal to M2. So the situation turns out to be somewhat similar for these set-valued minimizers of the interaction energy. So when most of the recent work on set-valued minimizers is considered the specific types of attractive repulsive power law potentials. Because this gives us the same dichotomy that the first term of the interaction energy makes the minimizer want to be a ball and then the second term makes it not want to be a ball. So for this specific choice of interaction kernel this problem is studied by Gouchard, Toxy, and Topolyglue and also Frank and Leib. And they show that in a similar way as with the non-local isoparametric problem there exist two critical masses. So that if the mass is small enough minimizers don't exist and if it's large enough minimizers do exist but it remains in general we don't know whether or not M1 equals M2. So what would it mean for them not to be equal? Yeah I think they're certainly equal it just hasn't been proved. So for them not to be equal it means that there's some range where minimizers don't exist but then there's a larger range where they do exist together then don't exist together or what? Say it one more time. So I mean let me ask it a different way. You're trying to show the set where minimizers exist as a connected subinterval of the positive wheels and similarly that the set where they don't exist is a connected subinterval is that right? We're trying to show that the set of masses for which minimizers exist is connected. Is connected. Yeah that's what you mean. Yes yes yes thank you. And so in some specific cases this is known for example for quadratic attraction Newtonian propulsion we know that these two things are equal and that they equal the volume of the unit ball but in general it hasn't been shown and also we have no idea how these two parameters depend on A and B which is kind of controls the type of attraction and propulsion. So most of the recent work on the set value minimizers have considered a relaxed problem. The benefit of this relaxed problem being that minimizers always exist and if you can find a set value, a minimizer of this that set value well you know sure enough that gives you a solution to the original problem that you had in mind. And this relaxed problem also has an interesting dichotomy with critical masses. 
Frank and Leib showed that there exist values M1 and M2 such that if the mass is smaller than M1 the density is strictly less than 1 or in other words the set by which by the density I mean if the mass is strictly less than 1 the density which solves this problem which is the minimizer is strictly less than 1. So in other words the set on which it equals 1 has measure 0 and if the mass is larger than M2 then the density is identically equal to 1. So in other words the set on which density equals 1 has full measure. When you say strictly less than 1 could it be that still the thing is 1 on the sphere and it dips inside or is it? I said the measure yeah as long as so yeah when I say strictly less than 1 I just mean almost everywhere so yeah thank you yes. And they kind of interpreted this in terms of a phase transition so we have a liquid phase, a solid phase and then it's unclear whether or not these masses are equal and whether or not there exists an intermediate phase on which you know the density has 1 on a set a positive measure but not full measure. So kind of surprisingly given the significant amount of numerical work that had gone into studying minimizers of the interaction energy without this additional constraint that they be set valued there's been absolutely no numerical work looking at minimizers of the set value minimizers of the interaction energy or minimizers of this kind of could heighten constrained interaction energy. And the main reason for that is some of the best numerical methods for studying minimizers of the interaction energy take a particle approximation of the density or in other words they approximate the density by some of the direct masses and it's hard to make sense of this sort of L infinity height constraint or a shape constraint for some of the direct masses. But when I learned about this problem from my collaborator recent Apollo glue it occurred to me that this is actually connected to something that Yao and Inwon and I had worked on. So at the same time that these groups were studying set value minimizers of the interaction energy in one now and I were looking at set value solutions of a constrained aggregation equation. So let me tell you a little bit about the connection there. So in one and now and I were interested in this this kind of model. So I have quotes around it because this is this is not really a PDE but I'll tell you kind of the sense in which it holds in a second but this gets across the main idea. So we were interested in studying it's a continuity equation a row row what's to evolve according to this velocity field and it will as long as its height is strictly less than one but we have the additional constraint that the height of row has to be less than or equal to one at all times. So our motivation for studying this kind of equation came from a couple of different directions. First there had been some previous work on a congested drift equation so replacing this non-local interaction term by just a fixed external potential V. This was already kind of mentioned earlier today and Pierre de Gaulle's talk. This was introduced by Marie Roudinasse-Shipon and San D'Aprosio and then Yao and one and David Alexander had also studied this model. So we were interested into what extent the work on the congested drift equation can be extended to non-local interactions. 
We were also interested in this because as we saw in Yao's talk before there's a lot of interesting, let's see, it's very mathematically interesting to see how these attractive forces from a non-local interaction term can compete with repulsive forces for example from diffusion. And so this was like looking at a new type of repulsive force, the sort of height constraint and seeing how it competes with this potentially non-attractive term depending on which type of interaction potential K you could use. You would get the same effect if you just include basically a one minus row term into the equation, right? Like a non-linear additional term but you don't want that, you want to have a special constraint. Yeah, and I'll tell you also that, so the reason why we wanted the special constraint maybe to, is that it also has this connection with aggregation diffusion equation. So heuristically this is a singular limit of an aggregation diffusion equation. So that's why we were interested in this specific model though as you say there are other ways you could sort of penalize the row exceeding height one. We were interested in this specific penalization for this reason. And so just to kind of explain heuristically the connection between these two, if you take this degenerate diffusion term like for example this would be a Keller-Siegel equation if K were a Newtonian, attractive Newtonian potential. If you look at this degenerate diffusion term you can think of this as just linear diffusion with the diffusion coefficient D that depends on the density. And as M goes to infinity if row were strictly greater than one this diffusion coefficient diverges to infinity and if row were strictly less than one it converges to zero. So heuristically in the M goes to infinity limit we have no diffusion wherever row is strictly less than one and infinitely fast diffusion wherever row peaks up above height one. Okay so this was another reason we were interested in this specific model is because at least formally it was a singular limit. So we succeeded in showing that this equation was well-posed as the Wasserstein gradient flow even for singular interaction kernels up to the Newtonian singularity. And then we also studied properties of this equation. So for example solutions with set-valued initial data remain set-valued and we could characterize them in terms of a Healy-Shaw type free boundary problem and then quantify their convergence to equilibrium. So how does this help us with the original problem that I told you about? Studying properties of set-value minimizers of the interaction energy. Well because this constrained aggregation equation is a Wasserstein gradient flow of the exact constrained interaction energy I should produce a few slides back. In order to study minimizers of this energy one could maybe numerically simulate solutions of this equation for a long time. Unfortunately the same obstacles that you encounter when trying to use particle methods to simulate minimizers of this energy with the height constraint you confront the same obstacles in trying to simulate solutions of this PDE with the height constraint. But if we can make rigorous this connection between aggregation diffusion equations and the constrained aggregation equation there exists lots of good numerical methods for simulating these. So the hope is is there a way that we could study these minimizers of the constrained interaction energy by simulating solutions of aggregation diffusion equation for large M and for large time. 
So that was easy and my goal was to first kind of prove this low diffusion limit that solutions of these aggregation diffusion equations converge to the constrained aggregation equation and then use this as kind of theoretical justification for applying numerical methods for aggregation diffusion equations to shed light on properties and minimizers of the constrained interaction energy. And I confess when we started off this project I think I maybe had low ambitions I thought we would only be able to show this under very nice assumptions on k where I'm going to lift this continuous or at least have nice convexity properties. But in the end we're actually able to show it for quite a range of k including either any of sort of these Newtonian or Ries potentials and also these attractive repulsive power law potentials. Okay so I'll begin I'll tell you first a little bit about this and then at the end we'll see some pretty pictures of what these minimizers of the constrained interaction energy look like. So our proof strongly uses the fact that these equations have a Wasserstein gradient flow structure and so I feel like I should at least tell you the definition of the Wasserstein gradient flow. So I'll say a curve in the space of probability measures is the Wasserstein gradient flow of an energy if this inequality holds where here the metric slope of the energy this is just a generalization of the modulus of the gradient of the energy and the here of the metric derivative this is just a generalization of the absolute value of the time derivative. So first this might seem kind of very unrelated to the maybe gradient flow you had encountered in Euclidean space but to see the connection if you had a gradient flow in Euclidean space that's just a solution of this ordinary differential equation and this equality is equivalent to just saying that the left side and the right side have equal magnitude and point in the opposite direction. So the fact that they point in the opposite direction can be encoded in this sort of chain rule and then these two equalities by Young's inequality are equivalent to this thing on the bottom which when I integrate just gives me that. So this really is an exact generalization of the gradient flow you might be familiar with from Euclidean or L2 space. And so oh yes I'm sorry I'm missing squares here and here and then here and then here thank you. Thank you. This is the first time I've given this talk so please let me know the type of this so I can fix them later. So our goal in terms of showing the solutions of aggregation diffusion equations converge to. Likewise my question is what does d row mean in the second question? Rho dx dx. No the next slide. You go. K can follow with rho d rho. Yeah. Oh yeah I think I'm committing a notational abuse there. dx. I think when I wrote it so at first I was writing K rho is K convolved with rho d rho I guess because I was initially wanting to emphasize the first time I wrote that interaction energy it makes sense for example if I just take rho d be a measure like a sum of direct masses but then I started committing the abuse of notation that I write the density and the measure using the same notation. So in fact you could just erase that d and then you could consider this dx. So thank you. 
So both of these equations have gradient flow structure the aggregation diffusion equation is a gradient flow of this interaction combination of interaction energy in the Runian entropy in this constrained aggregation equation like I said is the gradient flow of the constrained interaction energy. So if we want to show that this converges to this as m goes to infinity it's equivalent to show that of course gradient flows of this converge to gradient flows of the constrained interaction energy. And this is really one of the benefits of your equation having a gradient flow structure is it makes it gives you a really nice framework for setting these types of singular limits. So in Serfati I gave a nice kind of collected some nice sufficient conditions for a sequence of gradient flows of the sequence of energies converging to a gradient flow of limiting energy. In particular she showed that if you have some sequence row m of gradient flows of em so that they converge to some limit and the energies are well or the initial data is well prepared in the sense that if you evaluate the energies align the initial data they converge. Then as long as you can check these three criteria whatever that limit was must be a gradient flow of the limiting energy. And to see that these you know to see where these conditions come from they come directly from the definition of the gradient flow. Basically what these conditions ensure is that if you if I put m's through here so using that row m as a gradient flow of em these conditions ensure that in the limit the left hand side jumps down the right hand side jumps up so that the limiting function is a gradient flow of infinity. So these criteria just come directly from the definition. So for our problem of showing the aggregation diffusion converges to the constrained aggregation this first criteria was not too bad. In fact all we actually need was that to show that the we needed to see that the initial data was well prepared in this sense. If we had that we were able to get a uniform lower bound of the energies along the gradient flow and that gave us enough compactness to show that our gradient flows row m converged to something. So then all we need to check is that something is what we want it to be. It's actually the gradient flow of the limiting energy. So most of the effort goes into the checking the remaining three conditions. The second one is also more just follows directly from kind of properties of the Vosserstein gradient flow structure. So this also didn't depend too much on the our specific choices of the energy. It was really just the first one the last one. Okay so this is what I wanted to tell you about is how we were able to get these two limits. So if you go back and recall what the equations for our energies you can see that this first lower semi continuity property of the energy along gradient flows is actually just a consequence of interpolation at Lp norm. You can think of this ringy interbetorium is just an Lm norm to the power m. So this is what you see that as you send into infinity it vanishes if the limit has L infinity norm less than or equal to 1 and then diverges to plus infinity otherwise. So this first condition is not too bad. It's the second condition that was the tricky part because we lacked any sort of uniform convexity or even kind of generalizations of convexity. We didn't have uniform lambda convexity or even uniform omega convexity. We didn't have anything. 
To see why convexity would be so useful if we knew that these energies were uniformly convex we could use some nice expressions for the metric slope that allow us to sort of translate or transfer this question of lower semi continuity of the metric slope back to just lower semi continuity of the energy itself. But we don't have this. All we have is just this specific structure for the metric slope. So I think for lack of time I won't go too much further into how we kind of succeeded getting this lower convexity continuity. But the basic idea was we were able to show that we were able to control this term separately, the interaction term. We were able to get a uniform bound there. And so from a uniform bound there and then a uniform bound on the metric slopes themselves we were able to get a uniform bound on the second term. And then that gave us sort of regularity compactness we needed to pass to the limit. So let's close. So here with our theorems then we'll get to some pictures. So in summary we showed that if we have a sequence of gradient flows of E m so that the initial data is well prepared in the sense then in fact they converge the gradient flow of E infinity. And then kind of along the way we showed that if we have a sequence of minimizers of E m then up to a subsequence in translations they converge to a minimizer of the constrained energy. Okay so thus in order to gain intuition for properties of the minimizer of E infinity we're just going to simulate rho m for large m for a long time. And then the way that I'm simulating these and the pictures you're going to see in a second is just using recent work with Jose Antonio O'Currio and Francesco Padacchini on our deterministic particle method for aggregation diffusion equations. Okay so here's the pictures. So all of these simulations were mostly done yesterday. They're all going to be in one dimension. I'm going to be assuming Newtonian repulsion and here these are the different types of attraction that I'm looking at. These first simulations I'm going to show you are just simulating solutions of the aggregation diffusion equation with really large diffusion exponent m so that these are approaching solutions of the constrained aggregation equation. Here you see the different initial data. So they all have, they're all of a bare and black shape but I've rescaled the mass according to these attraction parameters to ensure that they all have large enough mass so that they approach the set value of the minimizer. In other words they approach a characteristic set. And then I did the same down here with characteristic function initial data. So when the attraction exponent is less than 2 we see the solution kind of hit the height constraint in the middle and then spread out. When the attraction constraint, attraction parameter equals 2, kind of it hits it all exactly at the same time in this case. And when the attraction constraint is bigger than 2 it actually hits the height constraint on the boundary first and then fills in. So this is just illustrating kind of the idea behind our approaches that we basically, you know, simulate solutions of this constrained aggregation equation for a long time. So here, yes. I thought your equation was mass preserving. So it is, but I'm looking at different choices of, yes. So each one is preserving mass? Yes. So, okay. And then here's kind of the snapshots. So here I, now all I'm plotting is the equilibrium behavior. 
So I've let these simulations run for a long time and I'm seeing how it varies for different choices of the mass. And what's interesting about this is we see the intermediate phase that leave and break conjecture. So for example, here we have the liquid phase for small enough mass that as the mass gets larger, it hits the height constraint and then sure enough, as long as the mass is sufficiently large that we hit the solid state. And then lastly we can look at how the critical mass scales with the repulsion. So thank you very much. Questions? Yeah. For these set value minimizers, what's known as a shape if they are ever not known? So there's a, whenever the energy, whenever the minimizers are unique, they're certainly unique, but there are some cases in which we don't have convexity of the energy and so we don't have uniqueness of minimizers and then it lessens now. Is there any conjecture? I think there probably always falls. Always, okay. For the set valued, let's see, hold on, let me think about that. So if for these power law interaction kernels with singularity up to including 20 potential, I'd say they're probably always false. There's a region which is unknown, but still you would say that they should be false. Maybe I have one about the, so this past diffusion or slow diffusion limit. It reminds me the incompressible limit for the tumor growth and also what they have passed as spetham and the ketone. This is very in a connection. Yes, and I think these are all kind of connected back to the Mesa problem for the force equation, kind of simmix, looking at that limit of degenerate diffusion as a... Right. So what is the... I'm getting more of that. Yeah, so I'd say the differences are when you have this non-local interaction term that's leading to attraction, then that sort of introduces new complications in terms of getting this limit because Rowe wants to build a concentration on the singularity. Okay, so any more questions? Thank you very much.
For a range of physical and biological processes—from dynamics of granular media to biological swarming—the evolution of a large number of interacting agents is modeled according to the competing effects of pairwise attraction and (possibly degenerate) diffusion. We prove that, in the slow diffusion limit, the degenerate diffusion becomes a hard height constraint on the density of the population, as arises in models of pedestrian crown motion. We then apply this to develop numerical insight for open conjectures in geometric optimization.
10.5446/59159 (DOI)
I'm delighted to be here. A little frightened of being the first speaker, I agreed to do it with the understanding that there would be a high level of impromptu in this talk. I volunteered to give only a short talk, and so this is only going to be a short talk. But my motto for a short talk, which I wrote here on the note, so I don't remember, is that in a short talk you can only say one thing. So it will appear that I'm repeating myself all the time, which of course I am, because I'm only going to say one thing, but my goal is, of course, that maybe tomorrow is to remember what that thing was for. So this is the pointer, and presumably if I press this, it goes to the first. So my topic is about the gradient flow for microstructure. Maybe there is one, and that's what I'd like to explore. And this is the sort of microstructure which we have in mind. This is a nickel, and it's from the Carnegie-Unexplained Material Science Department from their orientation imaging microscope. The collaborators have been a one-term project, and I'd like to name my collaborators. So Patrick Bargely, who was a student of Kajah Epstein, who was at Texas at the time. Now he is in industry. Katie Barmak is now at Columbia. She's a professor of material science, and she was formerly, she's a Phillips professor of material science or physics at Columbia. And we were all together at Carnegie Mellon when we did this work. Eva Egling, who is now in Graz. Maria Menonenko, who is in George Mason. Katja Epstein, who is in Utah, Shunyainlu, who I think is now at Lakehead, which was shocked at Microsoft and my mathematics co-conspirators, Slo-Mo Tazam, who is at Carnegie Mellon. What I'm going to discuss today is really primarily the work of, really, you know, quite innovative work of Katja Epstein and her student Bargely. And these are three primary references for the material, and I have all of them. I can email you all of them. And let's talk a little bit about material, microstructure, and texture. So we want to pass from the, let's say, the observational to the, through the, briefly through the idea of the theory, and then to the problem which we were to confront, which is about the, at least the analysis, if not the prediction, of material, of microstructural texture. So here's an example of aluminum thin film from Bargely's laboratory, that bar is 200 nanometers. This is the nickel, similar picture showing at, as the one on the first slide. And this is a conventional pole figure, which is a, which shows the distribution of cell boundaries. And however they make that, the important thing about this picture, although the bar, they've subsequently changed. So this is called cooler, because presumably these are lower energy. Although you would think it should be hotter because of more carbon, right? So they've changed that lately. And the important thing is it's not uniform. So cell boundaries in pictures like this, in this do not occur in a uniform way. And thus you can know, you can think right away, that if you're going to assume that, that curvature, driven growth, or grain growth, things like this, are governed by some kind of surface energy, that surface energy will have to be an isotropic. And here. So cellular structures like this are ubiquitous, and most materials, natural and engineered, are polycrystalline. They consist of a myriad of grains like that, separated by interfaces, which are the grain boundaries, are interested in the texture, which should be the distribution of these cells. 
The colors here indicate only crystallographically close orientations of the cells. But what is moving is the cell boundaries. So if we are going to be seeking a reliable statistic, we should cast away our intuitive notion that the cells are, the boundaries of the cells are meaningful. It's really the cell boundaries which are a little more difficult to benchkull. So microstructures course them according to thermodynamic rules with topological constraints. Energy is dissipated, some cells or grains expand and others disappear. This is, this is, these obvious things like this are the basis of our analysis. So the grain boundary character distribution, GBCD, is a portrayal of texture, and its presence, that is to say the picture like this, shows that the boundary network has some order. Okay, now, so all what we did was, well we said, well we'd simulate this. As a three-dimensional problem, this is impossible, or nearly impossible to do under the circumstances and financial constraints that we had, but we could do two-dimensional problem, well we think that was very challenging. That was also very challenging. We then harvest the GBCD statistics, and under this circumstance, which is what we're going to discuss today, the interfacial energy, depending on crystallography alone, this statistic which we propose, this GBCD, is a Boltzmann distribution. And that's why we want to study this statistic, because this is, you know, it's an extremely simple distribution. It's coming essentially from the thermodynamic description of this system, which everyone believes is a correct thermodynamic description. So it must be a real statistic, because it's so simple. And so the question is, why does a simplicity emerge from such complexity? So here I just have a gallery of some of these GBCDs in what we might call equilibrium. This is one we'll be talking about. The associated interfacial energy is, let's say, quadratic, and of course they are not quadratic, cubic, or quartic, because they have to be periodic, because the cells are, you know, it's basically a cubic lattice we're considering. But they resemble those. Okay, so that's a scrapbook of Boltzmann's. On the other hand, we can do any kind of simulation. Here is a simulation with all parameters involved, but still too deep of a shallow well potential, really, just a quartic, quite well. And a bimodal distribution, this is an experimental picture, completely independently derived. So in this situation here, we did the simulation, we collected the statistic, and we plotted the statistic, and we got that. In this situation here, they had a peculiar experimental situation where they were able to collect population statistics, which is the blue double bimodal plot, and in addition, they calculated the energy. They calculated the energy using a method that we had earlier devised for this purpose. Okay, so what are the variables on that plot that I can't quite read from here? Okay, all right, very good question. So here, in both cases, the red is simply a non-dimensionalized picture of the energy density, which separates to basically, which separates a grain boundary, which is the grain boundary energy, and then the blue is popular, so the blue is a probability distribution in both cases. And the end variable along the bottom. Okay, these are... Okay, so this is in this orientation angle. In both cases, this is in this orientation angle, and radians, this is in this orientation angle, and degrees. 
So since we have it, we are going to be upscaling some system, which occurs in an experimental, some kind of way, in which we don't know too much about, we're going to try to introduce a mass transport procedure. And we're going to also try to associate to it a gradient flow, as indicated in the title of the talk. So a gradient flow for Falker Planck based on the Georgie minimizing movements, where we studied the book, of course, of Ambrosio and Judy Salveret, and also this book by Sant'Ambrosio called Mass Transport for Applied Mathematicians, I think. And do you know that book? So I discovered almost by accident, because I got an act, it's available as an e-book. And so you can just download it from your institution as a subscription, do you know that? And I don't know, is your book available as an e-book? Yes, it is. Great. This is so handy. I don't have to carry stuff in my bag. Okay, so not everyone, perhaps, at this hour of the morning remembers what a gradient flow is in this context. So here is free energy, and for this free energy, we'd like to write an equation which is associated to it, and that would be this Falker Planck equation with some simple boundary conditions or periodic, and so for conventional gradient flow, of course, you simply have this differential system. And then you can write, according to going with a following to Georgie, the potential at a given time minus the potential at a slightly later time just by integrating the derivative and breaking it up by Young's inequality. And this will always be negative with equal to zero, only if C itself is actually the gradient flow. So we all know this. So this is essentially the only analytical tool of the known to get started. So for the Falker Planck equation, namely for the standard elementary free energy and the Falker Planck equation, the gradient flow is characterized by this expression. So here we have the dissipation part, and here we have the CDT squared part, which is governed also by continuity equation there. And that comes exactly by integrating this, and here we have with an ordinary some kind of velocity, and then doing it the usual way. So the feature here is that entropy itself, so this is a generalized entropy, does not characterize a gradient flow, as you know. As you know, for heat equation, for example, any complex function can serve as an entropy. And not everyone knows, but with you teaching this stuff, then you know that for a Markov chain, the ordinary Kulman-Kobach-Weidler relative free energy, or essentially this, is decreasing on every Markov chain, so there's an entropy for everything. So this will not characterize the solution. So the solution we would like, our theory would like to realize the solution of the equation and the gradient flow as implicit screen, which now everyone knows. You simply make an implicit scheme for the Wasserstein metric and that free energy, line them up and take a limit, and in this case, easy to show there's convergence. And for this, there's a discrete Euler equation. For that implicit scheme, there's a discrete Euler equation, where phi is the mass transport transfer function from, say, a starting row star to a terminal row, and it looks like this. And this condition says that the gradient flow condition is satisfied identically with the Wasserstein metric at the level of the implicit scheme. You just multiply this by row and square, or you square or multiply by row and integrate everything. 
These are all equal, and this is the Wasserstein metric and this is the dissipation. And these are the gradient flow conditions, which you would get in this way, which we write as two equations, right, and rather than the average one, because these two quantities here, I'm pointing to, are highly fluctuating quantities in the simulation, whereas the energies themselves are not. So it's easier to verify these two than one. Okay, so that's our theme, is that the collection of harvested statistics satisfies the discrete gradient flow conditions. That's amazing. We found this astonishing. And so the GBCD statistics arise as the iterates of a Wasserstein implicit scheme. So you go from this picture directly to here are the profiles of the solution. We found this, so the GBCD is, and then we claim a gradient flow, and therefore a solution of a Falker-Plank type PD, maybe not exactly a Falker-Plank equation. This verification is just astonishingly accurate. And so part of my theme today, that's the one thing I want to say, is that verification of the gradient flow condition is extremely accurate, and so there must be a better theory than the one I'm about to tell you. Yes, you should be able to verify directly the gradient flow conditions, rather than making the highly entropic complicated theory we did. Not that complicated. But so for a moment, I have to tell you, I personally am not looking at my clock. I should have tried. Yes, I'll have to wait a minute. So in 1951, Cyril Stanley Smith wrote a paper on microstructural coarsening, famous now to mathematicians because the von Neumann, and then later the von Neumann-Mullens, who came about as a footnote to it. There's a story for that, but I can only have time to tell you that story if you ask a question. So here's an example, so prosperous, I'd like to remember that even though Smith was a highly talented metallurgist, you know, problem with metals is that they're opaque, so you can't see what's going on very well. You can look at the surface, and you can look a little bit in certain kinds of microscopy. Otherwise it becomes kind of complicated. So here is soap broth, which is used in classrooms. Here is Lecaire's law, which was called the Kerslaw long after it was discovered, if it's discovered at all, if it is in fact a law about the prevalence of certain kinds of, the distribution of certain kinds of cells according to this number of edges. So the average number of facets per cell is 6. This is just a constraint on the Euler characteristic of the simplicity composition of the plane if you insist that all the junctions be critical junctions. And that was published, however, in annals of mathematics by W.G. Grousting, who was the chairman of the department at Harvard at the time. So, of course, according to Smith, it's just governed by two global features. The first is cell growth according to some local evolution law, like growth by curvature. And that's in competition with the space filling constraint. That space filling constraint, of course, means that some cells, if some cells grow, others have to shrink. And to test this without asking about what sort of defect relations are on the surfaces, what sort of poles are in the material, is there temperature variation or anything like that? To test these two features, we will do a simulation. That's why we did simulation. So here in this, see, abstraction is a concept of not representing objects as they are, but bringing them down to their basic shapes and colors. 
That's how I design those giant camel statues. That's what we're going to do. My only cartoon for today. So for evolving networks, I would reprise that briefly. So, the evolution law is just curvature-driven growth. Perhaps started by Burkart and Reed shortly after the war. We call this the Mullins theory with some boundary condition, which is perhaps 2 to a error and perhaps not. And so, we simply have these equations. The normal velocity is some kinetic factor times the energy density, C theta theta plus C times the curvature cap along each arc. And this is the boundary condition at triple junctions. It's a natural condition at equilibrium for this. And what we're going to do, there will be no, just C. And the spatial and constraint, you can see examples of it here, which are in fact from one of our simulations. So here you see an exchange of facets. This cell will start to disappear. Here is grain deletion, this cell disappears. The von Neumann-Mullins N-6 rule is that if a cell has N facets, then the rate of change of area is proportional to N-6. That's when the energy density is constant. And we're going to be saying that just by integrating the curvature around the curve, taking account of the jumps and faceting two changes and accounting for the very last one, since you had to go all the way around. So, recent results of the Mekfusen and Srollovich are much more complicated. It extends or they found the equivalent in arbitrary dimension. In terms of network, so the idea is that any boy and his dog, any child in their dog, excuse me, is a dog, can simulate the growth of a surface. This is just, and it's done all the time, sometimes by random techniques, by Pautse-Pollin-Montecarlo, sometimes by phase-trick models, sometimes by just solving the PDE. But networks are much more complicated. And so the first, whoops, result on networks like, I mean, I know of, it's due to Brunswick and Radish in the early 90s. Later had results with Schollm-Mew, and there are some other results. The main issue here is dissipation. There's a dissipation inequality. And the dissipation is dry school formula, and this is the formula which we have to upscale to obtain our theory. Okay. There's a lot of stochasticity in the network. The average area of five-sided grains here in the growth experiment, here in the simulation, here's the average area of all cell sides. The point is that five-n minus, five minus six is negative. So an individual five-sided grain, as issued by theory, have a decreasing area, but the ensemble of them have increasing area. That means, of course, that the cells you're looking at at this time, at earlier time, at more sides. So these rearrangement events are very important for the evolution of the system. And in fact, the boundary condition, without clarifying the boundary condition, we do not obtain reliable stochasticity. So we have a simpler model for that, which I'm not going to discuss, but which you can see in the references which I described. So here we have a, now an energy which depends on lattice or on misorientation. It's missing some things. Namely, to upscale this to the character distribution, we have to include some entropy because we're simply omitting lots of information. And then we will insist that at a given time, the energy is not more than at an initial time. And then how it gets there is we are going to claim that it is, it moves by this velocity. So that's competitive from Boston, I think, so if we want to find the smallest one, we'll use that. 
So this is the, in a nutshell, how we derive our theory. We talk a lot more about it, but that's basically the basic, those are the basic ingredients. So this introduces the idea of both a batch transport and of entropy to the system. So success will then mean that this grain-bounded character distribution, which is an empirical first order texture statistic, it's not part of it. It's not something we're simulating, it's simply something we're harvesting from the simulation or from an experiment, resembles a solution of a Fokker-Plake equation. And we're going to determine the number of parameters which corresponds to temperature, which we'll do by just writing out the CoBack line order, a relative free energy and minimizing this over possible temperatures here at Sea Land. So this turns out to be a maximum likelihood estimate, which you can find, believe it or not, in Feller's elementary, in fact, in Feller's. But we would recognize it as calculating the dual function of some function. So we're actually a little disappointed to find out that all these techniques which we were using were actually known for years and years. But on the other hand, that gives you a lot of confidence, like some confidence that you're doing the right thing. So here's a 2D-courcing result. This is the energy, here is a trial distribution, here is the CoBack library relative free energy, which tends to zero if lambda is the correct number called sigma here. And here is the entropy itself, which is increasing, theta will roll log, roll. That's not the CoBack library, energy by just the energy select. That's increasing, and so that means that the system is not tending to a homogenous state, and that's good. And here is the entropy picture, the red one is the one we selected, and this is the answer, this is the terminal distribution, the empirical distribution. These are 10 trials beginning with 20,000 initial cells. So the red is the Boltzmann, and the blue is the relative entropy conditional solution. As you can see, they match pretty well. Okay, so now we're going to check the gradient flow condition as before, as a promise. So there it is. So we're going to label the frames, row J, and we have here these two conditions, which I already told you about, and we can verify those conditions. And so our claim is that these frames arise as the iterates in implicit speed. There is one remaining problem, however, and I like to... which is appropriate to mention at this moment, and that is the time... when you do the simulation and you're just collecting some frames, some harvesting, some statistics, that may have nothing to do with the equation we're trying to identify. It's to recalibrate, and it's essential, in fact, to regard these simulations that are simply frames, as samples of an evolving process, and then... let's see, I think if I click this, yes, you can see here is the red movie thing. The ability to label that one red was printed out to us by 8am. So we have to establish a sequence of time intervals of frames by comparison with a computed solution of the PDE, which we have to do in a very careful way. And then this becomes an inverse problem. The machine time is not the same as the focal length time. This is true even for simple systems like the air infestern, which you may collect. You may be very accurate at the simulation of this well-known, simple, random process. 
But when you try to associate it to even the Markov chain to which it's associated, it may not work if you have not correctly identified the time scale. Okay, so here's the answer. The blue is the difference of the energies at the iterations of the histograms, which tends to zero as it should; that's not the energy, but the difference of the energies. And here is the dissipation condition, in magenta. You see, look, it's nearly working. And these are the distributions at those steps. So this is just a start; we're very thrilled with this, but we don't know how to obtain the gradient flow directly, without passing through the theory. That was a simplified problem. And here again is our GBCD, and here it's even better than in the simplified problem. We calculate the process time by using an old method, which is very slow and limited to one dimension (because that's what we have), which Walkington and I did a long time ago. Here are the density plots, and they match very well, with 20 to 80 percent of the cells deleted. Here, just to compare, is the correct relative-entropy curve with the identified diffusion coefficient. Here, for this potential, we didn't do quite so well, as you can see, but we were able to identify it with a time-dependent diffusion constant. So there are a lot of challenges, and there is significant current interest. And so here is my summary, really more like where things stand at this time. The GBCD is the grain boundary character distribution. I showed you at the beginning the consistency between experiment and simulation for it, in that one picture. If the interfacial energy depends only on the lattice misorientation, then this GBCD is a Boltzmann distribution. We have a mass-transport-based theory which describes this evolution, and we claim that the harvested statistics are iterates of the implicit scheme. In other words, they're not approximations to the iterates of the implicit scheme, and they're not approximations to the solution of the equation: they seem to arise precisely as the iterates of the implicit scheme which you would use to solve the equation. So we claim from this that the GBCD should be the solution of some kind of Fokker-Planck equation; not always a simple textbook one, but maybe with a varying diffusion constant, maybe with some other glitches, I don't know. The gradient flow identification is the first use of mass transport in this context that I know about. Namely: we have an experimental system for which we have a thermodynamic model; from that thermodynamic model we made a theory for the statistic, and that statistic satisfies a gradient flow. It's something, in some sense, which we discovered from nature. And so it must be possible in other systems, like, well, a random walk. In a random walk, I can tell you (here it comes, in green) it took me over a year to figure out that, well, yes, of course, in a random walk: write down the implicit scheme for an ordinary random walk, then just discretize it, and you obtain the random walk. So what you obtain is not that the solution, so to speak, is the random walk.
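And here is a hedged sketch of the kind of dissipation check just described: treat consecutive harvested histograms as candidate JKO iterates and test d(rho_{j+1}, rho_j)^2 / (2 tau) + F(rho_{j+1}) <= F(rho_j), with the one-dimensional quadratic Wasserstein distance computed from inverse CDFs. The free energy F and the time step tau are placeholders; calibrating tau is exactly the inverse problem mentioned above.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 400)
dx = x[1] - x[0]
psi = 1.0 + 0.5 * x**2                     # placeholder energy density
sigma = 0.05                               # placeholder "temperature"

def free_energy(rho):
    rho = np.maximum(rho, 1e-12)
    return np.sum(psi * rho + sigma * rho * np.log(rho)) * dx

def wasserstein2(rho_a, rho_b):
    # 1D quadratic Wasserstein distance via inverse CDFs (quantile functions)
    cdf_a = np.cumsum(rho_a) * dx
    cdf_b = np.cumsum(rho_b) * dx
    q = np.linspace(1e-4, 1.0 - 1e-4, 2000)
    qa = np.interp(q, cdf_a, x)
    qb = np.interp(q, cdf_b, x)
    return np.sqrt(np.mean((qa - qb) ** 2))

def dissipation_ok(frames, tau):
    # frames: list of normalized histograms harvested at time spacing tau
    checks = []
    for r0, r1 in zip(frames[:-1], frames[1:]):
        lhs = wasserstein2(r1, r0) ** 2 / (2.0 * tau) + free_energy(r1)
        checks.append(lhs <= free_energy(r0) + 1e-8)
    return checks

# usage sketch: frames would come from the simulation harvest
# print(dissipation_ok(frames, tau=0.01))
```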
You're doing this implicit scheme, moving from state to state, and the continuity equation which arises is the random walk you obtain, namely the ordinary random-walk Markov chain. So somehow the point of view here is not to solve the equation and then see whether some statistic satisfies some gradient flow condition, but to evolve a gradient flow and then derive the equation as its continuity equation; to intellectually reverse the process. So that's it. Thank you very much.
A central problem of microstructure is to develop technologies capable of producing an arrangement, or ordering, of the material, in terms of mesoscopic parameters like geometry and crystallography, appropriate for a given application. Is there such an order in the first place? We describe very briefly the emergence of the grain boundary character distribution (GBCD), a statistic that details texture evolution, and illustrate why it should be considered a material property. Its identification as a gradient flow by our method is tantamount to exhibiting the harvested statistic as the iterates in a mass transport JKO implicit scheme, which we found astonishing. Consequently the GBCD is the solution, in some sense, of a Fokker-Planck Equation. The development exposes the question of how to understand the circumstances under which a harvested empirical statistic is a property of the underlying process. (joint work with P. Bardsley, K. Barmak, E. Eggeling, M. Emelianenko, Y. Epshteyn, X.-Y. Lu and S. Ta'asan).
10.5446/59176 (DOI)
Thank you very much. Thank you very much to the organizers for inviting me to this wonderful place and this great meeting. Before I start, I would like to show you something, a good reminder of what I'm going to talk about. This is a simulation done by my collaborator Federico Bonetto. What you see on the left is a bunch of particles, about 500, and on the right-hand side is a histogram of the x-component of the velocities of these particles. It's a little bit smoothed out, because otherwise the fluctuations would be too big. In any case, this is a little movie, and you see on the right-hand side how this thing moves, right? It fluctuates, and eventually it moves towards a Gaussian. Isn't that sort of convincing? My screen is not that great, but you sort of see it, right? And of course this is the Maxwellian, and to a large extent it's a miracle that this actually happens. You can do this with many initial conditions and the same thing just happens: you get to the Maxwellian. And this is a phenomenon which, in my opinion, is not really understood. Really, what you would have to do is have a reasonable understanding of the Boltzmann equation and how it's derived from Hamiltonian mechanics. Now, for hard spheres it is in fact proved, using twenty years of work from the Sinai school showing the ergodicity of the hard-sphere gas, with I don't know how many attempts; every time they thought they finally had a proof, somebody found a hole, and I think by now the holes have disappeared, so we know that. But if you take the interaction and modify it slightly, everything is off, okay? So I don't think that ergodicity is really the explanation for this. I have here another example; look at this picture here. Here you have the particles in this quarter of the box, and here's your velocity distribution. What would you think should happen? Well, at first the gas expands, so it cools down a little bit, and your distribution should sort of shrink a little bit. It moves to one side, but then eventually, because you have kinetic energy packed into the translational motion, I mean the center of mass, that center-of-mass energy gets converted back into thermal energy, so you think it should broaden again. So let's see what we see. You see it move to the side, you see how it thins out a little bit, nicely cools down, and then eventually it broadens and comes back, right? There you go. And what is nice about this: Federico did the simulation in two minutes, well, half an hour to write the code, and it is fantastic what you can do today. However, I cannot talk about this hard-sphere gas in the Hamiltonian setting, so what I'm going to do is talk about a simple model, the Kac model, and ask about the entropy in this model. Now, what's the problem? The recording? It's already recording. I turned it off and turned it on again. So, fortunately, I was lucky that Maria Carvalho talked about this already yesterday. Here is this probabilistic model for n colliding particles. Again, we keep it simple, just one-dimensional. You pick out of these n particles a pair at random, with uniform probability; you pick a scattering angle at random; you update the velocities.
So this is your scattering law, which conserves the kinetic energy. What it really is, is a random rotation in the (i, j) plane, right? And then you update with these collisions, and the collision times are exponentially distributed. With that you can cook up a stochastic process, and the generator, or the equation, is then given in this way. First of all, you see the energy is conserved, because rotations preserve the length. And what we're looking at are probability distributions which depend only on the velocities v, so we are describing a spatially homogeneous situation. The master equation is given by this expression; here's the master equation (ah, I can use that), and here is the solution of this master equation. And what is Q? Q is this operator here: the R_ij are just the averages over collisions, over the collision angles, the averaged rotation in the (i, j) plane; you sum this up and average, dividing by n choose 2. So this is your system, and you know everything about it: it's linear, you can solve it, God knows what. Good. And here's your initial condition. Now, Maria talked about various issues last time. Let me maybe make a point here. Look, when you fix a particle, say particle 1, and ask yourself what the collision rate of this particle with the others is, you see that you have n - 1 terms, because particle 1 can collide with n - 1 particles, and what you see is that the collision rate is fixed, independent of n. That is what is usually called, in the context of the Boltzmann equation, the Grad condition. So this is maybe good to know. And what we are interested in is the negative entropy. So the entropy for me is now always positive, right? The physical entropy would have a minus sign in front, so allow me to use this convention; I think this is not a problem in this audience. Physicists always point this out. So anyway. Good. So now, what can we say about the entropy? How does it move along the flow? That's the question. And there have been some conjectures. For example, there's Cercignani's conjecture, which was originally formulated for the Boltzmann equation, and the hope is that you can get some kind of exponential decay of the entropy with a constant which really doesn't depend on n. Now, one of the approaches one might take is to look at what is called entropy production, which was actually, I think, invented by McKean, and very much developed by Eric Carlen and collaborators. So what you do is: you look at the entropy, you differentiate it along the flow at time t equals 0, relative to the entropy, and that's the expression you get. You notice I put the minus sign in front, so this gamma, this expression, is always positive. And then you take the worst possible case: the infimum of this ratio over all possible distributions. Once you establish this, once you have a handle on this gamma, you get of course this differential inequality, and you have exponential decay. So this is quite nice. The bad news, however, is that Cedric Villani actually proved a lower bound of order 1 over n, so, if you want to complain, this is not very good. It turns out it actually is good, in the sense that you can in fact find an upper bound of roughly the same order, due to Amit Einav. So this is bad news.
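For concreteness, a minimal simulation of the Kac walk as just described: uniformly random pair, uniformly random rotation angle, exponential waiting times. The overall collision rate chosen here (order one collision per particle per unit time, in the spirit of the Grad condition mentioned above) is my normalization, not necessarily the one on the slide.

```python
import numpy as np

rng = np.random.default_rng(0)

def kac_walk(n=500, t_final=20.0):
    # velocities on the Kac sphere: total kinetic energy fixed at n
    v = rng.standard_normal(n)
    v *= np.sqrt(n / np.sum(v**2))
    rate = float(n)          # my choice: O(1) collisions per particle per unit time
    t = 0.0
    while t < t_final:
        t += rng.exponential(1.0 / rate)                 # exponential waiting time
        i, j = rng.choice(n, size=2, replace=False)      # uniformly chosen pair
        theta = rng.uniform(0.0, 2.0 * np.pi)            # uniform scattering angle
        vi, vj = v[i], v[j]
        v[i] = vi * np.cos(theta) + vj * np.sin(theta)   # rotation in the (i, j) plane,
        v[j] = -vi * np.sin(theta) + vj * np.cos(theta)  # conserves vi^2 + vj^2
    return v

v = kac_walk()
# a histogram of v (or of one component over many runs) should look Gaussian;
# on the Kac sphere the empirical variance is ~1 by construction
print("mean:", v.mean(), "variance:", v.var())
```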
In other words, when you start with any kind of initial condition, your chance that the entropy decays in the reasonable ways is very slim, it looks like it. Of course, I have to admit, this really doesn't prove that there is not exponential decay. I mean, it could very well be that you have a very, at the very beginning, again, it's very flat, right? And then suddenly the system sort of has recovered enough that it suddenly drops off nicely, exponentially. So what I would like to see, I would love to see an initial condition where you, for example, could prove something like that, right? A lower bound, say, of 1 minus t over n, that's a lower bound, right? But this is still, nobody has produced anything of that kind. Okay? So this isn't some sense bad news, right? And the question is, what are physically reasonable situations where we can expect exponential decay in entropy? So a partial answer was given yesterday by Sao, right? I mean, they characterize different states where you have reasonable entropy production. And what I do today, and this is joint work with Federico, but Bonetto, is, you see, when I say approach to equilibrium, suppose I take all the molecules of this room and stash them into one corner, take a few of them, output all the energy in them, and then let them bounce around. Who would expect it takes you forever on these equilibria, right? So in other words, the problem is, because with this phenological approach, you don't really know what are physical conditions, what are reasonable conditions. For example, in the simulation before, I gave you the initial condition had still a very nice distribution on the velocities. They were not crazy in any way, right? And then you can expect reasonable approach to equilibrium in a reasonable amount of time. So what we thought was, well, let's take the following situation. How can we control approach to equilibrium? We make small distortions within a big system. So we look at the following situation. And here is a first example, namely, you take a system interacting with a thermostat. So the thermostat, you have to imagine, is a heat reservoir, it's an ocean. And what we do, we look at the following on this master equation. Now the F is now the distribution of velocities for the system, not the ocean for the system. So that depends on M particles, M variables, velocity variables, okay? And then you have to, so this is the interaction of these particles among themselves. You know, they happily collide, they do all sorts of things. And then you have the interaction with the reservoir, that's this piece. Now what is this piece? I've written it down here. It looks a little bit forbidding, but it's actually quite simple, because what you do, you think of this thermostat as this huge reservoir, right? Infinite reservoir, as particles in thermal equilibrium. And then once in a while you pull one out, you collide it with a particle J, that's what you do here, right? Here's the Gaussian, here is the collision law, right? And then you average over the angle, and then you integrate overall the velocities of these particles which you pull out of the reservoir. You have done this collision, you throw away these particles from the reservoir because the reservoir doesn't change, right? It's infinite, it doesn't move. Okay, so now you can analyze this master equation. This is not really, not quite so hard. You have a little bit of a problem in the sense that the energy is not conserved, that's not a surprise, right? 
Because after all this heat flow, if the reservoir, if your initial condition has low temperature, it heats up, and if it has high temperature, it cools down, and you can easily compute the kinetic energy that's this expectation value, and you see this is Newton's law of cooling, right? It comes out, and you also figure out that the unique equilibrium state is a Gaussian with temperature beta minus 1, that's this Gaussian. That's the reason, right? It's not a big deal. And very much what you expect. So, a little bit harder is to study the relative entropy. So what we do, we take this expression here, so that's the Gaussian, right? That's your equilibrium state. You have your initial condition, so you look at this gadget. And then you can prove that this relative entropy decays indeed exponentially fast, and what you see here, it depends only on the interaction of your system with the reservoir. I should say thermostat. It's really infinite. And so, it's always nice to have very good students, so this was a Ranjini, my dear, not that she was a student, the Federico Bonetto, which actually showed that this rate is optimum. You cannot prove it. You can really find a function so that this is really the rate. Okay? Good. So, no surprise there. Now, one question which you could ask yourself is this. I mean, a thermostat is, of course, an idealization. We could now ask ourselves what happens when we take now the system of n particles, and we hook it up to a reservoir of n particles. Maybe it's not a good choice of words, right? n and m sound almost the same. So n particle and larger than m. Hopefully, I mean, much larger than m. And then you ask yourself, well, so when you couple the systems, what can you say? So here is a system interacting with the finite reservoir, finite reservoir now. So here you have the cat's evolution, if you like, the cat's generator of the system. So these are these particles which collide among each other. And then you have the reservoir. This is also a bunch of particles which collide among each other. And then you stick in an interaction. And here's the, this collision between the particles in the system and the particles in the reservoir. Now, you see, here's a point to be made. When you fix a particle in the system and you ask yourself, what's the collision rate with particles in the reservoir? Well, how many particles do you have? You have fixed one of these particles in the system. So you see that the collision rate is mu, which is independent of n. Okay? However, when you take a particle out of a reservoir and you ask yourself, what's the collision rate for that particle? Well, here you have only m particle in the system. So your collision rate is mu, m divided by n. So when n is very much larger than m, the collision rate is tiny. And what you would expect is that the reservoir shouldn't change very much over time. Okay? Which is perfectly reasonable, right? So, you see, in this system when I start in any state, the reservoir moves along, right? It interacts with your, here, with this interaction term. It interacts with the system and interacts and tears the reservoir. They all interact as a job. Okay? So here's the thing, right? We start now with initial conditions. And how do we choose them? We choose the initial conditions such that we put the reservoir into an equilibrium state. So all by itself, this guy wouldn't move. But now we put the system out of equilibrium, that's this f0 of v. 
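A sketch of the thermostatted dynamics from a couple of slides back, for comparison with the finite reservoir: besides the internal Kac collisions, each system particle collides at rate mu with a fresh Gaussian velocity drawn from the infinite reservoir at inverse temperature beta. The specific rates and the reporting scheme are illustrative choices, not the talk's.

```python
import numpy as np

rng = np.random.default_rng(1)

def thermostatted_kac(m=200, mu=1.0, lam=1.0, beta=1.0, t_final=10.0, n_report=20):
    v = 3.0 * rng.standard_normal(m)       # start "hot" relative to the reservoir
    report_times = np.linspace(0.0, t_final, n_report)
    energies, t, k = [], 0.0, 0
    sys_rate = lam * m                     # internal pair collisions
    res_rate = mu * m                      # collisions with the reservoir
    while t < t_final:
        if k < n_report and t >= report_times[k]:
            energies.append(0.5 * np.mean(v**2)); k += 1
        t += rng.exponential(1.0 / (sys_rate + res_rate))
        theta = rng.uniform(0.0, 2.0 * np.pi)
        if rng.random() < res_rate / (sys_rate + res_rate):
            # collide particle j with a reservoir particle w ~ N(0, 1/beta),
            # then discard w: the infinite reservoir does not change
            j = rng.integers(m)
            w = rng.standard_normal() / np.sqrt(beta)
            v[j] = v[j] * np.cos(theta) + w * np.sin(theta)
        else:
            i, j = rng.choice(m, size=2, replace=False)
            vi, vj = v[i], v[j]
            v[i] = vi * np.cos(theta) + vj * np.sin(theta)
            v[j] = -vi * np.sin(theta) + vj * np.cos(theta)
    # mean kinetic energy per particle should relax, roughly exponentially
    # (Newton's law of cooling), toward the reservoir value 1/(2*beta)
    print([round(e, 3) for e in energies])

thermostatted_kac()
```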
And now we plug this in into our evolution equation as an initial condition. Of course, at the beginning, yes, you're in this situation, but eventually correlations will build up, right? And these correlations will start kicking the system, the reservoir out of equilibrium. What would you expect, of course, that's easy to prove for finite time when n goes to infinity, it will stay in equilibrium. That's trivial. Okay? But you can actually show a little bit more. Let me point this out. So the equilibrium state for the finite reservoir system is not a Gaussian. Why? Because the way you should think about this model is what you do. You average over rotations, right? I mean, that's what this system is doing. So now you take your initial condition here, and you average it over rotations, that's your equilibrium state. And of course, nobody says that this average over rotation should actually be a Gaussian. It's not. Of course, you would say when the n is very, very large, maybe it looks like a Gaussian, close, but it's not. Okay? So the equilibrium state here is not a Gaussian. Okay? That's my point. And so then you say, ah, so then I can really compare the two. There's maybe a problem. But still, you can do something. And for that, we use the Gabetta-Toskani-Wenberg metric. So what you do, you take a distribution f and the distribution g, you assume that's the first moment. And now you take the Fourier transform. You look at the difference, its magnitude, and you divide c squared. And then you take the supremum, and that's a nice distance. In fact, it's so nice that when you take an handful of tensor product of f and an handful of tensor product of g, these two things have the same norm, the same distance. It's an easy example. So, the tensilization is known, behaves very nicely. And now you take the solution of your finite reservoir system. That looks awfully complicated. And here's the solution of your thermostatic system. The initial conditions are here. Here is the initial condition for the finite reservoir system. You notice this is the Gaussian. And here's the initial condition of the thermostatic system. And you see this is the same function here and here. Okay? So now you're likely to evolve. In one, everything interacts with everybody. In the other system, the thermostat stays whatever it is. So here's the result. So here is the full system, the reservoir. And here, of course, in order to be able to compare, I stick here the g and beta, which is just, if you like, simulates the thermostat in equilibrium. I mean, you know, these functions should have the same variables. Otherwise, they cannot compare them. It turns out that this distance, the van-web distance between these two states is bounded above by this quantity here. So this is the initial condition. How far away from a Gaussian you are? Forget the constant here is actually the fourth moment of your initial condition. So you can forget it. So these remarkable here are two things. First of all, you get an n divided by n. Secondly, you get it uniformly in time because this is just a constant. And you notice when t goes to zero, this goes to zero, which is nice as it should be. And when t goes to infinity, this is, of course, beautifully bound. So you have these things are uniformly closed. And the fact that you can do this uniformly in time makes it not really. For finite time, it's easy. But uniformly in time, that's the work you have to stick in. The proof is longish. But this is already good news. 
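A numerical sketch of the Gabetta-Toscani-Wennberg distance recalled above, d(f, g) = sup over xi of |fhat(xi) - ghat(xi)| / xi^2, approximated on a finite frequency grid from empirical characteristic functions of two samples with matching first moments. The grid and the exclusion of a neighbourhood of xi = 0 are numerical conveniences, not part of the definition.

```python
import numpy as np

rng = np.random.default_rng(2)

def empirical_cf(samples, xi, chunk=5000):
    # characteristic function estimated from samples (chunked to limit memory)
    out = np.zeros(len(xi), dtype=complex)
    for start in range(0, len(samples), chunk):
        block = samples[start:start + chunk]
        out += np.exp(1j * np.outer(xi, block)).sum(axis=1)
    return out / len(samples)

def gtw_distance(samples_f, samples_g, xi_max=20.0, n_xi=800):
    xi = np.linspace(1e-2, xi_max, n_xi)    # skip a neighbourhood of 0 for stability
    diff = np.abs(empirical_cf(samples_f, xi) - empirical_cf(samples_g, xi))
    return np.max(diff / xi**2)

# two centred laws with the same (zero) first moment, here even the same variance
f = rng.standard_normal(100000)                          # N(0, 1)
g = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), 100000)     # uniform with variance 1
print("d_GTW estimate:", gtw_distance(f, g))
```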
So therefore, you might say, well, let's go back to the thermostatic system. No, I had it here. For the thermostatic system, we got this wonderful exponential decay. So you ask yourself, well, shouldn't you have that also for the one with the finite reservoir? And you see there it comes out in the idea you have the state whose initial condition most of the particles, if you like, are in equilibrium, if you are out of equilibrium, that state should have very good entropy production. Okay? Should have very good decay in entropy. So let's look at that. Until when do I have time? 10 more minutes. 10 more minutes. Okay. All right. So again, let me repeat. For the one with the thermostat, we had this wonderful exponential decay for the entropy, for the relative entropy. And for now, I'm going to choose beta equals 2 pi, because it has the advantage that they don't have to carry around this normalization factor. But this is a little bit easier. So now let's go back to the full system. So here take my initial condition, and then evolve it with the time evolution. And remember, this is the time evolution where the system interacts with itself, the reservoir interacts with itself, and the two interact with each other. Everything is done. So, and then of course you say, look, I'm not really interested in how the reservoir evolves. I'm interested how the system evolves. So what do I do? I integrate over all the variables in the reservoir. So you get this function f of vt. And here now I'm doing something really stupid. I'm looking at the entropy relative to the Gaussian. Now, why is this stupid? Because I should really look at the entropy relative to the equilibrium state. And I told you before, this is not the equilibrium state. The Gaussian is not the equilibrium state. The equilibrium state is not a Gaussian. You see, even when I integrate over w. So this is simply because I cannot do any better. I apologize. Good. So now also, let me also, it's nicer for notational purposes. Let me replace the uniform measure by a general measure, rho theta d theta. And the only condition which we assume is this funny condition here. This has something to do with the computations. It's not a big deal. It makes life simple. So again, let's go back. We had this wonderful estimate here, right? So what can you say about the full system for the entropy decay? And here is the result. The entropy decay, remember what we do? We take the solution of the master equation. We integrate over all the w's. This gives you your f. I stick it in here. This is the s, which you get. And it has a bunch of, if you like, satisfying properties. The mu of rho is this one. And notice, by the way, when you take the rho to be the uniform one, the uniform probability measure, you get mu divided by 2. So you get precisely mu divided by 2. When n, and you notice this factor here is of the order 1. So when n goes to infinity, you see this disappears. This goes towards 1. And you get precisely the result which we had for the thermostat. It fits perfectly well. And another satisfying property is when t equals 0, you get equality. So we haven't lost anything there. So this looks like a reasonable estimate. Moreover, this constant here is according to Ranjini, right? Because we know when n goes to infinity, it's in some sense sharp. You cannot improve that. So this is the good news. Now, it turns out that the proof is, and I would like to go a little bit into that, because I have still about 8 minutes, the proof is not so terrible difficult. 
It has a bunch of twists and so on. Let's present maybe some of the ideas. So one idea which will always use here in this business is this. That you see, I'm going to write the initial condition by factoring out the Gaussian. Now, why is this a good idea? Because you see, the initial condition then is in this form. And remember what the Katz does? The Katz model just acts on via rotations. Now, the Gaussian is of course completely rotation invariant, so this guy goes along for the right, he can pull it out of the evolution. It just isn't there. So then, what you do is this f of v of t instead of this form, where h is given here. And you notice you act just with e to the l t to this h0. Now, remember, the h0 is now only a function of v, but I have to here put both variables because the time evolution generates correlations. So now, in these new variables, it turns out that entropy has this very simple form, and this was your initial condition. So now, what do we do? Well, here's a bunch of things. We normalize things a little bit. Your time evolution can be written in this fashion, where the q is an average over these random rotations, if you like, with some coefficients lambda ij's and these coefficients are one value when you're system collision with system particles or reservoir particles. You know, they have these various values. I just abbreviate them as lambda ij. Okay? Good. And then, you just go on and you do a brute force calculation. So what you do? You say, well, I write down the power series. Here it is. And this q to the power k is an awful thing in a way. Namely, it's the sum over all these lambda alphas. Remember, you have now k, say, k is 5 million. So you have 5 million collisions, right? And you have to take all these products along for the right here. You have to take all these measures. And then you have to apply this to these h0 evaluated at this product of random rotations. So this is a total disaster, you might think. But it's not so bad. So I abbreviate this mess here, this guy here. After integrating over w, that's what I have to do, right? I denote it by this gadget here. It's just a variation. And you see, this alpha and theta, they encode all the collisions. You have to keep track of them. So you might think at this moment this is nuts, right? You shouldn't do something like that, OK? But once in a while one is lucky. So now, you see, because of the, you see, these are convex combinations, right? I mean, the sum of these guys times that is just one. So you can use convexity of the entropy. And you reduce everything to estimating the entropy of this gadget. Remember what this gadget was? It was this h0 evaluated at this product of rotations, multiple and multigaussian, right? That I have to install and integrating over the w. OK. So here we are. So this is our problem. We estimate that. And now, the point here is, remember, the key information which I have to use now is the following. The h0 is not a function of all the variables. It only is a function of the v variables. OK. The rotations, however, act on all the functions. So what you're going to do is, you're going to split this rotational matrix up into blocks. OK. Natural. You see that this gadget is just given by this expression, right? Because you see the function h has only, picks only the top variables. So this is what you get. So this is an m by m matrix. This is an m by n matrix. The other matrices you don't see. And all you have to remember is that this gadget is a rotation. 
So there are some relations, namely, a transpose plus bb transpose is the identity. So you stick it in. You do what is called an orthogonal single-valid composition. You end up with this thing here. It looks like a total mess. It's terrible. OK. But again, it's not so bad. Why? Because you see, when you work this out, what you get to get an expression of this type. And at that moment, everybody should perk up because we all know this is the Orchda-Nulnbeck process. Because these gambles are now all numbers between 0 and 1. And so you would think, aha, so we should have a chance of estimating those. Yeah. I mean, sure, we have very good results on this, right? And this is a, I write down here the Orchda-Nulnbeck just for one variable. And what is the famous theorem? I think it goes back to Nelson, if I'm not wrong. Namely, you take the entropy. You plug in the NaH at what you get to get this estimate. And you notice, I write out the second term. I'm not assuming that the H is a probability distribution. OK. I don't assume that. And that's important. And that's the headache. Why is this a headache? Let's take a simple example. I take two particles, right? AB, two variables. And then what you do is you iterate this, right? So here you first work off the Na. That's what you get here. And then you work off the NB. And you see, you cannot assume now that the marginals are probability measures, right? They just go along for the right. And you see what you get is this complicated expression where here is the entropy of H. Here you have the marginal with respect to the second variable. Here's the margin of the first variable and so on. So now when you go back to this expression here, you can imagine that's going to be a nightmare. OK. Indeed, it is. What am I doing? Yeah. This is what you get. So you have to sum overall subsets of 1 to M, right? You work this all out. Here's what you get. Now that moment you should give up or not. And now it's good that you have students. Right? So we didn't give up. So what did we do? You see, if you look at this gadget here, this looks like something. So here's the theorem. You can estimate this mess by this gadget. And you would wonder how does this come about? Well, here's a short wrap up. Namely, I'm using Bart's version of the brass completely in the quote. So you see, what you have here is in some sense when you go back to this expression here. Look at this expression. Here's your H0. Here's the logarithm of some marginal. Here's some matrix and then you integrate against the Gaussian. There is a wonderful theorem which actually also was pointed out. I mean, this was actually proved by Ehringen Kodere-Rauskin that you can estimate this entropy here in the following question. You need a bunch of Hilbert spaces in Rm. You have linear maps between these Hilbert spaces in Rm. And you have constants ACi so that this Bi transpose B satisfy this relation. Okay? And then you have this estimate. And all you have to do is now you have to go and take these things here, these matrices, take these constants. I mean, in other words, what you do is you make these correspondences. This looks very complicated, but you just do it. That's the reason why you have students, right? They really can handle these kind of things. My edge, you cannot handle this anymore. And so they do this and then you do a computation. And here's the computation. That this mass, which corresponds precisely to this mass here in the brass complete, you can compute this and you get a number. 
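A one-variable numerical illustration of the Ornstein-Uhlenbeck (Mehler) entropy estimate invoked above, in the classical form I believe is being used: Ent(N_a h) <= a^2 Ent(h) for 0 <= a <= 1 and any nonnegative h, not necessarily a probability density.

```python
import numpy as np

# Entropy with respect to the standard Gaussian: Ent(h) = E[h log h] - E[h] log E[h].
# Gauss-Hermite (probabilists') nodes and weights, normalized to a probability measure.
nodes, weights = np.polynomial.hermite_e.hermegauss(80)
weights = weights / weights.sum()

def expect(f):
    return np.sum(weights * f(nodes))

def entropy(f):
    m = expect(f)
    return expect(lambda x: f(x) * np.log(f(x))) - m * np.log(m)

def mehler(f, a):
    # (N_a f)(x) = E_y[ f(a*x + sqrt(1 - a^2)*y) ],  y ~ N(0, 1)
    s = np.sqrt(1.0 - a**2)
    return lambda x: np.array([expect(lambda y: f(a * xi + s * y))
                               for xi in np.atleast_1d(x)])

h = lambda x: 0.1 + np.exp(np.sin(2.0 * x))     # some positive test function
for a in (0.9, 0.5, 0.2):
    print("a =", a, " Ent(N_a h) =", entropy(mehler(h, a)),
          " a^2 Ent(h) =", a**2 * entropy(h))
```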
And here it is, the number. Okay? And what you do is plug this number back into the exponential, and out pops this result. Okay? So this is just a glimpse: what we do is use hypercontractivity, and we use the Brascamp-Lieb inequality; you put the whole thing together, and that gives you the result. And it's clear, okay? Thanks for your attention.
The Kac master equation models the behavior of a large number of randomly colliding particles. Due to its simplicity it allows one, without too much pain, to investigate a number of issues. E.g., Mark Kac, who invented this model in 1956, used it to give a simple derivation of the spatially homogeneous Boltzmann equation. One important issue is the rate of approach to equilibrium, which can be analyzed in various ways, using, e.g., the gap or the entropy. Explicit entropy estimates will be discussed for a Kac type master equation modeling the interaction of a finite system with a large but finite reservoir. This is joint work with Federico Bonetto, Alissa Geisinger and Tobias Ried.
10.5446/13753 (DOI)
This is a wonderful workshop. So we have an experience of fast snow in this area. OK, so today I'd like to talk about the module space of a part of the connection and part of the bundle and the geometry of that. But after I hear the many talks here, I change my mind. So in the first three slides, I'd like to talk about Mandala of related module spaces. So the play it which you have to remember is a four-in-object for such a too many, I think. And I will explain later very explicitly. So just to remember. So the first one important thing is our base curve with distinct point, t1 to t2 tn. And that is the singularity of the hex bundle and also the connections. So that is the power of the case. And also there are extended cotangent line bundles on C which have a port at t1 to tn. That is necessary to deal with the case of the part of the case. So simple port. Simple port, yes. But of course we can extend it to the higher port. And G is the genus of the curve. Any number of points are in the length of the vector bundle. This is the degree of the vector. So you have to remember the numbers. And large m is the very important numbers. R squared G minus 1 plus R minus 1 over 2 times n plus 1, which is the, maybe I think I have a clue. This is the half of the dimension of the module spaces of the hex bundle or the power of the connection. In a simple case. And m-dram is the module space of the parallel connection. Sorry. This is the converse. m-dram is the module space of the parallel hex bundle. m-dram is the module space of the parallel connections. I'm very sorry. That's it. And p is the module space of the parallel bundle. And X is the module space of the generalized monotomic data, which is our sort of character value. And SNC is our cement, any cement product of the C. And if you consider the L, you have alignment of the C. And if you consider the total space, that is two-dimensional spacing. You know, okay? And the total space of L is called this, the volume L. The L is the helipad space of the large endpoints of the total space of L. So which is the two-end dimension. Do you understand? So L is the surface. Are you making surface? So helipad skew. So helipad skew of endpoints, which is known as a smooth, symmetric half, smooth two-end dimensional algorithm varieties. And which are the back to the SNC. Right. So these are the prayers, which you have to remember. Well, but then you have to close that. So please, remember the videos. Okay. So the ratio of prayers, which I can explain like before. So M-dram, this is the moniespace of Higgs bundle. So the Higgs bundle is just paired of the vector bundle of C and the parabolic Higgs field. So if you forget our Higgs field, you have a parabolic bundle. Okay. And M-dram, M-dorgo and M-dram, it's related by non-mabryon for celling. And essentially M-dram is the deformation of M-dorgo is the deformation of M-dram. And of course you have our forgetful map, just forget the connections, parabolic connections and just associate our parabolic bundle here. So you have to take it up here as a motorized stack because the civilization doesn't work. Yes. That is a very good point. So we use stability for these moniespaces to have smooth space. But then on the parabolic bundle may not be stable in the parabolic bundle. So that's a good point. So it may stack or in good cases that causes moniespace. And the X, the character of our idea associated this one. 
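Just to pin down the numerology quoted above (as I read the formula on the slide): with full flags at n simple poles, the moduli spaces have dimension 2M with M = R^2 (g - 1) + 1 + n R (R - 1) / 2; the Painleve VI case g = 0, R = 2, n = 4 then gives a surface.

```python
# Half-dimension M of the moduli space of rank-R, full-flag parabolic
# connections (or Higgs bundles) on a genus-g curve with n simple poles,
# in the form I read from the talk:  M = R^2 (g - 1) + 1 + n R (R - 1) / 2.
# The moduli space itself has dimension 2M.

def half_dim(R, g, n):
    return R * R * (g - 1) + 1 + n * R * (R - 1) // 2

# Painleve VI setting: rank 2 on P^1 with 4 regular singular points
print(half_dim(2, 0, 4))   # -> 1, so the moduli space is a surface
```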
So then you have the natural moduli space of representations corresponding to this, which associates to a connection its flat bundle: locally you have flat sections, and by analytic continuation of the flat sections you get a representation of the fundamental group into GL_n. That is X. In the irregular singular case you can also consider the space of generalized monodromy data, that is the Stokes data together with the usual monodromy and the connection matrices (the links, in our terminology). So we can construct the generalized monodromy space, usually inside an affine space. And these are all of dimension 2N in our setting, while here we have N-dimensional spaces. And this map is a so-called Lagrangian fibration: we can show that these spaces carry nice symplectic structures, and this fibration is a Lagrangian fibration with respect to that symplectic structure. So this is one point. The Riemann-Hilbert correspondence is almost an analytic isomorphism; and, as we can prove, the character variety X may have singularities, so this map can be considered as an analytic resolution of the singularities of the character variety. Also, on the Higgs side we have the Hitchin fibration over the Hitchin base. By the theory of apparent singularities, to such a Higgs bundle we can associate N points on C, the apparent singularities. And by spectral curve theory we have a map to the Hilbert scheme of N points of the total space of L, which gives a birational map between this and this. (What is that map? I will explain it later, I'm sorry.) For M_dR you also have a birational map from here to here, but it is not as natural as this one. So again you have dimension 2N here and dimension N there, and one can prove that this map is also Lagrangian. Okay. So you have many maps like this, and within this mandala you can consider many interesting problems associated with these moduli spaces. The first one is related to the moduli space of parabolic connections: by using the Riemann-Hilbert correspondence you can consider a nice dynamical system, varying the moduli of C and the positions of t_1 to t_n. That gives you the isomonodromic deformation of the linear connection, and that usually gives you differential equations of Painleve type. Also, if you consider the family of de Rham moduli spaces of connections over the parameter space of C and T, the tau functions live on that total space; the parameters in C and T give you the usual tau function. If C is P^1 you just vary the positions of the points, and that gives you a nice theory of the tau function. And we have learned that this is related to some kind of topological string theory, but I don't know much about that. (So the time variables just move on a curve? Yes, on curves.) And the tau function is a section of a line bundle on the total space of the family of moduli spaces. Also, the geometry of the Higgs moduli space, the moduli space of connections, the parabolic bundles and the character variety is interesting: for example, the mixed Hodge structure of the character variety X is interesting, and the so-called Simpson conjecture and the P = W conjecture are interesting. And we also have two Lagrangian fibrations, M_dR to the moduli of parabolic bundles P and M_dR to these spaces; so there are two Lagrangian fibrations.
In some cases we can check that these two Lagrangian fibrations are transversal to each other. Originally I believed that they are the same, but that's not true, they're completely different, and that carries some information. A special case of this geometry, and a particular relation to the Hitchin and Painleve cases, appears in work of others; that should be interesting, but it is not yet our work, and I'd like to work on this. And then maybe geometric Langlands: the geometric Langlands conjecture says that you have the derived category of coherent sheaves on M_dR, and that should correspond to the category of D-modules on the stack of parabolic bundles. In a special case Arinkin established this by using the Fourier-Mukai transform. Of course many people work on geometric Langlands, and I believe that in some more special cases we can expect that a Fourier-Mukai transform gives you the geometric Langlands correspondence; if time allows, I will explain about this. So these are the members of the mandala and their relations, and there are many interesting examples and problems around them. In this lecture I will talk about my old joint work with van der Put, which gives the ten character varieties associated to the Painleve equations, and one case, the one corresponding to Painleve III(D8), was wrong. I will correct the result. Also, Szabo checked the P = W conjecture by using our list; I will explain about it. Actually, Szabo told me that in this case P = W seemed not to hold, so I checked our paper: oh, this is a mistake, such a mistake. OK, let me start the main part. So C is a non-singular projective curve of genus g, any genus, and T is a set of n distinct points on C; this is the divisor associated with that. M_{g,n} is the moduli space of curves of genus g with n ordered points, which is known to be a quasi-projective variety. A lambda-connection, for a fixed lambda in C, can be defined as follows: a pair (E, nabla) is called a lambda-connection if E is an algebraic vector bundle on C of rank R and degree d, and nabla is a map from E to E tensor L (that is this L, so a logarithmic lambda-connection) which satisfies the lambda-twisted Leibniz rule, as in the lecture of Mochizuki. The degree of L is 2g - 2 + n, and for some technical reason I will always assume that n is greater than or equal to 1, so I avoid the case with no singular points.
So in the connection case, and for the Higgs case, you have the whole sum itself. So then the space of the local exponent of lambda connection can be written like this... N times R numbered with this D condition. And for the Higgs case, you take lambda for zero or something. And then our local exponent is... there is a notion of the generacy of our local exponent, such as... I would define that fn new is not generic. So new is called resonant equal to some i and j of j2, or this is lambda times integers. And new is called this way, if there is a subset of the new prime in new, whose sum is like this. So this means that if you have some sub-connection, so if you have E nabra, so then if you have sub-connection, f, so that f-rescript, nabra, also satisfies our Hux relation about that. And this is the possibility for that data. So if new prime is very... if new is very special, there are no relations like this, so that means that you don't... for such a new enabra has no sub-connection, and no Hux field for lambda. Okay, so then in good... and it is very easy for us to assume that new is generally... then you don't have to consider about our stability or something. And you can... the work is very... quite easy, but we also work that. So let's... in order to... basically we'd like to construct the good modular space of E nabra, but then for all new... okay. In order to do that, just considering the E nabra is not enough, so we have to add some extra geometric data, which is called a parallel structure, which is just our filtration at each fiber, at the singularities and the full flag, and which is related to the eigenvalues, residues and eigenvalues like that. Essentially, each quotient gives you the... so you have a residue matrix, and you have eigenvalues. So... and essentially each quotient gives you our eigenspecies for the... eigenvector for it. And if all eigenvalues are distinct, so this filtration uniquely depends as you run in linear algebra. But if... let us assume newji is all zero. Okay. Also, even if the residue is zero, okay, in that case you are sure. So then this... this condition is nothing. So just consider all the flag on the fiber. So this means that if the residue matrix is zero, the ordinary... the... the modern space is very bad at that point. But to make a resolution of singularities, you need this kind of information. That is the ideal. So the dimension matches if you consider the full flag. Yeah. Put it in there. That is the solution. Yeah. Okay. But you... but it's not so enough. So... so you have... even if you add these additional data, there are also some bad guys there. So then the idea of the geometry in the Manchurian, the Manchurian, just consider the good guys and take a quotient. Okay. That is a very clear idea. And how to determine the good guys and bad guys. So that is the idea of the stability. And in this case, we have to introduce some rational number of this kind of this. That is called weight and some genetic condition for the weight. And then our... we can... we consider the... this kind of... modified data. New... new invariance, something like this. So if you have... you don't have the perfect structure, you can consider just a degree E over the... that is a good criteria. But we need some contribution from the perfect structure. So just consider the... proposes of band F, which is stabilized by nabra. And you take... so then even if you have such things, this makes the power connection is alpha stable. If this... in quality satisfies. 
What's the length? So length is... length is determined. You have... you have our... flux at each point. So then our F is a sub bundle, so then you take our fibers there and just an intersection with the flag. That gives the length. So if every very... fit length of the flag, that is one. But if F is not fitting the flag, then this is zero. So zero. And this length is always one. Okay? So... So in general, this is always one. So this is alpha ji. So this is zero or one. And so the contribution of this appears in whether the length is one. But... Okay, anyway. This is the correct connection. So by using this stability condition, we can define our... Mojai space is of alpha stable parabolic connection. Just consider the alpha stable parabolic... new parabolic connection like this. And take the isomorphic curves. That is... give you the... Mojai spaces. And also you can consider our... alpha stable Higgs bundles in the same... So... So what we proved in 2006, and also there are enough by the 2013. So... MGN, till there is a little bit of a covering... a dark covering of MGN, where you have a universal family of the CNT. And you have the space of the new here. Yes. So... So over that space, we have a family of the Mojai spaces. Of the alpha stable... rank R... degree D, N eigenparade, N point singularity. And this gives you the relative fine Mojai scheme. And also this monochrome is smooth and case-side project. So if you take one point in here, and one point here, so CNT, if you fix CT and new, so you can take the fiber of that. And... the same mean that this fiber is the Mojai space of the alpha stable new part of the bundle. And it's pretty smooth with case-side project. And most cases it is smooth, so you can see the azure variety of dimension. This one. I see. Corollary follows from the theorem. You have to put it in the theorem. Yes. So... So this is the ideal... ideal theorem that you have a fine Mojai scheme over the relatively... over the bases. Interesting. Yes. So... So now you have the nice family of the Mojai spaces. And also that we can prove that the MR for CT, new R and D, are the natural algebraic simplex stuff. So this is to even dimension. But you can prove that on here, this is there with our algebraic simplex stuff. Where does it come from? Oh, there is our deformation theory. We can use the deformation theory and some more spectroscopic. I see. But is it like a quantum model again? Not really. Not really. And actually, we... to prove that there are simplex stuff, but we are using our... deformation theory. But then... in the Higgs case, you may have some other result, I think. Right. But anyway, in both cases, we can use our deformation theory. And also that our... since this is not... projective, not compact, so even if you have a simplex structure, it may not be decrowded. So decrowdedness should be proved different independently. And you never prove that decrowdness of that. And also that in the Higgs case, I think that this is a result of the boarding and the Yoko Kawa. You also have this one. You have... moving spaces for the Parabek X bundle, and the pretty smooth case of project algebra scheme. Also, of course, alpha should be taken very... generic. And if you vary the alpha, the moving space may change and may have similarities. And that is our phenomenon of the world-closing something like this. And also these are the natural synthesis. Okay, let me give you the simplest... non-trivial simplest example. Okay, so let's consider the P1, G equals 0, and rank is 2. 
N equals 4, and D equals, not 0, D equals minus 1. This is very important; D equals minus 1 is very important. And take a generic nu. (Also, half the lengths? Yes.) The four points can be normalized to 0, 1, t and infinity, and nu can be normalized as plus-minus nu_1, plus-minus nu_2, plus-minus nu_3, and nu_4, 1 minus nu_4. So for the connection you have two eigenvalues at each point: at 0, 1 and t you have eigenvalues plus-minus nu_1, plus-minus nu_2, plus-minus nu_3, but because D equals minus 1 the total sum has to be 1, so at the point at infinity you should take nu_4 and 1 minus nu_4. Okay. So then the moduli space M_{t,nu} is an algebraic surface, because the dimension can be calculated. And M_{t,nu} has a nice compactification S_{t,nu}, which is an eight-point blow-up of the Hirzebruch surface of degree 2, the projectivization of O plus O(-2). So you have the Hirzebruch surface F_2; you have the section at infinity, with self-intersection number minus 2, and the positive section, the zero section, with self-intersection plus 2. Then you have the four fibers over 0, 1, t and infinity, and you blow up the two points on each fiber corresponding to these eigenvalues; not exactly nu_1 and minus nu_1, there are some overall constants, but essentially something like this. And at the fiber over infinity it is nu_1 and 1 minus nu_1, and nu_4 and 1 minus nu_4, so this is a little bit strange. Then you can see that the anti-canonical divisor of this eight-point blow-up should be two times the infinity section plus the proper transforms of the four fibers Y_1, Y_2, Y_3, Y_4; the anti-canonical divisor. If you delete this anti-canonical divisor, you get a symplectic algebraic surface, and that is nothing but the moduli space of parabolic connections in this case. And this corresponds, for the Painleve VI equation, to Okamoto's space of initial values. Now let me consider the parabolic Higgs bundles in this case; it is the same. The only different condition is on the local exponents, because their total sum is zero, so you arrange the eigenvalues like this. Otherwise it's the same: you blow up eight points and delete the anti-canonical divisor, and you get the moduli space of parabolic Higgs bundles in this case. Then what is the difference between the two? By lambda-connections, this space and this space can be deformed to each other. What's the difference? Just this. In the Higgs case you have the Hitchin fibration: because the dimension is two, the spectral curve and its Jacobian are the same, an elliptic curve of genus one. So in this case the Hitchin fibration is nothing but the family of elliptic curves corresponding to anti-canonical curves passing through these eight points. Because of these eigenvalues, there is a one-parameter family of elliptic curves passing through the eight points. And the divisor two-times-the-section plus these proper transforms is the degenerate fiber, which has to be considered as the fiber at infinity over the Hitchin base. Okay. So then, how about this side? By using the Riemann-Hilbert correspondence, we can see that this complement is analytically isomorphic to a certain affine surface. What does this mean? It means there are no compact curves on it: on an affine surface you don't have any compact subvariety other than points, so if it is analytically isomorphic to an affine surface, no compact curves exist in this space.
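The intersection-theoretic bookkeeping behind the anti-canonical divisor just described can be checked mechanically; here is a small sketch, with s the (-2) section of F_2, f the fibre class and E_1, ..., E_8 the exceptional curves (notation mine). The final comment about the affine D4 configuration is the expected match with Sakai's classification for Painleve VI, stated as an interpretation of the printed intersection numbers.

```python
import numpy as np

# Intersection form on the blow-up of the Hirzebruch surface F_2 at the eight
# points described above, in the basis (s, f, E_1, ..., E_8): s is the -2
# section ("section at infinity"), f the fibre class, E_i the exceptional
# curves (two on each of the four fibres over 0, 1, t, infinity).
n = 10
Q = np.zeros((n, n), dtype=int)
Q[0, 0] = -2            # s . s = -2 on F_2
Q[0, 1] = Q[1, 0] = 1   # s . f = 1
Q[1, 1] = 0             # f . f = 0
for i in range(2, 10):
    Q[i, i] = -1        # E_i . E_i = -1; E_i disjoint from s, f and each other

def dot(a, b):
    return a @ Q @ b

basis = np.eye(n, dtype=int)
s, f = basis[0], basis[1]
E = [basis[i] for i in range(2, 10)]

K = -2 * s - 4 * f + sum(E)          # canonical class of the blow-up
print("K^2 =", dot(K, K))            # drops from 8 on F_2 to 0 after eight blow-ups

# Components of the anti-canonical divisor described above:
# 2*s + (f - E1 - E2) + (f - E3 - E4) + (f - E5 - E6) + (f - E7 - E8)
comps = [s] + [f - E[2 * i] - E[2 * i + 1] for i in range(4)]
print("anti-canonical check:",
      np.array_equal(-K, 2 * comps[0] + sum(comps[1:])))
print("self-intersections:", [dot(c, c) for c in comps])        # all -2
print("section meets each fibre component once:",
      [dot(comps[0], c) for c in comps[1:]])
print("fibre components mutually disjoint:",
      all(dot(comps[i], comps[j]) == 0
          for i in range(1, 5) for j in range(i + 1, 5)))
# a central -2 curve of multiplicity 2 meeting four disjoint -2 curves:
# the affine D4 configuration expected for Painleve VI
```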
Actually, we have electric curve passing through seven points, but eight points must be minus new to new four. So that means that only the electric curve passing through these eight points is just a degenerate one. And there are no other ones. So that is a big difference. And also you can prove that in the complement, there are no non-constant algebraic functions on this complement. And on the other hand, you have many algebraic functions pulling back the basis. Okay. So these are the typical examples of the motor species. And we'd like to have this kind of a very clear description in our other cases. And then in the dimension two cases, we cannot expect our regular singular case. So the simple port case, we don't expect. So in order to obtain the motor species in the two-dimensional, we should have the irregular singular length two cases. Okay. But before that, then the next case is a four-dimensional and six-dimensional, eight-dimensional. So in general, the modular space of the connection has become very high, higher. And only the part of six cases, that case is the only example which comes from the simple port length two vector boundary case. Okay. So now, in general, I will explain about the Lima-Huberth correspondence. So we have a nice family of the modular space of the connection here, which is parameterized over the FGN. But let's take our... Okay. Now, the family of the... What is this space? Parameterized this one. And also, we have the family of the representation of Pi-1 here over the bases of the FGN. And also, there are characteristics for NOMIAR of the local exponent. That's one. So then, in 2006 and also in about 2013, we have the... We prove that... We have the Lima-Huberth correspondence. And at the EG, this ratio of the fiber is our... Analyzed... Proper, subjective, bimolemorphic analytic mode. And if there are these... Character varieties smooth, this gives you the analytic isomorphism between this modular space and this is... And... But if this is singular, and this gives you the analytic... Because this is smooth, we have an analytic resolution of singularity. Okay. So then, the Lima-Huberth... This type of Lima-Huberth correspondence gives you the nice picture of the isomorphic deformation of the linear equation, linear connection. And this is... It is easier to explain this. So here you have a family of... Let's fix nu. Okay. And the a is corresponding to nu. Let's fix nu. So you have a family of the... Carved and... Carved and points. You have a parameter space. Over that, you have a family of the... Modular space of the connections. Okay. And here you have a... Modular space of the... Representation. But you see that even if you are... There is a compressible... Carved and points, the fiber is locally trivial. And if you go to the... If you go to the universal coupling of the TN, this is essentially the product. Okay. And you have a Lima-Huberth corresponding at each fiber. At each fiber, you have an analytic isomorphism. If this is smooth. So then you have a constant section here over the bases. So that means that the monodormies are constant. And then you pull back the... This constant section by Lima-Huberth's correspondence to here. So you have a nice... Foriation here. So then... It is 100% clear that these fourations satisfy the so-called geometric function before the property. You see. So there are no ramifications of points and there are no essential solutions. And just the solutions are just... In this space. 
And the coordinate changes are just rational functions, so the singularities of the solutions can only be poles. That is the Painleve property. From this picture, the pullback of this foliation, which is called the isomonodromic differential equation of the linear connection, gives you a nice dynamical system, and in the dimension-two case, the Painleve VI case, it is nothing but the Painleve VI equation. [Question:] So this is true on the universal cover of the time variables, right? You have to pull it back; it is a universal covering of the moduli of pointed curves. [Answer:] Yes, in order to make the argument simple we can take the universal cover, but if you consider the analytic continuation of the flow, then it is fine. Okay. But we have more nice geometric information. In some cases the character variety may have singularities, because we are taking a categorical quotient. In that case the Riemann-Hilbert map gives you a nice resolution of the singularity: if you have an A1 singularity in the surface case, the resolution is given by one rational curve, and then you can restrict the isomonodromic differential equation to differential equations along this exceptional P1. What is the differential equation on the P1? It is nothing but the Riccati equation. This is known from the old theory: sometimes the Painleve equation can be reduced to a Riccati equation, and there are classifications of such Riccati solutions. But with our theory we can classify all Riccati solutions through the classification of the singularities of the character variety, like this. This kind of result has been generalized to the case of unramified irregular singularities by Inaba and Saito, and to logarithmic connections, that is order-one poles, with fixed spectral type. That means that if you allow multiplicities of the eigenvalues and fix the spectral type, then the dimension of the moduli space can go down, and this case can also be proved; I think the generic unramified irregular singular case is okay. Now let me review the character variety a little, because I should correct a result of mine with van der Put. In the Painleve VI case, which means representations of pi_1 of P1 minus 4 points into SL2(C), you take invariants like the traces a_i = tr(M_i) for i = 1, ..., 4, and, for (i, j, k) a cyclic permutation of (1, 2, 3), you can take x_i = tr(M_j M_k). So in total you have seven parameters. Okay. These invariants generate the invariant ring of the space, but there is one relation, which is given by Fricke-Klein and by Jimbo and Iwasaki; actually it already appears in the old book of Fricke and Klein. It is given like this: the typical term x1 x2 x3, then quadratic terms, and so on, and the coefficients are given by four parameters built from the a's, like this. So you have seven parameters, four parameters of coefficients, and one relation. This means that this space X has dimension six, and it is fibered over the (a1, a2, a3, a4)-space. Okay. The a's give you the conjugacy classes of the local exponents, so we fix them; then we have a family, and the fiber is the character variety X_a.
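For reference, the cubic relation alluded to here is usually written as follows. I am quoting the common Fricke-Jimbo normalization from memory, and index conventions vary between references, so treat the exact coefficients as an assumption rather than as the slide's formula.

```latex
x_1 x_2 x_3 + x_1^{2} + x_2^{2} + x_3^{2}
  - \theta_1 x_1 - \theta_2 x_2 - \theta_3 x_3 + \theta_4 = 0,
\qquad
\theta_i = a_i a_4 + a_j a_k \ \ (\{i,j,k\}=\{1,2,3\}),
\qquad
\theta_4 = a_1 a_2 a_3 a_4 + a_1^{2}+a_2^{2}+a_3^{2}+a_4^{2} - 4 .
```

For fixed (a_1, ..., a_4) this is a cubic surface in (x_1, x_2, x_3), which is the two-dimensional fiber X_a just described.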
And our moduli space M_t(nu), which is given by the eight-point blow-up of F2, provides the analytic resolution of the singularities in this case. And you see, over the four-dimensional base we have the loci where a_i equals plus or minus 2, which means the local eigenvalue is plus or minus 1; that is not a good case. And also there is a discriminant locus, which gives you the reducible connections. Okay. So over this four-dimensional space you have a family of cubic surfaces, which are the character varieties, and each fiber is analytically isomorphic to the moduli space of connections if it is smooth; if it is singular, you have a minimal resolution of the singularities, with the special Riccati connections sitting over the singular points. You see? Okay. Also, a_i is something like 2 cos(2 pi nu_i), so the nu-space gives an infinite cover of the a-space. So it is something like an infinite simultaneous resolution of the singularities, a simultaneous resolution of this family. Okay. So next, let us consider the other Painleve cases. Originally, people believed that there are only six Painleve equations, but as a result of Sakai's classification we have eight types. And then van der Put and I, building on the formal result of Jimbo-Miwa, showed that these correspond to isomonodromic deformations of rank-two connections on P1, with singularities at 0, 1, t and infinity of the types in this table: type 0 means a regular singularity, type 1 means Poincare rank one, an order-two pole, and 1/2 means a ramified irregular singularity. So in the Painleve VI case the character variety is a cubic, and in all the other cases, by using the Stokes data and the links, we can define the wild character variety very explicitly, and the result looks like this. But this case is wrong; I should say that. I already discussed this case: P3(D8) is the case with ramified irregular singular points at 0 and infinity, so we have formal solutions there, and then we have a link given by the connection matrix; that connection matrix, together with the Stokes data, gives you the information. So our calculation gives this kind of equation, but it is wrong. Firstly, by a silly mistake, this minus should be a plus. But also we forgot that the choice of the local frame can be changed by a sign, plus or minus one, and that should be included. In that case, if you look at this equation, it is unchanged if you allow the involution sending (x1, x2) to (-x1, -x2). Okay? So then you have to take the quotient of this equation by that involution. That is a nice exercise: what is the result? The result is like this, and this is the corrected equation. And then what is this good for? Szilard Szabo in Budapest has been checking the P = W conjecture. The P = W conjecture essentially says that this wild character variety has a mixed Hodge structure with a weight filtration, and that the weight filtration is equal to the perverse filtration of the Hitchin system associated to it. In the case of P3(D8) the corresponding perverse filtration should be trivial, so the weight filtration should be trivial as well. But with the uncorrected equation we had a rank-two piece; after taking the quotient, we actually get zero. So that is fine.
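The corrected surface itself is on the slides rather than in the transcript, so purely as a generic illustration, with a made-up cubic, here is how such a quotient by the sign change on x_1, x_2 is carried out.

```latex
% Suppose S = \{\, x_1 x_2 x_3 + x_1^2 + x_2^2 + c = 0 \,\} is preserved by
% \iota : (x_1, x_2, x_3) \mapsto (-x_1, -x_2, x_3).
% Invariants of \iota:  u = x_1^2,\quad v = x_2^2,\quad w = x_1 x_2,\quad x_3,
% subject to the single relation  u v = w^2.  Rewriting the equation of S:
S/\iota \;=\; \{\; w\,x_3 + u + v + c = 0,\ \ u v = w^{2} \;\}\subset \mathbb{C}^4_{u,v,w,x_3}.
```

The actual P3(D8) computation follows the same pattern: rewrite the (sign-invariant) equation in the invariants and adjoin the relation among them.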
So the P = W conjecture is also checked in this case, and Szabo now has a nice paper about that; my contribution to it is just this correction. Okay. I have no time to discuss apparent singularities in detail, but this is really nice: the theory of apparent singularities gives you a nice coordinate system on the moduli space of connections, and it is closely related to the spectral curves appearing on the Higgs side. In good cases the de Rham case can be reduced to the Higgs case. The original idea is just a case like this. If you have a parabolic Higgs bundle E, and we take the degree of E to be r(g - 1) + 1, then we can take a so-called oper-type bundle F, a direct sum of line bundles O plus L^{-1} up to L^{-(r-1)}. Okay. So this is a rank-r vector bundle which is a direct sum of line bundles, and it maps to E; the quotient is a skyscraper sheaf on C whose support is q1, ..., qN, and the number N is nothing but half of the dimension of the moduli space of Higgs bundles. That gives you half of the coordinates. Then you consider the spectral curve, and there is the theory of Beauville-Narasimhan-Ramanan, which goes like this. The parabolic Higgs bundle is given by the spectral curve C_s together with a divisor D (a line bundle) on C_s, such that the pushforward of O(D) is E and O_{C_s} maps to the pullback of F, something like this; this is an equivalence of geometric objects. The divisor then gives you N points on the spectral curve lying over q1, ..., qN, and the fiber coordinates are nothing but p1, ..., pN, which are essentially the dual parameters to the q's. If you take such N points generically, you can prove that there is only one spectral curve passing through them, so by using Beauville-Narasimhan-Ramanan you can recover the Higgs bundle from this data. This proves that the moduli space is birationally equivalent to the total space of this bundle; but the map is not surjective, and the image is just the complement of an anti-canonical divisor, like before. So essentially you have a nice symplectic structure here and also here, and this is in some sense a symplectic birational map. Okay, that's it. There is also the moduli space of the parabolic bundles, which I would like to explain, but I'm sorry, I have no time to explain it, so I will skip it. Thank you very much.
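To make the "dual parameters" slightly more concrete, here is the standard rank-two version of the spectral-curve picture. This is my own summary of the usual construction, not the speaker's formulas, and the symplectic statement should be read up to normalization and on a dense open set.

```latex
C_s \;=\; \Big\{ (x,y)\ :\ \det\!\big(y\,\mathrm{id} - \Phi(x)\big)
      \;=\; y^{2} - \operatorname{tr}\Phi(x)\, y + \det\Phi(x) \;=\; 0 \Big\}
      \;\subset\; \operatorname{Tot}\big(K_C(D)\big),
```

the Beauville-Narasimhan-Ramanan divisor consists of N points of C_s lying over q_1, ..., q_N, and their fiber coordinates p_i = y(q_i), together with the q_i, give Darboux-type coordinates with symplectic form omega = sum_i dp_i wedge dq_i on a dense open subset of the moduli space.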
Moduli spaces of stable parabolic connections on curves are very interesting objects which are related to different areas of mathematics such as algebraic geometry, integrable systems, mathematical physics and the Geometric Langlands conjecture. In this lecture, we will explain the explicit geometry of the moduli spaces of stable parabolic connections on curves introduced and constructed by Inaba, Iwasaki and Saito, and by Inaba. Then we will review a work of Arinkin and Lysenko on rank 2 connections on the projective line with 4 singular points, which is related to the Geometric Langlands conjecture in this case. We then explain the joint work with Loray and Simpson on the moduli space of rank 2 parabolic bundles on the projective line. If time permits, related works on the Geometric Langlands conjecture in these cases may be discussed.
10.5446/59118 (DOI)
afternoon's session: Avary Kolasinski from the University of Kansas. Hi, thank you for coming and thank you so much for inviting me here. It's my first time actually out of the country and it's been a very pleasant experience so far, even with the cold weather, so I really appreciate it. Today I'm going to introduce you to my current research: a surface moving mesh method that's based on equidistribution and alignment. This has been joint work with my advisor, Dr. Huang. As this group especially knows, moving mesh methods are very important; however, most of the literature out there is for bulk meshes. We have heard some talks on surface meshes on the sphere and whatnot over the past two days, but this one has a little bit of a different flavor. I'll talk about some of the properties of the method later, but a nice thing about it is that you can use it on any general surface; it's not specific to spheres or anything like that. So let's start. We assume that we have a surface, and our goal is to improve the quality of a mesh on it using moving mesh methods. Pictorially, we can see our initial mesh in figure one: it's non-uniform, as you can see, and very skewed, and when we apply our moving mesh method we hope that it ends up uniform, in the sense that the areas of the elements are the same and the alignment is consistent throughout. We do this with respect to the equidistribution and alignment conditions, which we use as our quality measures. The general idea of mesh adaptation, to remind you in case you forgot, is to formulate a meshing functional that encodes the properties we would like the mesh to satisfy, and then minimize that functional to obtain an affine (linear) coordinate transformation from a reference element to each physical element, so that the adapted mesh satisfies the properties we built into the functional. Since we have gone through a lot of different mesh methods and a lot of different notation, let's fix some notation for the next 30 minutes. We let S denote our surface in R^d, where d is greater than or equal to 2, and we assume we already have a mesh on it; we're not generating anything, the mesh is already there, whether uniform or not. We denote the surface simplexes by K; these are (d-1)-dimensional simplexes in a d-dimensional space, so you can think of them this way: take a piece of paper, cut out a triangle, and place it in three dimensions, or a line segment in two-dimensional space, but it's going to be a surface element. We have the vertices of K denoted by x_j and its edge matrix denoted by E_K, and we let N be the total number of elements and N_v the total number of vertices. We also assume that we have a reference element K-hat, which lives in R^{d-1}, and we assume it is equilateral and unitary. The vertices of the reference element are denoted by xi and its edge matrix by E-hat. Something important to notice, the big difference between the bulk mesh case and the surface mesh case, is actually this edge matrix: in the bulk mesh case it was a square matrix.
In the surface mesh case it's not square, which makes the simplifications a little bit harder and some of the analysis a little bit harder to deal with, but we can still do very similar things to what we did in the bulk mesh case. So we need an affine mapping, denoted S_K, going from the reference element to the physical element. With the affine mapping properties we can rewrite this transformation as equation (1), where S_K' is the Jacobian matrix. Writing it in terms of the nodes we get equation (2), and finally we can write it in terms of the edge matrices, where I do want to note that this E-hat inverse exists because the reference element is not degenerate. So we have this as the Jacobian representation of our coordinate transformation, and, yet again, the big difference is that it is not a square matrix. In order to formulate the equidistribution and alignment conditions we need the volume, or the area, of the physical element, and then we can set up the formulation just as we did in the bulk mesh case. It can be proven, equation (4), that the area of a physical element is given in terms of the Jacobian and the area of the reference element. Now that we have this, we can formulate the equidistribution and alignment conditions for surfaces. Remember, our whole goal is to make the mesh uniform, and there are two conditions that completely characterize a uniform mesh: all of the elements have the same size, and all of them are similar to a reference element. We take care of the first property, all elements having the same size, first. As we have seen in many different forms this week, equidistribution can be written as equation (5); using our formula for the area of the physical element, we can rewrite that condition as equation (6), which we consider as the equidistribution condition. We have seen very similar things in the bulk mesh setting; however, yet again, we cannot simplify this much further. In the bulk mesh case there is a lot of simplification going on because we have square matrices; in this case we cannot simplify very much more. For the alignment condition, we need all of our elements to be similar to a reference element. We know that two elements are similar if and only if the affine mapping between them is composed of a dilation, a rotation, and a translation, which just says that the Jacobian matrix accounts for the dilation and the rotation. So we can rewrite the Jacobian of the affine mapping as equation (7), where U and V are orthogonal matrices which, together with the diagonal factor, take care of the dilation and rotation of the similarity. Using a lot of algebra with traces and determinants, we can rewrite equation (7) as equation (8), which we consider as the alignment condition. Like I said, this is very similar to the bulk mesh case: in the bulk mesh we have a d instead of a d - 1, and we can simplify the matrices on the inside because they are square. In this case we can't, and we have that d - 1 appearing.
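As a concrete illustration of the quantities just described, here is a minimal sketch of my own (not the speaker's code; the triangle coordinates are made up) showing the non-square Jacobian S_K' = E_K * E-hat^{-1} and the surface-element area |K| = |K-hat| * sqrt(det(S_K'^T S_K')) for one triangle in R^3:

```python
import numpy as np

# One physical surface triangle K in R^3 (vertex coordinates are made up).
x0, x1, x2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.2]), np.array([0.3, 0.9, 0.1])
E_K = np.column_stack([x1 - x0, x2 - x0])            # 3x2 edge matrix of K

# Reference element: equilateral triangle in R^2 with unit area.
s = 2.0 / 3.0**0.25                                   # side length giving area 1
xi0, xi1 = np.array([0.0, 0.0]), np.array([s, 0.0])
xi2 = np.array([s / 2.0, s * np.sqrt(3.0) / 2.0])
E_hat = np.column_stack([xi1 - xi0, xi2 - xi0])      # 2x2 edge matrix of K_hat

J = E_K @ np.linalg.inv(E_hat)                       # Jacobian S_K': 3x2, NOT square
area_ref = 0.5 * abs(np.linalg.det(E_hat))           # |K_hat| (equals 1 by construction)
area_K = area_ref * np.sqrt(np.linalg.det(J.T @ J))  # |K| = |K_hat| sqrt(det(J^T J))
print(J.shape, area_K)
```

Because J maps R^{d-1} into R^d, the usual determinant is replaced by sqrt(det(J^T J)), which is exactly where the "not square" remark matters.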
However, we know that we want to incorporate other metric tensors into the mesh adaptation, especially since any non-uniform mesh can be considered uniform in some metric tensor. So we take everything we've done so far and incorporate a metric tensor into the analysis. We consider a symmetric, uniformly positive definite metric tensor M, and recall that distance in the Riemannian metric can actually be written in terms of the Euclidean metric. This implies that the geometric properties of K in the metric M can be obtained from those of M^{1/2}(x) K in the Euclidean metric, so we can relate the two and carry out a very similar analysis to the one that gave the equidistribution and alignment conditions, based solely on this property. In order to formulate equidistribution and alignment with respect to a metric tensor, as before we need an equation for the area of the physical element: equation (10) gives the area of the physical element with respect to the metric tensor, and it can be proven by a direct application of our first area equation. With this, you can run through exactly the same process and end up with the equidistribution and alignment conditions with respect to the metric tensor. Moving on, we can take the equidistribution and alignment conditions and reformulate them into two energy functionals, as you have probably seen before. Taking the equidistribution condition, using Hölder's inequality and applying some constants, we end up with the equidistribution functional given in (13); minimizing this functional results in a mesh that satisfies the equidistribution condition. Similarly, we can use the arithmetic-geometric mean inequality to come up with the alignment energy functional, and minimizing it means the mesh satisfies the alignment condition. Taking a theta-average of the two, multiplying the equidistribution term by theta and the alignment term by 1 - theta, we can combine them into a single energy functional that balances the equidistribution condition and the alignment condition on surfaces. If you have seen the bulk mesh case, this is very similar; we just need to be very careful with the dimensions of the matrices. As we know, our whole goal is to minimize this functional; however, due to the high nonlinearity, we apply the moving mesh PDE method that you have seen a lot so far. The moving mesh PDE method is defined as the modified gradient system given in (16), where P_i is a positive scalar used to give the equation some invariance properties, tau is a constant parameter used to adjust the time scale, omega_i is the patch of elements that have x_i as a vertex, and i_K and v_i^K are the local index and velocity of the node. It should be noted that an analytical formula for this velocity is available and is different in the surface mesh case; there are a lot of details I didn't have time to put into a 30-minute presentation, but if you want to see the analytical form of the velocities, you're more than welcome to talk to me after. So we have this MMPDE; however, when we're working on a surface, it is very important that when we move nodes they stay on the surface.
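Before turning to how the nodes are kept on the surface, here is a small hedged extension of the earlier sketch showing how a scalar, curvature-based metric (the kind mentioned later in the Q&A) enters the element area. The formula |K|_M = |K-hat| * sqrt(det(J^T M_K J)) and the value of m_K are my own stand-ins for the slides' equation (10) and the actual monitor function.

```python
import numpy as np

def area_in_metric(E_K, E_hat, M_K):
    """Area of a surface simplex K in the metric M_K (a d x d SPD matrix).

    Uses |K|_M = |K_hat| * sqrt(det(J^T M_K J)) with J = E_K E_hat^{-1};
    with M_K = I this reduces to the Euclidean area formula from before.
    """
    J = E_K @ np.linalg.inv(E_hat)
    area_ref = 0.5 * abs(np.linalg.det(E_hat))       # triangle reference element
    return area_ref * np.sqrt(np.linalg.det(J.T @ M_K @ J))

# Tiny example: a scalar curvature-based metric M_K = m_K * I (m_K is a stand-in value).
E_K = np.column_stack([[1.0, 0.0, 0.2], [0.3, 0.9, 0.1]])   # 3x2 edges of one triangle
E_hat = np.array([[1.0, 0.5], [0.0, np.sqrt(3.0) / 2.0]])   # reference triangle edges
m_K = 1.0 + 2.5                                              # e.g. 1 + |averaged mean curvature|
print(area_in_metric(E_K, E_hat, m_K * np.eye(3)))
```

With a scalar metric the effect is simply to weight element areas, which is what pulls nodes toward high-curvature regions when the functional is minimized.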
Okay, keeping the nodes on the surface is a very important property when you're working on surfaces. We also need to prove some theoretical properties of the functional. The first property we want is that the mesh is nonsingular: no overlapping, no crossing, so we keep a good mesh without any tangling happening. In this paper it is stated that if the functional is coercive and the volumes of the initial elements are positive, they remain positive for all time. The proof in that paper depends on the energy-decreasing property of the MMPDE. Since we're working on surfaces, that property is not automatically guaranteed, so we need to make sure it holds in order to ensure the nonsingularity of the mesh. We consider the functional just in the two-dimensional case; everything I do here will be in 2D, but it can all be extended to 3D, and I'm happy to share the details if you want to see them. We assume we have the functional in two dimensions and, just for concreteness, that y is a function of x (we can assume the other way around and everything works out just as nicely). With this, we apply the chain rule to get the derivative of the functional with respect to t, and rewrite the MMPDE as shown here. It's a little bit different, but when we look at the derivative with respect to time, we can factor out this derivative and rewrite it as this sum. Clearly this is positive, so with the negative sign in front we have the energy-decreasing property on the surface as well. With that, and the fact that the functional is coercive, we actually get the mesh nonsingularity property, which is definitely important when dealing with meshes on surfaces to make sure we're not tangling. It should be noted, though, that although this method satisfies the energy-decreasing property, unlike the bulk mesh methods we are not using the steepest-descent direction, because we need to ensure that the nodes stay on the surface. We actually use a projection method to keep the nodes on the surface, and to explain it I'll again just use the two-dimensional case. Consider the functional: our goal in this moving mesh method is to minimize it while the nodes stay on the surface. If we assume that y is a function of x, we can rewrite the minimization problem in terms of just one variable. To solve this, we look at the partial derivative of the functional with respect to the node x_i, where we should note that we already have two of these values. The goal now is just to find the partial derivative of I_h with respect to y_i; the other two can be calculated from the analysis we have already done. In order to find the partial of I_h with respect to y_i, we take the derivative of the surface equation and rewrite the partial of y with respect to x_i in terms of the partial derivatives of the surface function. So phi here is our surface function; as long as we have it explicitly, we can now find the value we were missing in order to calculate this.
So that means that in order to solve the single-variable minimization problem with our MMPDE, we can rewrite x_i-dot as shown, replacing the term we were missing the value for with phi_{x_i} over phi_{y_i}, and then do a similar thing with y_i-dot. We use these two projected equations to keep the nodes on the surface as they move. Okay, so as I said before, the same process can be done assuming that x is a function of y, and it can be done in the three-dimensional case; it's all very similar. Okay, so some very good properties of this moving mesh method. First, the functional has been proven to be coercive; it is very similar to the functional in the bulk mesh case, so it's not too surprising that it is coercive as well. The mesh is nonsingular, as we talked about, since we have coercivity and the decreasing energy function. The method can work on any surface: as long as you have your surface written explicitly in terms of x and y, or x, y, and z, we can actually perform this moving mesh method. And finally, you can use any solver, such as ode45 or ode15s, to solve the moving mesh PDE. Okay, so now I'm going to present some numerical examples. The first three use the Euclidean metric, so just uniform with respect to the eye, and in all of these examples we want the nodes to be equidistant apart, or the areas of the triangles to be the same for every single one. For examples four through six we actually implemented a metric tensor based on the mean curvature, so, as we've seen in the talks earlier today, we want a higher concentration of nodes in the regions with large curvature. The first example is a sine curve, and these are just some preliminary results. I do have some mesh quality measures implemented, looking at distances and an equidistribution quality measure, but they're not perfect yet, so I don't want to show them to you here. As you can see for the sine curve, going from the initial mesh to the final mesh after applying the moving mesh method, the nodes in the initial mesh are definitely not equidistant, whereas in the final mesh they are. If we zoom in you can see it a little bit better. Like I said, this is with respect to the Euclidean norm, so we're not trying to put more nodes where the curvature is high; these distances are all very, very close to each other. The second example is the torus. This is similar to the example we saw at the beginning: the initial mesh is definitely not uniform compared to the final mesh, where we have very similar areas throughout and very similar shapes and alignment. Another view of the torus can be seen here. The third example, which is an important example especially when I talk about my future research, is the cylinder. Notice that the initial mesh, like all of the initial meshes, is definitely not uniform; however, the final mesh doesn't look as uniform as in the other examples. The reason is that I made the nodes on the top and the bottom stay where they are; they did not move. It's a little bit more difficult to actually move those nodes on the boundary. That's why we see this kind of skewed area right here and right here: those nodes cannot move.
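The projection step lends itself to a small sketch. This is my own illustration of the idea for a surface given implicitly by phi(x) = 0, not the speaker's formulation (the slides work with the explicit form y = y(x), and the MMPDE velocity formula itself is omitted here): remove the normal component of the raw node velocity so the node slides along the surface, and after each step pull the node back onto phi = 0 with a Newton-type correction.

```python
import numpy as np

def grad_phi(x, phi, h=1e-6):
    """Central-difference gradient of the surface function phi at point x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (phi(x + e) - phi(x - e)) / (2.0 * h)
    return g

def project_velocity(x, v, phi):
    """Remove the normal component of a node velocity so the node slides along phi=0."""
    n = grad_phi(x, phi)
    return v - (np.dot(n, v) / np.dot(n, n)) * n

def pull_back(x, phi, iters=3):
    """Newton-type correction moving x back onto the surface phi(x)=0."""
    for _ in range(iters):
        n = grad_phi(x, phi)
        x = x - phi(x) * n / np.dot(n, n)
    return x

# Toy example: unit sphere, with a made-up raw velocity at one node.
phi = lambda x: x[0]**2 + x[1]**2 + x[2]**2 - 1.0
x = np.array([0.6, 0.0, 0.8])
v_raw = np.array([0.1, 0.05, -0.02])     # would come from the MMPDE right-hand side
x_new = pull_back(x + 0.1 * project_velocity(x, v_raw, phi), phi)
print(x_new, phi(x_new))                  # phi(x_new) is ~0: the node stays on the surface
```

For an explicitly given surface such as y = sin(x), the same idea reduces to the phi_{x_i}/phi_{y_i} substitution the speaker describes.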
However, in the future I would like to implement a boundary moving method so that these nodes can move around on the boundary and the mesh can become more uniform than it is right now. Another angle of the cylinder is shown here. Okay, so now implementing the metric tensor for the sine curve: we can clearly see the final mesh looks different from the one we looked at last time, with the nodes more concentrated around the places with higher curvature, and the nodes definitely moved in a way that makes the mesh much more uniform in this metric. Looking at it a little more closely, we see that although the nodes are not equidistant in the final mesh, they approximate the curve better because they sit in the places with higher curvature. And with more nodes it runs just as fast, so there's not really too much of a difference in this 2D case. We can also look at an ellipse. The same thing happens. I used a few more nodes, so it's a little bit harder to see, but the nodes are being concentrated more toward the places with higher curvature, and they have definitely become more equidistant with respect to the curvature-based metric tensor. I feel I should have used a few fewer nodes, but they definitely concentrate toward that part, just as they did for the sine curve. Okay, and finally a 3D example. This one is a little hard to see, but we're looking at the sine surface in three dimensions, and I have another picture. What I want you to notice is that the initial mesh is very uniform with respect to the Euclidean metric, and after we run the program we have these lines right here of more concentrated mesh elements, the purple and the yellow, and those are the spots with large curvature in the three-dimensional sine surface. So the nodes are accumulating right here with that high concentration, and at the top where the yellow is. Okay, this is just the top view, so we can see where the nodes are actually accumulating. In conclusion, we have come up with a moving mesh method for surfaces, for any general surface as long as we have it explicitly, using the equidistribution and alignment conditions and a functional similar to the bulk mesh case. Some advantages we've gone through: there is no mesh crossing, the mesh does not become singular, the method works on any surface, and the functional is coercive. Like I said, my current research is to implement a few more examples using the metric tensor in the 3D case. I would also like to finish the implementation of the mesh quality measures, so that instead of just pictures I can show you values and graphs confirming that the mesh does not become singular. I want to work with more approximated surfaces, so instead of having an explicit function for the surface, approximating the surface and applying the same method; also, moving the boundary nodes, and then maybe at some point even a surface moving with respect to time. Okay, so if you have any questions I will take them now. Thank you for listening. I'm sorry, what? Like an estimated surface? Yes, so somehow approximating the surface using, I think, a spline method or something, in order to get a general idea of the surface. Yeah, I mean, I haven't looked into that yet; we're just trying to get this to work. I have a few ideas, but not really ones that I can implement yet.
So yes, in this method we definitely need an explicit function for the surface, which is kind of a downfall, but in the future, after we get this implemented, I would like to read into it a little more and look at the case when you don't have an explicit function for the surface, because this does rely on a lot of the partial derivatives of the surface. That's definitely a downside, but if I can somehow approximate the surface in some way, the method might change a little, and the same ideas should still apply. I have only worked with a few examples; I haven't looked at the cube. I don't have any assumptions on it; I've just been trying different examples. The hardest part so far has been finding examples where I can actually get a generated surface mesh to start from and then move it, so that's been the difficulty. I have thought about working on a cube, but like I said, you have different functions for each side; I feel like you could do it as long as you took those cases into consideration. Can you say a bit more about this metric tensor? Say you have a 2D surface in 3D; it sounds like your metric tensor is a 3-by-3 matrix. It's a scalar value: I take the curvature at each node, so it is not the most general case. What I'm doing is calculating the curvature at each node of each element, averaging them out, and multiplying that by the identity. So in that example you were just using a scalar monitor function, but in general you could have more? Yes, and I'm working on implementing a slightly more complicated metric tensor that accumulates more nodes toward the places with higher curvature in a different way, by adding or subtracting another term to the metric tensor I have right now, but right now, yes, it's just a scalar multiple of the identity. You talked about how your method uses projections, right? But since you're not moving the nodes on the boundary, I wasn't sure why you needed the projection; I must have missed the details somewhere. So for, say, the sine curve: if the MMPDE tells us to move a node this way, it's not going to be on the sine curve anymore, so we project it back using that formulation with the derivatives. So just for the interior nodes, not your boundaries? Yes. Okay. Yes, I'm keeping the boundary nodes stationary right now; I'm not moving them yet. You are embedding the surface into a higher dimension, but you can also describe a surface as a manifold; in that case you have local coordinate charts. As a manifold, though, I've read some papers that deal with that a little bit. I'm definitely not an expert on the subject yet, but I have thought about looking into that sort of area. Okay, thank you very much. Thank you. Thank you.
Given a mesh on a surface, our goal is to improve the quality of the mesh using a moving mesh method. To this end, we will construct a surface moving mesh method based on mesh equidistribution and alignment conditions. We will then discuss several proven advantages of this surface moving mesh approach. Finally, we will study various numerical examples using both the Euclidean metric and a Riemannian metric.
10.5446/58952 (DOI)
I've been here several times, and each time I like it as much as the previous time. So, as the title says, I'm going to talk about the transmission eigenvalue problem, and I anticipate that most of you don't know what it is, or what use it is, or anything about it. So part of my attempt here is to try to get you interested in it and hopefully get new ideas, which the area very much needs. The transmission eigenvalue problem came to the forefront of scattering theory, in particular inverse scattering theory, about ten years ago, I'd say, and it has become more significant recently. Basically, there is an eigenvalue problem, and you can detect the spectral data from scattering data: you send signals in, you get the scattering data back, and from that you extract spectral data, and from that spectral data you would like to find information about the scatterer. Now, the difference from what most people are interested in is that here you usually know the shape; there are a lot of methods for determining the shape of a scatterer without knowing anything else about it. So the real problem here is determining the material properties of the scatterer, which basically means you have a differential equation and you want to determine something about a coefficient. And the main difficulty is that it is a non-self-adjoint problem, and it is very unusual and bizarre in a number of ways, which I think will become evident when I talk a little bit about the spectral theory. My talk is going to be in four parts. In the first one I want to spend a little while just telling you where the problem comes from, so you see where it appears. Then I want to concentrate, for most of the lecture, on the simplest case, where you have a spherically stratified medium in a ball, basically; that means you reduce it to a one-dimensional problem. I want to talk about that problem because it is the easiest one to discuss, and you'll get some idea of what the problems are and, I hope, some flavor of this unusual problem in some of its aspects. Then, in the third part of the talk, I want to talk about the inverse spectral problem associated with it. This is similar in some ways to the inverse spectral problem for the Sturm-Liouville equation, which is of course a classic problem; but that is typically a self-adjoint problem, this is non-self-adjoint, and strange things happen. And then, in the last two slides, I want to give an amusing connection between transmission eigenvalues and the Riemann hypothesis: if you know something about where the transmission eigenvalues are, then the Riemann hypothesis is true. And you'll notice I made no claim that I'm going to prove the Riemann hypothesis, just to make sure we're on the same page here. Okay. So, to start off, this is the basic scattering problem; I mean, a simplified one, but this is basically what I want to talk about. Here is a scattering medium, and you send in some incident field, which I call u^i. Outside the medium, the field satisfies the reduced wave equation; we factored out a term e^{-i omega t}, so we're assuming everything is time-harmonic. Inside the medium, it is characterized by this coefficient n, which is called the index of refraction; it has to do with the material properties of the medium, and that's what you'd like to find information about. So, again, we're assuming we know D, and we'd like to find out: what is this medium? What's going on?
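For readers following along without the slides, the simplified acoustic scattering problem being described is usually written as follows; this is my transcription of the standard formulation rather than of the slide itself.

```latex
\Delta u + k^{2} n(x)\, u = 0 \ \text{in } D, \qquad
\Delta u^{s} + k^{2} u^{s} = 0 \ \text{in } \mathbb{R}^{3}\setminus\overline{D}, \qquad
u = u^{i} + u^{s},
```

with continuity of u and of its normal derivative across the boundary of D, and with the Sommerfeld radiation condition

```latex
\lim_{r\to\infty} r\Big(\frac{\partial u^{s}}{\partial r} - i k u^{s}\Big) = 0, \qquad r = |x|,
```

which is the "radiation condition" that makes the whole problem well posed.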
Is the medium plastic, or is it metal? And so in general n is a function of the position. These are just continuity conditions across the boundary, and this is the radiation condition, which makes the whole problem well posed. k here is the wave number, which is the frequency over the speed of sound, and again, as I said, this is an incident wave. So this is the basic, simplified scattering problem in the acoustic case. There are analogous results for all of this for elastic waves, and of course, as you notice, I'm sponsored by the Air Force; they're interested in Maxwell's equations, so there is concern with that also. And so this is the basic scattering problem I want to focus on. To formulate the problem, and see where the transmission eigenvalues come from, we'll assume we have an incident wave e^{ikx.d}, where d is a unit vector. And remember, we factored out e^{-i omega t}, so this is a plane wave moving in direction d. It impinges upon the object, and then you have a scattered field. For those of you not in scattering theory, the image to keep in your mind, although it's not physically quite accurate, is a beach ball and a hose. You turn the hose on the beach ball: that's the incident wave, and the water scatters out. The radiation condition means the water is going outwards, not into the ball. And a long way away, the scattered field looks like a spherical wave: that's e^{ikr} divided by r, where r is the length of x, which in the future I'll just call r, times an amplitude which depends on the observation direction x-hat (that's x over its length), the direction d of the incident wave, and the wave number. The whole idea of inverse scattering theory is that you measure this, you know what you sent in, and you'd like to find something about n(x). That's the idea; and again, we're assuming that the shape is known. This amplitude of course has a name: it's called the far-field pattern. And for those of you who may be more into something like quantum mechanics, it is, up to a constant, closely related to the kernel of the scattering operator. So in particular, here is the far-field operator, which uses this amplitude of the scattered wave as its kernel; this is called the far-field operator, and the identity plus a constant multiple of F is the scattering operator. I'm just going to talk about the far-field operator. The far-field operator applied to g gives the far field of the scattered field corresponding to an incident field which is a superposition of plane waves weighted by g, just as the far-field patterns here are weighted by g; that's simply due to the fact that the scattering problem is linear in the incident field. Such an incident field is known as a Herglotz wave function, although it won't appear much in what follows. So this far-field operator is something intrinsic and basic to the whole scattering problem; that's the operator we're looking at, and from it we're going to be able to determine a certain eigenvalue problem and certain eigenvalues, which is going to be the theme of this talk. So here is the basic theorem for the transmission eigenvalue problem in the general case. If you look at this far-field operator F, it is a mapping from L^2 of the unit sphere into itself, and it is injective with dense range if and only if there does not exist a nontrivial solution to the transmission eigenvalue problem. So here is the transmission eigenvalue problem: such that v happens to be a Herglotz wave function.
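Spelled out, the objects just mentioned are usually written as follows; these are the standard formulas, supplied by me rather than taken from the slides.

```latex
u^{s}(x) = \frac{e^{ikr}}{r}\Big(u_\infty(\hat x;\, d, k) + O(1/r)\Big), \qquad
(Fg)(\hat x) = \int_{S^{2}} u_\infty(\hat x;\, d, k)\, g(d)\, ds(d),
```

```latex
v_g(x) = \int_{S^{2}} e^{ik\, x\cdot d}\, g(d)\, ds(d)
\quad \text{(a Herglotz wave function with kernel } g\text{)}.
```

So Fg is the far-field pattern of the field scattered by the Herglotz incident field v_g, which is the linearity statement the speaker just made.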
So for right now, don't worry about v being a Herglotz wave function, because when I talk about the spherically stratified problem that's always going to be the case. Just focus on the nature of the problem: you have two elliptic equations, both defined in D, satisfying the same Cauchy data. Now, in general it can be shown that, under appropriate conditions, this simply forces v and w to be zero; in fact there's an old paper on this from many years ago, dealing more generally with the question of when two elliptic equations can have the same Cauchy data, and in general you can't. But in certain circumstances there will be certain values of k for which there are nontrivial solutions, and those are called transmission eigenvalues. And because of this theorem, those transmission eigenvalues correspond to the places where the far-field operator fails to be injective. That means in principle you can determine them: you have the operator in hand, because its kernel is the far-field pattern, which you measure, and from the knowledge of the far-field operator you can find out where it fails to be injective, and there are ways to do that. That means you can determine the eigenvalues, and from that knowledge the idea is to find out information about the index of refraction. Now, there has been quite a lot of work done on this problem, for example by Professor Cakoni and Professor Sylvester, both of whom have worked on it. We know that the eigenvalues are discrete under appropriate assumptions, and that they exist, of course; and people have looked at Weyl laws for the counting function and a number of other things. But some of the most basic questions are unknown. For example, in the appropriate spaces this is not a self-adjoint problem, so it raises the question: are there complex eigenvalues or not? And if there are complex eigenvalues, what on earth do they mean? What are they? What have they got to do with anything? So I want to focus on a special case where n is a function only of the radial position and D is a ball. That's the simplest case you could have for this problem, and I'll tell you about it; I think you'll get some flavor of what this problem is like. Again, the values of k here may or may not be complex; we have to find out. Right now, for the general, non-spherically-stratified case, no one knows whether complex eigenvalues exist or not. It's an open problem, and I think it's one of the main problems in this story: where are they, how do you compute them, and so forth. Okay, let's see, I'll just start. I want to consider the simplest case, as I promised you, where n is a function only of r, r being the absolute value of x, and D is a ball of radius a. In this case you can show that if k is a transmission eigenvalue then v is always a Herglotz wave function, so we don't have to worry about that. And since we have this nice spherically stratified medium, we can write w(r) as y(r)/r and v(r) as y_0(r)/r; remember what v and w are, let's go back here, we can do this. And we can normalize this eigenvalue problem so that y(0) = y_0(0) = 0 and y'(0) = y_0'(0) = 1. So we just normalize it, and away we go.
And so in this case y satisfies this problem; it's basically separation of variables and elementary calculations. So if k is a transmission eigenvalue, y(r) solves this, and the other solution y_0 is of course just a sine function, with constants c1 and c2. This takes care of the Cauchy data on the boundary: remember we had w equal to v on the boundary, and the normal derivatives agree, where nu is the unit outward normal (I forgot to mention that), so their difference is zero on the boundary. Those are just the transmission conditions. And so everything comes down to this: in other words, k is a transmission eigenvalue in this special case if and only if this determinant is zero. That looks like it should be a pretty simple thing to deal with; we have a problem in ordinary differential equations. But what I want to show you in a short minute is that even in the case where n is just a constant, which you would think is about the simplest case you could possibly imagine, with two second-order ordinary differential equations with constant coefficients, where you should be able to say everything in the world about them, you can, but it's not as trivial as you might think. The main tool we're going to use here, since we're looking at this determinant, is the theory of entire functions of a complex variable, because you can easily show that y and y' are entire functions of k. So we have an entire function here, and we should be able to use results about entire functions to study it. Now, my assumption here is, as I told you before, that no one knows very much about transmission eigenvalues; but I'm also not too sure how much everyone knows about the theory of entire functions. Good, I'm glad you shook your heads; I won't be wasting my time. So the appropriate terms from the theory of entire functions I'm going to mention as we need them, and you'll get some flavor of entire functions. For me, entire functions are the most beautiful area of mathematics, and to me, in many ways, this is just an excuse to work on entire functions, although I didn't tell the Air Force that, of course. That's another story. Are there cameras on? So here: d(k), first of all, is an entire function of k, it's real for real k, and it's bounded on the real axis. Then there is a theorem from the theory of entire functions that says that if d(k) is not identically zero, there exists a countably infinite set of transmission eigenvalues. So we know that as long as d(k) is not identically zero, there are lots of eigenvalues. And there's a theorem by Aktosun, Gintides, and Papanicolaou that if d(k) is identically zero, then n(r) is identically one, which means there's no scattering. So in this spherically stratified case, the fact that there exist an infinite number of eigenvalues comes pretty easily from entire function theory. In the general case, when you have n(x), that's not true. In fact, transmission eigenvalues were invented around 1988, and existence was proven in 2008, I think. So for about 20 years a lot of people didn't even know whether these things existed, although they talked about them a lot; and I was among those people, and it always made me uncomfortable talking about something which I didn't know existed, as you can imagine. So it was very comforting when Päivärinta and Sylvester showed that they exist in general. But in the spherically stratified case, it's pretty easy.
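The determinant being referred to can be reconstructed from what has been said; this is my reconstruction, with the sign convention chosen arbitrarily. Writing w = c_1 y(r)/r and v = c_2 y_0(r)/r with y_0(r) = sin(kr)/k, the Cauchy conditions w = v and w' = v' at r = a have a nontrivial solution (c_1, c_2) exactly when

```latex
d(k) \;=\; \det\begin{pmatrix} y(a) & y_0(a) \\[2pt] y'(a) & y_0'(a)\end{pmatrix}
      \;=\; y(a)\cos(ka) \;-\; y'(a)\,\frac{\sin(ka)}{k} \;=\; 0,
```

where y solves y'' + k^2 n(r) y = 0 with y(0) = 0, y'(0) = 1. Since y(a) and y'(a) are entire in k, so is d, and it is real for real k, which is exactly what the entire-function machinery needs.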
And this, again, is one of the reasons I want to talk about the simplest case: so you can get some flavor without going through detailed and rather technical proofs. So let's assume, for now if you want, that there's a scatterer there, and that n(a) = 1 and n'(a) = 0; that means the index of refraction varies smoothly across the boundary, which physically is not the most reasonable thing, because if you have a medium you expect a jump in the material properties. But don't worry, we'll come to that soon. Then you can do an elementary asymptotic analysis that shows that this determinant looks like this. I say elementary in the sense that all you need is a little bit of asymptotic analysis of the ordinary differential equations which appear in the determinant, and you get this right away. And so you can easily see that if n(r) is between 0 and 1, or, for example, n(r) bigger than 1, you have an infinite number of positive transmission eigenvalues, just from what's written there. Everything's real; but I made the assumption that n(x), and n(r), is real. The case when n(r) is complex, which is a very important case, corresponds to absorbing media; that's a whole other story, which we won't get into here. So there exist an infinite number of positive transmission eigenvalues. You can also show that's true if you don't make these assumptions, but to show that you have to use a little bit of the theory of almost-periodic functions, which I wanted to avoid here. So in one slide you can show there exists an infinite number of transmission eigenvalues; this takes quite a few pages in the general case. Okay so far? Now, the first theorem: I want to talk about the case when n(r) is a constant. If you're going to talk about the transmission eigenvalue problem in the spherically stratified case, the simplest thing you can imagine is n(r) equal to a constant n_0. And I'm going to show you that it's not as obvious as you may think. The theorem I'm going to use is Laguerre's theorem, which is one of my favorite theorems in the theory of entire functions; it follows from the Hadamard factorization theorem. It says that if you have an entire function of order less than 2, meaning it grows like e^{a r^rho} with rho less than 2, which is real for real z and has only real zeros, then the zeros of f' are also real and separate, that is interlace, those of f. So it's a Rolle's theorem for entire functions, basically. And just to give you an example of the flavor: it's not true if you have order 2. Here's a function of order 2; you take its derivative and you find out that the zeros of f' are complex, even though the zeros of f are real. Obviously it's not working. On the other hand, if you have this one, the zeros are at plus and minus 2, and when you take the derivative the zeros are real but do not interlace. So the fact that the order has to be less than 2 is crucial. This doesn't have much to do with the talk; I couldn't resist mentioning Laguerre's theorem simply because I like it, and I thought I'd show you that it's pretty sharp. So here's the result, in as simple a case as you could possibly state, and it's here to convince you that things are not as simple as you might think. Suppose n(r) = n_0^2, a positive constant not equal to 1; if it were equal to 1, we'd be in trouble. Then if n_0 is an integer, or the reciprocal of an integer, all the transmission eigenvalues are real.
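To see how Laguerre's theorem can enter, it helps to write the determinant out for a constant index; again this is my reconstruction from the stated normalization, not the slide itself.

```latex
\text{For } n(r)\equiv n_0^{2}:\qquad y(r) = \frac{\sin(n_0 k r)}{n_0 k},
\qquad
d(k) \;=\; \frac{\sin(n_0 k a)}{n_0 k}\,\cos(ka) \;-\; \cos(n_0 k a)\,\frac{\sin(ka)}{k}.
```

This d is an even entire function of exponential type, real for real k, and the question of real versus complex transmission eigenvalues is precisely the question of where its zeros lie.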
On the other hand, if it is not an integer or the reciprocal of an integer, then there are infinitely many real and infinitely many complex eigenvalues. Now, that's pretty unusual behavior for a pair of second-order ordinary differential equations with constant coefficients. I was very impressed by this theorem, because although it says Colton-Leung, Yuk Leung being a colleague of mine at Delaware, it's a joint paper, but this is his theorem. So he did it; in fact he did quite a lot of what's here, and I want to be sure he gets his appropriate credit as we go along. So, just to give you an idea of what's happening, let's take the case where n_0 is an integer, just to give you some idea of how Laguerre's theorem comes in. Then it's easy enough: y(r) is as shown, you can solve the ODE right away, and when n_0 is an integer, the nonzero roots of d(k) = 0 are the critical points of this entire function of z, and because n_0 is an integer it clearly has only real zeros; that's the crucial use of n_0 being an integer. So it's all set up for Laguerre's theorem: you take its derivative, you get d(k) up to a factor, and right away you see that the zeros of d must be real. So it takes a big hammer, in a certain sense, to handle the simplest case of constant coefficients. Let me move on. Now, just to give you some examples: if n_0 = 1/2, the reciprocal of an integer, you can plot d(k) and see that the zeros are real; and for n_0 = 2/3 you have complex ones. So this is just to convince you that the theorem is actually true in this special case; it's not something you can see simply. Now, what happens when n(r) is not constant? (I'm not sure how my hand got in the picture, by the way; that's supposed to be a delta there.) Suppose delta is not equal to a and the square root of n(rho) lies between these bounds; then there are infinitely many real and infinitely many complex transmission eigenvalues. So here n on the boundary is not going to be 1, and if n were a constant covered by the previous theorem, that would violate these bounds. So this shows that at least in these cases we can say something when n(r) is not constant. What happens in between these cases? I don't know; if I were giving open problems, that would be open problem number one. You can also show that, under these assumptions, whenever complex eigenvalues exist they all lie in a strip parallel to the real axis. Now, this will be relevant a little later on, not this particular theorem but the idea, when I mention the Riemann hypothesis, because that will have to do with where the transmission eigenvalues lie. So the best we can say at this point is that they lie in a strip, and this corresponds to the case when n(r) has a jump, with n(a) not equal to 1 on the boundary. Now, just recently, this year in fact, in the non-spherically-stratified case, Vodev has shown that if complex eigenvalues exist, they all lie in a strip; so that's now true in the general case. And so what's been happening is that a lot of the results we've had for spherically stratified media have given people motivation to ask whether they are true more generally and to try to prove them in the general case, which is what you'd like, and this is one consequence of that effort. So now let's go back to the spherically stratified case and assume that we actually have the case when things vary smoothly across the boundary.
So in the case when you have a jump across the boundary, we have those theorems; things are going to look different here. What this is saying is that the spectral theory of transmission eigenvalues depends on how the index of refraction behaves on the boundary. That's the point you should take home. So, just to give you a little more about entire functions: you can define the order by this lim sup condition, which means the function grows like e^{a r^rho}; an entire function of order 2 has rho equal to 2, and a function of order rho = 1 and finite type tau is called a function of exponential type tau, which means it grows like e^{tau |z|}. There is a whole theory of entire functions of exponential type, which is, again in my opinion, the most beautiful area of mathematics, arguably I guess, but in my view that's the case. And you'll see the one main theorem we'll use to help with this; it's coming up right now. So here, let n_+(r) be the number of zeros of the entire function in the right half-plane of modulus at most r. Then for an entire function f of exponential type such that this integral condition holds (in particular, if f happens to be bounded on the real axis, that is certainly satisfied), and supposing that the growth along the imaginary axis is of type tau, the limit of n_+(r)/r is tau over pi. That works for any entire function satisfying these two conditions: the density of zeros in the right half-plane is the type divided by pi. That's a hard theorem; it took a lot of work by Cartwright and Levinson to get, so it's not some simple consequence of Hadamard or anything else. This number tau over pi is called the density of the zeros in the right half-plane. Now, I want to get a result here showing that if n(r) varies smoothly across the boundary, you have an infinite number of complex eigenvalues and real eigenvalues. There are no special situations, no special values of n like we had before where n_0 was an integer; that was when n(r) had a jump across the boundary and all the eigenvalues could be real. Here you always have real eigenvalues and complex eigenvalues. To do that, I need a transformation operator, and this is the classical Gelfand-Levitan operator; a lot of you know it, so I'm just going to go over it rather quickly, because it's just an integral operator mapping the sine functions onto the solutions of y'' + k^2 n(r) y = 0. So first you make the Liouville transformation, and you get this function z, where xi is just this, and p is just the density function, which you don't have to worry about. Then z satisfies this equation, and therefore z is represented in this form here, where this kernel K satisfies a Goursat problem. The important thing to notice here is that K does not depend on k. Big K does not depend on little k; there's a statement for you. I'm glad no student of mine said that. And so once we have this representation for z, then of course we have a representation for y. So now we have this determinant, d(k), again. To refresh your memory, delta was just this integral and a is the radius. And d(k) has an infinite number of positive real zeros, and since the type here is delta minus a, their density is (delta minus a)/pi; that is how many zeros there are in the right half-plane, as the limit of n_+(r)/r.
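Written out, the transformation being described is the standard one; this is my own rendering of the classical formulas, not a transcript of the slide, and the representation holds up to a constant factor depending on n(0).

```latex
\xi(r) = \int_0^r \sqrt{n(\rho)}\,d\rho, \qquad \delta = \xi(a), \qquad
z(\xi) = \big[n(r)\big]^{1/4}\, y(r),
```

which turns y'' + k^2 n(r) y = 0 into z'' + (k^2 - p(xi)) z = 0 for a "density" p built from derivatives of n; and then

```latex
z(\xi) \;=\; \frac{\sin k\xi}{k} \;+\; \int_0^{\xi} K(\xi, s)\,\frac{\sin k s}{k}\, ds ,
```

where the kernel K solves a Goursat problem and, crucially, does not depend on k. It is this integral representation that gets integrated by parts in the next step.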
On the other hand, you take that integral representation and just integrate by parts — that's a cheap way of getting asymptotic expansions once you have the integral representation. So you integrate a couple of times, you look at this, and you see n''(a) is not 0, and you have this sin of k(δ + a) here. So what happens? This factor here generates an infinite number of positive real zeros with this density, as we just said, and since n''(a) is not 0, this factor here gives a density of zeros in the right half plane of (δ + a)/π. It takes a little work to show that: this is just an asymptotic expansion on the real axis, but with a bit more work you can show it holds in the right half plane, for complex k too. And therefore the exponential type is δ + a, and then the density follows by the Cartwright–Levinson theorem. I won't go into the details; this just shows you how the representation gives you a quick asymptotic expansion and how crucial the coefficient is. And so the theorem is: if n(a) = 1, n'(a) = 0, δ is not equal to a, and the second derivative n''(a) is not zero, there are infinitely many real and infinitely many complex zeros. So that's as opposed to the case with a jump, where sometimes — for example, if n is a suitable constant — you have only real zeros. That's not the case here. And you can show that they do not lie in a strip about the real axis; in fact Vodev again has shown they lie inside a parabolic region. So here again, depending on how n(r) behaves on the boundary, you get a very different distribution of the eigenvalues: in one case, with a jump, a strip; in the other case a parabolic region, not a strip. Now I'm ready for the last part of the talk; let me just summarize. The first part said: here are the transmission eigenvalues, and here is why the problem is important — you measure the spectral data from the far field operator, and that should hopefully give you information about n(r). So far I've said nothing about actually getting information about n(r); I'm going to do that now — the second part of the question is what that information is worth. The second part of the talk was designed to show you that this eigenvalue problem behaves in funny ways: sometimes there are only real zeros, sometimes complex zeros, and where they are distributed depends on how the coefficient behaves on the boundary. This reflects the fact that it's not only not self-adjoint, it's a pretty bizarre problem. If you got that message, that's the message you're supposed to get. OK. So let's talk about whether they determine n(r). Now, we know that in general there are complex eigenvalues. So the question is: do all the eigenvalues, real and complex, determine n(r)? There are many results, but the most recent, due to Aktosun, Gintides and Papanicolaou, says that in this case — if n(a) = 1 and n'(a) = 0, so no jump — and if n(r) is less than 1, then the transmission eigenvalues uniquely determine n(r). Remember, the transmission eigenvalues are determined from the scattering data. Now this condition, n less than 1, seems pretty important. There have been a number of papers written claiming the result is true for n(r) greater than 1; all the papers that have been written have mistakes in them, but no one has found any counterexamples. So at the moment the bound stays. However, these are conditions that, presumably, you'd like to get rid of.
And so I'm going to tell you how to get rid of both of these by a trickier idea due to a student of mine who's working with me right now, one that still uses only the spectral data, the far field pattern. That's the data we have; but we're going to modify the far field operator. So the aim here is to avoid the restriction on n(r). So here is an auxiliary scattering problem — don't worry about the actual details; it's a scattering problem, and D, you remember, is the ball. You can do this by separation of variables: this is a problem you could sit down and give to undergraduates — just separate variables and do it. Make sure you got that message. Here it is again: this is a scattering problem, it has some kind of behavior, it has a far field pattern. That's it — you don't need a computer or anything; you just separate variables and write it out. And so instead of considering our old far field pattern, you see there's a modified one, where u-infinity is, again — I've just copied down the first slide for you — the far field of the original scattering problem. So this is the data you measure, the data you're physically interested in. Now you look at this new far field operator, script F, and you worry about when that is injective, instead of the old one. And it turns out, as you might expect, that the far field operator script F is injective with dense range if and only if there is no nontrivial solution of this modified transmission problem, where now you see this η slipped in there — that other problem, the auxiliary problem, had an η in it, right there. And the result is that script F is injective with dense range if and only if this holds. So it's the same as the transmission eigenvalue problem, but now with an η there. The whole purpose of introducing this modified transmission problem was to get this η stuck in there — it was a one before, in the original problem. And therefore the transmission eigenvalues come out like before: you look at this determinant, but on the slide you now have an η there. So the way you get at them, again, is by looking at this modified operator instead of the original one; you're still using the same data, u-infinity — that's what's given to you — but you modify it by subtracting off this artificial far field pattern. And the first theorem is that, with n(r) now compared against η² rather than one, there are infinitely many real and infinitely many complex modified transmission eigenvalues — instead of having the one here, we now have an η. And then, if you choose η bigger than n(r) — so you need to know a priori some upper bound for the coefficient — the modified transmission eigenvalues, counted with multiplicity, uniquely determine n(r). So note again: there's no restriction on what the data is on the boundary, and there's no condition that n(a) be 1, because we've introduced this auxiliary operator. So that says that if you know these eigenvalues, you're in good shape. Now for the bad shape — I have to tell you some bad news, because you can't have only good news. The bad news is that it's very difficult to measure complex eigenvalues from scattering experiments. The k is the wave number, ω/c; you measure at real k, and there's some noise on the data. So how do you get these complex eigenvalues? Well, there are various ways to analytically continue functions from inexact data, but it's not anything anyone would want to do willingly.
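Schematically, and in my own notation (the symbols and the exact placement of η are assumptions on my part, not copied from the slides), the modification just described looks like this.

```latex
% Modified far-field operator: subtract the explicitly computable far field of the
% auxiliary problem with constant index \eta^2 from the measured far-field operator F:
\mathcal{F}g \;:=\; F g \;-\; F_{\eta}\, g .
% Roughly: \mathcal{F} is injective with dense range at wave number k iff k is not a
% "modified" transmission eigenvalue, i.e. iff the interior problem
\Delta w + k^{2} n(r)\, w = 0, \qquad \Delta v + k^{2} \eta^{2}\, v = 0 \quad \text{in } D,
\qquad w = v,\quad \partial_{\nu} w = \partial_{\nu} v \quad \text{on } \partial D
% has only the trivial solution.  Choosing \eta larger than an a priori bound on n(r)
% then gives the uniqueness statement quoted above.
```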
So that's a problem that has to be looked at. OK, another case. So in other words, the real and complex eigenvalues, if you know them all, determine n(r) — that's basically the message so far. There's nothing like this available for the case that is not spherically stratified. There are various monotonicity results that Fioralba and collaborators have developed, and so forth, that give you estimates for the index of refraction, but there's all kinds of work still to be done in higher dimensions. OK, so now, in my last two slides — my last two minutes, probably — the following is advice I've always given my graduate students: no talk is too simple and no talk is too short. So whatever you do when you go give a talk for a job, do not go over time; no one is as interested in the subject as you are. So this is due to Fioralba Cakoni and her colleague Sagun Chanillo. I guess I should say: if you have any serious questions about this, Fioralba is right there, and she'll answer all of them. So you can also consider the idea of transmission eigenvalues in the scattering theory for automorphic solutions of the wave equation in the hyperbolic plane, with the isometries corresponding to the modular group. That's a well-known area of complex analysis, and this kind of scattering involves an incident field interacting with the boundary of the fundamental domain. In fact, there's a whole book by Lax and Phillips that you may know about — scattering theory for automorphic functions, for example — and they're mainly concerned with the poles of the scattering matrix. If you want to see more details and so forth, this is a paper which has been submitted; it hasn't come out yet, but I'm sure you can get access to it. And the theorem is the following — this is the last slide. In this context, the Riemann hypothesis is equivalent to the statement that all transmission eigenvalues lie on this parabola, this red parabola here, except for the trivial eigenvalues 0 and 1/4. So what's happening here: in general, the fact that they lie inside here corresponds to the fact that the zeros of the Riemann zeta function lie in the critical strip — that's analogous to the zeros here lying inside a parabolic region. Now, if you knew somehow that they all lie on this red parabola, you would have the Riemann hypothesis. The way the equivalence is established, you explicitly exhibit the solutions — you say, ah, that's what it has to be. So what you need is some other method to determine where the transmission eigenvalues lie in this special case; if that other method shows you they all lie on the parabola, you've proven the Riemann hypothesis. Now, I'd like to say that Fioralba and I just finished a paper where we do that — unfortunately that's not true, and I don't have the foggiest idea of how to make it true. But recall what I said earlier in the talk: I did show, for example, that in the case I considered, spherically stratified media, all the eigenvalues lie in a strip. So at least it is possible in some cases to show where the eigenvalues lie; it's not a hopeless type of situation. But in this context, to show that they lie on this particular parabola — that's more than I can say. So thank you very much for your attention, and that's what I have to say. Thank you. Thank you. So you mentioned that small changes in n near the boundary of the object can cause very large changes in the far field behavior.
The inverse problem — how stable is that? Say the eigenvalues are real, but you still have some noise in your measurement. Good question. It's a disaster: it's a severely ill-posed problem. The mapping from the index of refraction to the far field pattern takes something like C² to C-infinity — it's very smooth — so when you try to invert it, you have a severely ill-posed problem. So the main point is that you're probably not going to be able to measure all these eigenvalues anyway; you'll be lucky to measure three or four of them, maybe, and you hope that says something. So the question is not so much whether all the eigenvalues uniquely determine n(r), but whether the first three or four eigenvalues have something to say. For example, for those of you who know something about scattering resonances, the first coefficient says something about the volume, the second one says something else — Weyl's law, that type of idea. You'd like a Weyl-type theorem here, where each coefficient has some physical meaning. That is not there yet. If you have real and non-real eigenvalues, say in an engineering situation, do you expect the complex eigenvalues to dominate? Say you take a ball in the complex plane and you count. I don't know. Because in general — say for the scattering poles of a sphere — you know roughly that in a disc of radius r you have approximately a constant times r squared, and you can count how many sit near the real axis, and you get something like the dimension showing up in limits like that. Can you identify it here? But in general, I don't know what it is. I think in Vodev's paper there is a counting function; he has the first term in the asymptotics, and the coefficient involves n(x). So if you know asymptotically how the counting function inside a ball behaves, then you know the first coefficient in the asymptotics, and it involves n. It's in that paper. OK. In general, as I recall, the papers with counting functions count all the eigenvalues together — inside a ball. Inside the ball, right. I guess it's a remark on your last slide: the scattering poles for the modular surface are known to be, let's say, one half the zeros of zeta — for the poles you look at the poles of the scattering matrix. Yes, for the poles. And for some other cases too, which is what I didn't have time to show you — I took the modular group and I took a couple of other examples. Yes. So you have zeros of zeta appearing there as well, and of course whatever question you want to ask about the zeros of zeta, you can formulate it there. Yeah — it's not much more. So does something similar show up explicitly here? Yes, it's the same kind of thing, but just for resonances. Is there a direct relation to resonances? Can you have a situation where the transmission eigenvalues and the resonances coincide? No — but since you mention resonances and transmission eigenvalues:
For example, if you look at a very simple case, for example, some say you imagine, say in R3, like I would have considered here, and you look at the scattering matrix correspond to that problem, the transmission eigenvalues correspond to the zeros of the scattering matrix, you use it properly by and on a project. The resonances correspond to the poles. So there's a connection between resonances and states in that sense. Then from the period of practical point, if you want just to do a computation for particular given time, you should be at the same level of difficulty as the bit, rather, basically need to find zeros of complex eigenvalues. And as usual, there are differences when zeros correspond to poles and so on. But the difference here is important. The scattering ranges, you know, they're all out of the lower half plane. You only know the scattering data for real take. So determining the scattering resonances, you have to, and they continue with inexact data to the lower half plane to find out where they are. Now for the transmission eigenvalues, remember, there are points where the partial output is no longer objective. So you have to find out where is that output going on objective. And in the case I've been considering, when any of our is real, these transmission eigenvalues all land on the, there are some transmission eigenvalues in the real axis, so you can hopefully be able to measure them. Now the problem in general is you don't know the complex eigenvalues exist and how to measure them. But the idea that I said before in the entire dimension, the hope would be that you could get specific geometric properties associated with each coefficient with each transmission eigenvalue in some way. For example, in scattering resonances and the asymptotic transmission of scattering resonances, the wild type law, each coefficient there corresponds to some geometric property. We don't have something like that for transmission eigenvalues. Now, it'd be nice to have it, but I don't know, but it doesn't exist at this moment. No, I'm just talking about computation. So given, and could you make a arrival over, it can be defined to a sufficient estimate. Is there a, I'm not sure. Yeah, in general, or for the, I mean, I have no doubt that, but there is a way, it's not stable, that there is a way to measure numerically, to compute numerically the transmission eigenvalues from the fact that the operator. And also there is also another connection with scattering metrics. So there are some strange behaviors of the eigenvalues of the scattering metrics if you are at the transmission eigenvalues. So you can catch that it's missionizing the numbers to measure that, yeah. But for general, for even a non-injective that work pretty reasonably well, that has to be stabilized. It's an ill-post problem, it has to be stabilized. The computation is really the only valid, I think for the first few or four probably, you know, trying to calculate more and that's probably not a good idea. So that's the situation I understand. All right, so, both groups, maybe some other people can example, so, Ioscali, the surfaces are kind of potentials which are different. So I wonder if you could back up such examples, you could get examples of the spectral transmission eigenvalue problem. So you would need to read the address next. Go, go. But it's, it's getting, so, examples like in scattering theory are also, you've got to go with the shape of a drop, so. Right, and it's a similar thing here. 
So, I think there's a very, one of the reasons I had a great deal of pleasure coming here is I think there's a lot of questions on transmission eigenvalues that I don't have a clue on how to think properly about, whereas people here who are more familiar with other types of spectral problems, particularly geometric spectral problems, are going to take, why don't you try filling some, or why don't you look at these situations. So, I agree. I know your answer. I'm sorry. I'm sorry. So, you can talk about either eigenvalues or some, who's that, actually. Okay, so it's a philosophical question. So, if we have problems with, say, real eigenvalues, like you said, Laplace-O'Cray, or if we talk about resonances, then there are often, is such thing as trace work. Right, we are on one hand, to some, say, Fourier transform of some function, of all eigenvalues, all resonances, and then on the right hand side, we get some of values of distribution, and something like, of those geodesics or whatever. So, are there some types of transformable, like this for transmission eigenvalues? No, I know. No, I don't. No, no, no. No, no, no. You have to remember, the serious study of transmission eigenvalues really only started, I think, roughly, with the Ralph's paper in 2010. So, this isn't something that people have been working a lot on. So, that's why I like to appeal to people here, that there's new ideas, though you can transfer some other aspect of structural theory to this, to the possibilities, that's what's needed, I think. I'm stuck in scattering theory, basically. It's a new idea, the outside. Well, sometimes, as we'll say, we'll go for some sort of question, like, what if the right hand is people? For example, Voda, for example, I mean, he made his living for a long time on scattering residence and this kind of thing. But what if it's just what she is? Or doesn't it need to be the right side,
The transmission eigenvalue problem plays a central role in inverse scattering theory. This is a non-selfadjoint problem for a coupled pair of partial differential equations in a bounded domain corresponding to the support of the scattering object. Unfortunately, relatively little is known about the spectrum of this problem. In this talk I will consider the simplest case of the transmission eigenvalue problem for which the domain and eigenfunctions are spherically symmetric. In this case the transmission eigenvalue problem reduces to an eigenvalue problem for ordinary differential equations. Through the use of the theory of entire functions of a complex variable, I will show that there is a remarkable diversity in the behavior of the spectrum of this problem depending on the behavior of the refractive index near the boundary. Included in my talk will be results on the existence of complex eigenvalues, the inverse spectral problem and a remarkable connection (due to Fioralba Cakoni and Sagun Chanillo) between the location of transmission eigenvalues for automorphic solutions of the wave equation in the hyperbolic plane and the Riemann hypothesis.
10.5446/59248 (DOI)
Thank you very much to the organizers for giving me the opportunity to speak at this interesting workshop. So my talk is mainly going to be based on an article that's not quite available on the arXiv yet, but I've put a copy on my webpage in case you want to look at it. And it's very much inspired and motivated by earlier joint work with Eckhard Meinrenken and Yanli Song. Oh great, yeah. Okay — so my talk is mainly based on the first thing listed here, but, as I said, it's very much motivated by joint work with Yanli Song and Eckhard Meinrenken. Right. Ah, okay. Right, so my talk is related to the Freed–Hopkins–Teleman theorem, so I'm going to start with a very brief introduction to that theorem. Part of the Freed–Hopkins–Teleman theorem is a map going from the representation theory of the loop group LG of some compact Lie group to twisted K-theory, or twisted K-homology, of G. And so one thing I'm going to do in the talk is describe a map going in the opposite direction, from twisted K-homology to representation theory. And I'm going to be especially interested in what that map does to what are called D-cycles — this is the terminology of Baum–Carey–Wang. D-cycles are a nice geometric package that gives you cycles in twisted K-homology. And then, time permitting, I'll talk about the original motivation I had for thinking about this stuff, which has to do with Hamiltonian LG-spaces. Okay, so let me give you a lightning overview of the loop group. Throughout my talk, G is going to be a compact, simply connected, simple Lie group, and I'm going to fix a maximal torus T inside G. LG is going to be the loop group, the maps from the circle into the Lie group. And there are some subgroups of the loop group that are going to come up in my talk: the based loop group — loops which begin and end at the identity element — and then, sort of complementary to that, a copy of G sitting inside the loop group as the constant loops. Another subgroup that will come up is the integral lattice: you look at the kernel of the exponential map for this maximal torus; this is a lattice, which we call Π, and it naturally sits inside the loop group via the exponential loops in the formula there. And the loop group has a famous U(1) central extension; I've put down the formula for the Lie algebra cocycle of the central extension, and it involves the so-called basic inner product on the Lie algebra — an invariant inner product with a particularly nice normalization. One other thing I should mention about this central extension: it has a symmetry. S¹ acts on the loop group by reparametrization, by rigid rotation of loops, and this symmetry lifts to the central extension. I'm confused by this Π definition — that's the kernel of the natural map from the Lie algebra of T to T itself, right? So how does that kernel sit inside ΩG? So I'm identifying it with a subset of ΩG: if you have an element of the kernel, you take — I guess this should be tX — the loop that you get by exponentiating the one-parameter subgroup generated by that element. So maybe instead of an inclusion, I should put an arrow, an inclusion arrow. Okay, and so let me tell you a little bit about the representations. The loop group has an interesting family of projective representations called positive energy representations.
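For reference, the Lie algebra cocycle of the central extension mentioned a moment ago has the standard form below, up to a normalization convention; B here denotes the basic inner product (the slides may use a different letter).

```latex
% Standard 2-cocycle on the loop algebra L\mathfrak{g} defining the central extension (a sketch;
% normalizations differ by factors of 2\pi or i depending on conventions):
\omega(\xi,\eta) \;=\; \frac{1}{2\pi}\int_{0}^{2\pi} B\big(\xi(\theta),\,\eta'(\theta)\big)\,d\theta,
\qquad \xi,\eta \in L\mathfrak{g}.
```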
And these are representations of the central extension with some conditions: the representation should extend to an action of the semi-direct product with this rotation circle, and the weights of the action of this rotation circle should be bounded below by zero — that's the positive energy condition. And then we say that one of these representations is at level k if the central circle — the other circle — acts with weight k, where k is some integer. So these are the positive energy representations, and they have a nice theory, very much parallel to the representation theory of G, the finite-dimensional Lie group. In particular, irreducible positive energy representations are parametrized by their highest weights. And another interesting thing that happens is that if you fix a level — if you look at a fixed level k — it turns out there are only finitely many of these irreducible positive energy representations, and they're parametrized by the so-called level k weights. My notation for this is Π*ₖ: Π* is the weight lattice, the dual of the integral lattice, and Π*ₖ is the intersection of Π* with the k-fold dilation of the fundamental alcove. I should also mention that here, and throughout my talk, I'm going to be using the basic inner product to identify the Lie algebra with its dual. So the alcove sits naturally inside the Lie algebra of the maximal torus, but I'm thinking of it as a subset of the dual space. Right. And then the Verlinde ring is the analog of the representation ring for the loop group. So the level k Verlinde ring: as a group, you can think of it as the free abelian group generated by these level k weights — by these irreducible positive energy representations at level k. And it also has some other descriptions. It has a nice description as a quotient of the ordinary representation ring of G by the so-called Verlinde ideal, and this shows you its ring structure. And it has another description that's going to be important for my talk — I apologize, the notation is a bit unwieldy. It has a description as certain formal characters of the torus, of the maximal torus: formal infinite sums of irreducible characters of the torus with a certain anti-symmetry property under an action of the affine Weyl group. To understand this, it's maybe helpful to think by analogy with the case of a compact Lie group: if you have an irreducible representation of a compact Lie group and you restrict it to the torus, you get a formula for it, the Weyl character formula, which is given by a numerator that is alternating under a certain action of the Weyl group, divided by a universal denominator that doesn't depend on the representation. There's a similar story for the loop group: these representations also have characters given by something that looks like the Weyl character formula, and again it takes the form of a numerator that is alternating under a certain action of the affine Weyl group — this is the loop group analog of the Weyl group — divided by a universal denominator that doesn't depend on the representation. So that's this isomorphism here. Okay, then I have a picture to give you a bit of a feel for what this affine Weyl group action looks like. By the way, the k + h-check that appears there — this is describing what the action is — the h-check is the dual Coxeter number, and this is quite important in the Freed–Hopkins–Teleman theorem, so maybe I'll spend a minute to talk about it.
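Schematically — with the ρ-shift and precise sign conventions suppressed, so treat this as a hedged sketch rather than the exact definition on the slides — the character model just described looks like this.

```latex
% The affine Weyl group at level k + h^{\vee} and the character model of the Verlinde ring (a sketch):
W_{\mathrm{aff}}^{(k+h^{\vee})} \;\cong\; W \ltimes (k+h^{\vee})\,\Pi
\quad\text{acting on the weight lattice } \Pi^{*} \text{ via the basic inner product,}
% with elements given by multiplicity functions having the anti-symmetry property
m : \Pi^{*} \to \mathbb{Z}, \qquad m\big(w\cdot\lambda\big) \;=\; (-1)^{\ell(w)}\, m(\lambda)
\quad\text{for all } w \in W_{\mathrm{aff}}^{(k+h^{\vee})}.
```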
By the way, I have to thank Eckhard and David Li-Bland for letting me use this image. So this is a picture for SU(3), with k + 3 — the level k in the picture is 3. And the level k weights are the black dots inside this red triangle; there are finitely many of them. And the green hyperplanes: this affine Weyl group action is generated by reflections in all of these green hyperplanes. So an element of this group of formal characters has a multiplicity function, which is a function from the weight lattice to the integers, and it should be alternating under reflections in all of these hyperplanes. Now notice that the fundamental domain for this reflection action is a bit bigger — this is where the dual Coxeter number comes in — so the fundamental domain for this action is a bit larger than the set of level k weights. So maybe you can see what this group is. Okay, so that's what I wanted to say about loop groups. The other side of the Freed–Hopkins–Teleman theorem is twisted K-homology or K-theory. For me, the twist for K-homology is going to be what's called a Dixmier–Douady bundle: a bundle of C*-algebras over the space X you're interested in, with fibers equal to the compact operators on some Hilbert space. So it's a locally trivial bundle — locally it just looks like an open set times the compact operators — but globally there can be some interesting twisting. And these bundles are classified up to isomorphism by something called the Dixmier–Douady invariant, which is a certain cohomology class that has a component in degree three — maybe the most important — and possibly also a component in degree one, for some extra grading information. And in the equivariant situation, where we have a group acting, these are equivariant cohomology classes. So the compact operators are on a Z/2-graded Hilbert space here, right? Yes, yes. And — I'll say something about that in a moment. Right, so I should mention that different bundles will give you different groups; we're talking about the various possible groups you could get. For me, I'm going to use the analytic definition of the twisted K-homology group; it's known to be equivalent to the definitions in topology. So for me, the twisted K-homology of the space X is going to be the analytic K-homology of the algebra of continuous sections of this bundle — vanishing at infinity if you're on a non-compact space. Again, I should emphasize that you may get different groups from different Dixmier–Douady bundles, different C*-algebra bundles. And just to remind you what the cycles in this group look like: they are triples (H, ρ, F), where H is some Z/2-graded Hilbert space, ρ is a representation of your algebra on this space, and F is what you can think of as an abstract zeroth-order elliptic operator. What that really means is that these three operators should be compact: the first condition you can maybe think of as a locality condition, this one says that F is close to being self-adjoint, and this is a compact-resolvent type condition. So that's the abstract definition. The fundamental example comes from the Clifford algebra bundle: if you take any even-dimensional Riemannian manifold, you can look at the Clifford algebra bundle. This is a finite-dimensional Dixmier–Douady bundle, and its Dixmier–Douady invariant is the third integral Stiefel–Whitney class.
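In symbols, the fundamental example just mentioned is the following (a sketch).

```latex
% For an even-dimensional Riemannian manifold M, the complex Clifford algebra bundle
% \mathbb{C}l(TM) \to M is a finite-dimensional Dixmier--Douady bundle, and
\mathrm{DD}\big(\mathbb{C}l(TM)\big) \;=\; W_{3}(TM) \;=\; \beta\big(w_{2}(TM)\big) \;\in\; H^{3}(M;\mathbb{Z}),
% the third integral Stiefel--Whitney class, i.e. the Bockstein of w_2(TM).
```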
So this is the obstruction to the existence of a spin-c structure on M. And there's a class in this group that you can think of as being like the fundamental class of the manifold M, and it's basically the de Rham–Dirac operator acting on differential forms. To make it exactly match this definition, you need to do something like apply a functional calculus to get a bounded operator; but it's basically the de Rham–Dirac operator. You're assuming that, right? You're assuming that's right — right, yeah, I should have put that in. Right. So don't you need to assume a spin-c structure in order to get a fundamental class in K-homology? So this is going to live in twisted K-homology — I guess that's part of the idea. So this won't live in the ordinary K-homology of M, but it will live in the twisted K-homology of M. So this is maybe one useful aspect of twisted K-homology. Okay. So now I'm going to describe a nice geometric package for generating cycles in this group. These are called D-cycles, or Baum–Carey–Wang D-cycles, and they are a twisted analog of Baum–Douglas cycles in geometric K-homology, which we've seen before. So a D-cycle is a four-tuple consisting of a compact oriented even-dimensional Riemannian manifold — I'm just describing the even case; there's a similar version for the odd case. Then you have a vector bundle on M, a continuous map from your manifold into your space X, and the last piece of data, maybe the most interesting: a Morita bimodule linking the Clifford algebra bundle of your Riemannian manifold with the pullback of the twisting bundle, the pullback of the Dixmier–Douady bundle. And so if A is trivial — if A is just the complex numbers — then this S is equivalent to a spin-c structure on M, and in that way you recover the ordinary Baum–Douglas cycles. But more generally, if A is some general Dixmier–Douady bundle, the existence of S implies that you have this relation: the pullback of the Dixmier–Douady invariant of A is the third integral Stiefel–Whitney class. And the way you go from a D-cycle to a cycle in this twisted K-homology group is fairly simple: you take the fundamental class, you cap with this vector bundle, and then the pair consisting of the map φ and the bimodule S defines a push-forward map from the twisted K-homology of M to the twisted K-homology of your space X, and so you can push forward to get a class. Shouldn't it be the third Stiefel–Whitney class of the normal bundle of M in X, or something, for this push-forward to work that way? So we should use the normal bundle? Not sure. I'm not sure. I'm not sure. So now I'll talk a bit about the Freed–Hopkins–Teleman theorem. I should say I'm only talking about a very special case of the theorem; it's a very general theorem for arbitrary compact Lie groups, and I'm just going to be talking about the case where G is simply connected, simple and connected. So we take the space X to be the group itself, and so there's an action of G by conjugation. And it's known in this case that the third equivariant cohomology is just isomorphic to the integers, and the lower cohomology vanishes, which means that the Dixmier–Douady invariant is just an integer. So I'm going to use the notation A with a superscript l to just mean the Dixmier–Douady bundle with invariant l. And so the Freed–Hopkins–Teleman theorem is an isomorphism between the Verlinde ring at some positive level k
And the twisted K-homology — there's a version for K-theory too, related by Poincaré duality, but here I've written it in terms of K-homology. And the Dixmier–Douady invariant is shifted a little bit: it's shifted by this dual Coxeter number. And more than this, Freed–Hopkins–Teleman construct a very interesting map going from representations to twisted K-theory: given a positive energy representation, they explain how to construct a cycle for twisted K-theory. So this is a very interesting part of their construction. But going in the opposite direction is maybe a little bit less clear: given some cycle for twisted K-homology, say a D-cycle, is there some nice description of the corresponding positive energy representation? So I'm going to be talking about this. So I'm going to break things up into two main results. First I'm going to describe — construct — a map from twisted K-homology to the Verlinde ring, and it will land in this particular avatar of the Verlinde ring, these formal alternating characters. And the second thing is a more explicit cycle-level description of what this map does to D-cycles. And it's going to turn out that this map is given by basically the index of a certain elliptic operator on a certain space. What I mean by index here: the operator will be on some non-compact manifold, so the kernel and the cokernel will be infinite dimensional, but the multiplicities of the T-action, the torus action, on these Hilbert spaces will be finite, so it will make sense to talk about the index of this operator as a formal character of T. Okay, so I need to tell you briefly how to construct these bundles over the group using loop groups. PG is going to — I should say this particular construction is not due to me; I think it's probably well known — PG is going to denote, for me, the quasi-periodic paths in G: maps from the real line to G such that this product is some constant independent of s, where s is the parameter in R. Is it for any constant, or is it fixed for that path? For any constant — the constant can be any element of G, but it's independent of s. Yeah, so different gammas will be sent to different constants. Okay. And this gives you a map from PG to G: you take the quasi-periodic path γ and map it to the corresponding constant that appears on the right side of the equation. And it's not too hard to see that this makes PG into a principal LG-bundle over G. So you've got a principal bundle over G, and then you can build a Dixmier–Douady bundle by an associated bundle construction. You take a positive energy representation — LG won't act on V, only some central extension of LG will act — but when you pass to compact operators, the action descends to LG, and so you can build this associated bundle. And it's known that the Dixmier–Douady invariant of this is l, if you start with a level l positive energy representation. Okay, right. So the next thing — it's not quite motivated yet, but I'll motivate it on the next slide. I think of this original Dixmier–Douady bundle as being very, very big: the Hilbert spaces that you start with are very, very large. So what I want to describe is a much smaller model for the Dixmier–Douady bundle over the maximal torus. It's just going to work over the maximal torus, but it's much smaller and more manageable in some ways.
So I'm going to describe that; this is going to be an important part of the map. So remember, the maximal torus and this lattice Π both sit inside LG as subgroups, so of course you can pull back the central extension and you get some central extension of this group. And then I'm going to take a certain representation of the central extension. The way that you build it — I think this is the nicest way to build it — you start with the regular representation of Π-hat, the central extension of this lattice, and then you restrict to the subspace where the central circle acts with a certain weight. The weight should be minus l if you want a Dixmier–Douady bundle with — well, if you want the numbers to work out, it should be weight minus l. So this is a certain subspace of this slightly larger Hilbert space, and it carries a representation of this central extension. And then you can build a Dixmier–Douady bundle in the same way, using the associated bundle — here t is the Lie algebra of the maximal torus. So we're building a Dixmier–Douady bundle in essentially the same way, using the associated bundle. And it's not too hard to argue that when you take the Dixmier–Douady bundle over the whole group and restrict it to the torus, these become Morita isomorphic — essentially because you used the same central extension to build both of them. And you can write down the Morita isomorphism; it's given by a similar associated bundle. So I like to think of this as a small model for the bundle over the torus. Okay, so now let me briefly tell you what this map is. The first couple of steps are basically restricting to the maximal torus. We restrict to — maybe one unfortunate aspect of this construction is that we have to restrict to a neighborhood of the maximal torus; it's not ideal, but anyway. So the first step is we restrict to a tubular neighborhood U of the maximal torus, and then you apply the Thom isomorphism to go to the maximal torus. Then we use this Morita isomorphism that I just talked about to switch to this smaller model of the DD bundle. And then we're going to use some tools from analytic K-homology. So it turns out — and this is one very nice feature of this DD bundle — that its C*-algebra has kind of an alternate description as a crossed product algebra. You've already confused me: Π was the lattice? Yeah — Π-hat is the central extension of the lattice that you get by pulling back. So it's not commutative anymore? No, it's not quite commutative anymore — almost commutative. Right. So a nice feature of this small model for the DD bundle is that when you look at this C*-algebra, it has this alternate description as a crossed product, and this makes available to you some tools from KK-theory. In particular, you end up getting an element in a certain equivariant K-homology group — the K-homology of a vector space, just the Lie algebra of T. So we're now talking about a very simple space, with respect to this group — the central extension that's playing a big role. And once you're in this group, there's another tool from K-homology called the analytic assembly map. This is a kind of "integration over the group" map, sort of a generalized index map, going from this group into the K-theory of the C*-algebra of the group. And you can sort of think of that as being roughly like a representation ring for this group — something like a representation ring for this group.
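For orientation, the last step uses the analytic assembly map, which schematically has the following shape; the precise group decorations here are my gloss, not copied from the slides.

```latex
% Analytic (Baum--Connes type) assembly map, schematically:
\mu \;:\; K^{\Gamma}_{\bullet}\big(\mathfrak{t}\big) \;\longrightarrow\; K_{\bullet}\big(C^{*}(\Gamma)\big),
% with \Gamma the relevant central extension of the lattice \Pi, acting properly (through \Pi,
% by translations) on the vector space \mathfrak{t}; one can think of K_{\bullet}(C^{*}(\Gamma))
% as a stand-in for a representation ring of \Gamma, which is how it is used above.
```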
Yeah, so as I said, we use some, I guess, much loved tools in analytic k-homology as part of the construction. Right, and this is basically the description of the map. So the last thing I should mention is that with a little bit of extra effort, you can keep track of what happens to the val group symmetry when you apply this map. And actually all of the steps are equivalent under the normalizer of t, except for one. So this tom isomorphism used way up here, not quite equivalent, but has an anti-symmetry property under the normalizer. So if you keep track of the val symmetry, you can show that the range of this map is contained in the subgroup of this sort of generalized representation ring. That's isomorphic to formal characters with this anti-symmetry property. So going from here to here, we kind of are forgetting the pi action and just retaining the t action. So that's the description of the map. Right, and then, right, so for d-cycles, it gets somewhat more concrete. So if your cycle is a d-cycle, so it's given by a four-tuple like this, so a manifold, mapping into your space, this marita bimodule. So it gets a lot more concrete. So first of all, what the corresponding analytics cycle looks like. So the Hilbert space looks like L2 sections of a certain Hilbert bundle over your manifold. And the Hilbert bundle has a Clifford module structure. And this operator F is sort of a bounded version of a direct type operator, acting on smooth sections of this bundle. So that's roughly what the cycle looks like. And, okay, so the first nice thing happens when you apply this marita morphism to the small model of the Schmidt-Riedi bundle. What happens, roughly speaking, is you replace these kind of big Hilbert spaces with these much smaller Hilbert spaces, which basically look like L2 of a lattice. And then there's, yeah, so it tends to be something finite dimensional. And then there's this nice correspondence between differential operators that act on the course of Hilbert spaces that look like L2 of a lattice, and operators on covering spaces, on ordinary finite dimensional manifolds. So in this step, when you pass to this smaller model, you end up getting something that looks like an ordinary direct type operator, so on a finite dimensional bundle, but on a covering space. Right, and then the last thing is you apply the analytic assembly map, and this ends up taking the index of the operator. Yeah, in this kind of generalized sense that I talked about. This is not quite immediate because the assembly map is defined in sort of a more abstract way, but if you do a little bit of work, you can show that it boils down to this in this special case. Okay, I'm not sure how much, probably, because I have a little bit of fun with that. So yeah, so with the remaining time, I want to tell you what my initial motivation was for thinking about these things. So initially I was motivated by an application to Hamiltonian loop group spaces. Yeah, so let me tell you very briefly what a Hamiltonian loop group space is. So the dual of the leogic of the loop group, we can think of it as a connection. So the algebra of value one forms on the circle, and we can think of those as connections on the trivial G bundle over the circle. And yeah, so the loop group acts on this space by gauge transformation. That's the action we want to consider. 
And then a Hamiltonian LG space is, so it's going to be an LG manifold, so smooth, barric manifold with an action of LG with a symplectic form and a proper moment map from M into LG star. And this moment map should be equivariated for the action that we talked about up here. So that's a Hamiltonian LG space. And so in this work with Yan Li Song and Akert, we were thinking about quantizing Hamiltonian LG spaces. How to quantize Hamiltonian LG spaces. And one maybe naive thing that you might try is to take some finite dimensional sub-manifold of the Hamiltonian LG space, try to quantize it with sort of more usual means, you do it with derap operators or something, and then get a positive energy representation by some kind of induction procedure. That's maybe sort of naively how you might go about it. So one space you might try is you might try to take inverse image of the Liegeber of the maximum torrents. So because this subspace is finite dimensional, the subset that you end up with, so if it's smooth, it will be some finite dimensional sub-manifold of M. But the problem, well one problem with this is that it could be singular. But we found a way around this, so we, some of this earlier work, we showed that you can always find a small thickening of this singular subset that's still finite dimensional, but it's smooth, and moreover we showed how to build the canonical spinacy structure. So our thinking was to look at the Dirac operator on this space. You can twist by a prequantum line bubble. And then essentially you look at the index of this operator, except not quite. So because we passed through a thickening of this space, you should, instead of looking at just the index, you should take an index pairing with a k-theory class that plays the role of a pointer-a dual to this x. Yeah, but there's some nice index pairing to study, and you prove that it gives you an, yeah, so when you take this index pairing, it gives you an element of this group. So again the kernel, core kernel, are infinite dimensional, but the multiplicities of the torus are finite, and in fact you get a character with this anti-symmetry property under the affine value. So you could define the quantization of this Hamiltonian loop group space as the unique element of the Rindring corresponding to this formal character. And one attractive feature of this approach is that you can do the wind deformation with it. This was really original motivation for doing this. And there's another approach that's a few years older to mostly to Eckert and also some collaborators. So you could, so this is another approach based more on, so it's based more on twisted k-homology. So you could, so the base loop group, this is sort of a, yeah, so I mentioned this is a subgroup of algae, so it turns out to act freely on n. So it acts freely on algae star and by equivalence it also acts freely on n. And you can take the quotient of m by this subgroup and you get a finite dimensional manifold, finite dimensional compact manifold called the quasi-Hematonian g-space. And this actually gives, there's actually a one to one correspondence between these things. If you put appropriate data on m, there's a one to one correspondence between these things. So you could try to define the quantization of the loop group space in terms of some appropriate quantization of m, whatever that should be. And Eckert and collaborators found a nice way of quantizing quasi-Hematonian spaces. 
So they noticed that these quasi-Hamiltonian spaces don't always have spin-c structures, but they do always give rise to a certain canonical D-cycle — there's always this canonical Morita bimodule, this D-cycle, at level the dual Coxeter number. So they called this a twisted spin-c structure, in their earlier papers. And in a fairly natural way, if you have a prequantum line bundle on your loop group space, this also has an analog downstairs on the quasi-Hamiltonian space: it takes the form of a Morita morphism between a trivial line bundle — sorry, a trivial Dixmier–Douady bundle — and the pullback of the corresponding Dixmier–Douady bundle on G. You start with a level k prequantum line bundle, you get this Morita morphism. And then Eckhard gave a definition of the quantization of M as, well, basically the class of this D-cycle: you take the tensor product of this — the analog of the spin-c structure — with the analog of the prequantum line bundle, and this gives you a D-cycle, a class in this group. So you could then define the quantization of the loop group space as the corresponding element in the Verlinde ring that the Freed–Hopkins–Teleman isomorphism gives you. I'm not sure I said that in the clearest way — feel free to ask me about it. Okay, so now, the initial question that I wanted to understand was how these two pictures are related to each other. Initially they seemed quite different to me, but I thought there should be some simple relationship between the two. So now I want to explain how that works; it's almost an immediate consequence of what I've told you so far. So let X be the push-forward of this D-cycle — this is the quantization of the q-Hamiltonian space, the quasi-Hamiltonian space. And then, according to the first theorem I told you about, when I apply this map I — that complicated map that I described — you get some element, some formal character of T. And the first theorem says that this element equals the image of X under the Freed–Hopkins–Teleman isomorphism. And then the second theorem I mentioned — the explicit description in the case that you have a D-cycle — tells you that I(X) is given by the index of some first-order elliptic operator on some space, and it turns out that this operator is exactly the operator that you get when you look at that index pairing. So it's exactly the same operator. And then the last step — I didn't write it, but the last step is just that when you apply the analytic assembly map, this is what connects the index of this operator to I(X), to the image of X under the Freed–Hopkins–Teleman isomorphism. Okay, and that's the end of the proof that these two approaches give you the same thing. Thank you. Thank you. That's a good point. No, it's not — no, unfortunately, sorry. Yes — unfortunately I can't give an alternate proof, at least not yet; I would love to be able to do that. The Freed–Hopkins–Teleman isomorphism is about G-equivariance. Yeah — I was sort of implicitly using that, but I didn't say it: everything in the description should be G-equivariant. The manifold should have a G-action, this vector bundle should be G-equivariant, the bimodule should have a G-action.
Yeah, and this map should be GAC overranged. Is it clear that like classes in GV covariance and K theory can always be realized versus the notion of insight? So I don't think it's known for sure yet. So the latest thing I heard was that, so maybe if Thomas Schick is here, we can answer. I don't know, but the latest thing I heard is that, so there are four authors including Thomas Schick have announced the proof that you can do this at least in the non-equivariant case. Yeah, I don't know about the equivalent case. So as far as I know, it's not, for my construction, I don't need it. So for my construction, I really just use it as a nice geometric package for producing cycle. It exists. Certainly for the groups I was talking about, it exists. I'm not sure about it in China. Is there any more questions? Let's give this to you again. Thank you.
I will describe a map from `D-cycles' for the twisted K-homology of a compact, connected, simply connected Lie group to the Verlinde ring. The induced map on K-homology is inverse to the Freed-Hopkins-Teleman isomorphism. An application is to show that two options for `quantizing' a Hamiltonian loop group space are compatible with each other. This talk is partly based on joint work with Eckhard Meinrenken and Yanli Song.
10.5446/59250 (DOI)
This is in the coffee room. So I think it's too much — maybe it's a little fast. Sorry. All right. Right, so first let me take the opportunity to thank the organizers for inviting me to this conference, giving me the opportunity to escape the breezy, warm sunshine of South Florida for the snowy mountains. Actually I'm not joking — I grew up in the Rocky Mountains, so it feels like home, actually. So I want to talk about some joint work that's in progress with Richard Melrose, a sort of nice theory of higher gerbes. So first I'll give a little review of gerbes and bundle gerbes, in particular those due to Murray, then talk about these higher versions, and last some relation to loop spaces and some of the things that Konrad was talking about in the last talk. So first, the idea of gerbes is that they're supposed to sit in analogy to line bundles. Line bundles, you know, categorify, in a sense, degree-two integral cohomology, in the sense that every complex line bundle has a Chern class, which is natural, satisfies these properties, and classifies them up to isomorphism — two line bundles are isomorphic if and only if they have the same Chern class. That should be c₁, by the way. Already off to a good start. Right, and of course the way to see this explicitly in Čech cohomology: you take a covering of your manifold — or space; actually all the spaces in this talk will just be topological spaces; you can think of them as manifolds, but everything is at the level of topological spaces. So take a covering over which your bundle is trivial, and then of course on overlaps you have the change of trivialization; that gives you a Čech class, explicitly a cocycle, and it's unique up to a coboundary. So that's your explicit representative of the Chern class in Čech cohomology, degree 1, which of course via the Bockstein homomorphism is isomorphic to degree 2 integral cohomology. Right, so let's think about gerbes, which are supposed to be in the same relation to H³. There are various versions of gerbes, going back to Giraud, I think, originally, and then developed by Brylinski; there's a version of gerbes due to Hitchin and Chatterjee, and then due to Murray. This is roughly in chronological order — I'm probably also missing some people — and also roughly in order of least to most geometric in flavor. So we're going to talk about Murray's version, bundle gerbes. So a bundle gerbe consists of the following data. First you pick a space Y with a map to your base space X, and in nice cases — if X is a manifold — you might like this to be a fiber bundle of some type; but all that's necessary is that this is a locally split map, meaning that it's surjective and admits local sections. So in particular this might be a fiber bundle, or you could take the total space of an open cover of X — the disjoint union of the open sets in the cover — as your total space. So that picture is part of this as well. And then you fix a complex line bundle over the fiber product, and I'll use this notation, Y with a bracketed 2, for the fiber product of Y with itself over X. And then, as Konrad mentioned, there's this gerbe product: whenever you have three points sitting over the same point of X, there should be a product taking the fiber of L over (y₁, y₂), tensored with the fiber over (y₂, y₃), into the fiber over (y₁, y₃), and this product should be associative whenever you have four points — that's the condition over Y^[4].
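In symbols, the data just listed is roughly the following (a sketch of Murray's definition as I understand it).

```latex
% Bundle gerbe over X: a locally split map \pi : Y \to X, a line bundle L \to Y^{[2]},
% and a "gerbe multiplication"
\mu \;:\; L_{(y_1,y_2)} \otimes L_{(y_2,y_3)} \;\longrightarrow\; L_{(y_1,y_3)},
\qquad (y_1,y_2,y_3) \in Y^{[3]},
% required to be associative over Y^{[4]}:
\mu\circ(\mu\otimes\mathrm{id}) \;=\; \mu\circ(\mathrm{id}\otimes\mu)
\quad\text{on } L_{(y_1,y_2)}\otimes L_{(y_2,y_3)}\otimes L_{(y_3,y_4)}.
```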
And then it's well known that to this data you can associate the Dixmier–Douady class — Dixmier; do I pronounce the x? Dixmier, Dixmier, OK — the Dixmier–Douady class in degree 3 integral cohomology. And I'll give sort of an alternate construction of it in a second. Right, so what are some properties of gerbes? So given the data, of course there are two maps from Y^[2] back to Y, forgetting either factor. A trivialization of a gerbe is an isomorphism — well, if we have a line bundle that actually sits over Y, not over Y^[2], and we pull it back in the two different ways by the two maps and take the alternating tensor product — if L is isomorphic to such a gadget, then we say the gerbe is trivial; that's a trivialization of the gerbe. There's a notion of inverses, products and pullbacks in kind of an obvious sense. The inverse: just take the inverse of the line bundle at the top. Products: you take the fiber product of your two fiber spaces and the tensor product of the two line bundles over the fiber product of this fiber product with itself. And pullback: if you have a map from X to X′ — I guess X′ to X in this picture — you can just pull back all the data over X′ by that map. And the Dixmier–Douady class is natural with respect to all these things: it's trivial if and only if the gerbe is trivial in this sense, inverses give you the negative of the class, it's additive with respect to products, and natural with respect to pullbacks. And then, you know, a question in this business is: the equality of the classes in cohomology is supposed to classify these things up to isomorphism. Originally the notion of morphism for bundle gerbes was a little bit too strong, and so that wasn't quite the right notion of isomorphism. So one way to say it is that the Dixmier–Douady classes of two bundle gerbes are equal if and only if they are stably isomorphic, which is to say that they become isomorphic in this strong sense after tensoring with some trivial gerbes. This is kind of a cheat — it's more or less saying the same thing as that the product of the one with the inverse of the other is trivial, which is then obvious if they have the same class. There's a better way to do this: there's a more refined notion of morphism developed by Konrad, in which you can say that they have the same class if and only if there is a 1-isomorphism between them in his sense — which I'm not going to spell out exactly. Right, so typical examples. And it's surjective. What's that? It's surjective. Ah — yes, that's true, right. Okay, so indeed — well, I'll talk about universality in a second — for every three-class you can think of, you can cook up a bundle gerbe using the path space as the fiber space; I'll mention that in a moment. So one example of bundle gerbes, as Konrad also mentioned, is the notion of lifting bundle gerbes. If you've got a principal G-bundle and G admits a central extension by the circle, then of course you can cook up a complex line bundle over G associated to the central extension. And if the principal bundle was called E, then over E^[2] — pairs of points in the same fiber — of course there's this difference map to G: there's a unique g that maps the one point of the fiber to the other.
And so you can pull back this line bundle, and this gives you the data of a bundle gerbe. And then the characteristic class of this bundle gerbe is precisely the obstruction to lifting the principal bundle to the central extension, that is, to a principal bundle with structure group G hat, the central extension. So that's well known. I want to give sort of an alternative point of view on bundle gerbes. So if we write down this sequence of fiber products, starting at the left with X, and then Y sitting over X, and then Y squared, Y cubed and so forth, then there are all these forgetful maps off of the higher products down to the lower products. This data defines a simplicial space over X. The numerology is sort of off by one: in the traditional way that you would enumerate simplicial spaces, this should be Y zero, this would be Y one, Y two and so forth. But this just means that we've got the data of these face maps from the higher degree spaces to the lower degree spaces, and there are also degeneracy maps going the other way, which are not important for this theory, satisfying the same relations that face and degeneracy maps of simplices satisfy. Or in fancier language you can say it's a functor from the simplex category into spaces. So yes, there I've relabeled things with the more abstract simplicial space notation. And then there's a notion of a simplicial line bundle on a simplicial space. I've quoted Brylinski and McLaughlin here; I think this probably actually goes back to Grothendieck. A simplicial line bundle on a simplicial space is the data of a line bundle sitting over Y one, and then there are these differentials; I used this notation on a previous slide. Given these face maps, if you've got a line bundle on one of these spaces you can pull it back by all of the maps from the next higher degree space and take the alternating tensor product of those things, and that's a notion of a differential on line bundles. So if you've got a line bundle over Y one, then there's a differential of that line bundle over Y two, which is just this alternating tensor product. A simplicial line bundle is the data of a line bundle here together with a trivialization of its differential over here, such that it induces the canonical trivialization of D squared of L. If you apply this differential two times, you'll see that all the factors in the tensor product cancel in pairs, so D squared of L is canonically trivial; the trivialization of DL should induce that canonical trivialization one step up. And in the case that all of these spaces in your simplicial space are the fiber products of Y zero over X, this data recovers precisely the data of a bundle gerbe: the trivialization of the differential over this space, which is Y cubed, is equivalent to the bundle gerbe product, and the fact that it induces the canonical trivialization over Y to the fourth is equivalent to the associativity of that product. Is it better to think of a central extension of the simplicial space rather than a line bundle? Well, you can quibble with the terminology, but this is what Brylinski and McLaughlin refer to as a simplicial line bundle, and I'm going to go with that. Right, so as I said, that recovers precisely the notion of a bundle gerbe, and this will be useful to us later. Right, so now I want to describe a little bit how to associate the Dixmier-Douady class to a bundle gerbe.
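In symbols (my notation, assuming face maps d_i from Y^{[k+1]} to Y^{[k]} given by omitting the i-th factor), the differential on line bundles just described is roughly

\[
\delta J \;=\; \bigotimes_{i=0}^{k} \big( d_i^{*} J \big)^{(-1)^{i}} \ \longrightarrow\ Y^{[k+1]},
\qquad \delta^{2} J \ \text{canonically trivial},
\]

and a simplicial line bundle is a pair (L, s) with L over Y^{[2]} and s a trivialization of \(\delta L\) over Y^{[3]} inducing the canonical trivialization of \(\delta^{2} L\) over Y^{[4]}; when the simplicial space comes from fiber products this is exactly the bundle gerbe data.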
So there's a well-known way to do this in Čech cohomology, but let me give you a different way to do it, which as far as I can tell is new. So I want to write down a double complex given this simplicial space and Čech cochains. So I've got Čech cochains as all my objects, and going off to the right I've got the Čech differential. And then in the vertical direction we've got this tower of spaces, and I'll call these, again, a differential; it's the same operation of pulling back by however many maps and taking the alternating tensor product. I claim you get a complex this way, and if your bottom map is locally split, then the complexes formed by all these vertical differentials are exact. So in particular, if you want to take the total cohomology of this double complex, it just degenerates immediately and gives you the cohomology of X down here; it degenerates at the first page of the spectral sequence. So you get the cohomology of X with coefficients in C star, and then by Bockstein that's isomorphic to integer cohomology. Now you could rightly ask what's going on in this complex, what do I mean here? You usually don't see maps between Čech cochains of different spaces. So there are some details being swept a little bit under the rug here. These Čech cochains are with respect to certain pairs of covers of our spaces; we call them admissible covers. They're just covers of X and Y to which the locally split map and its local sections are adapted, also when you pass to higher intersections, so some kind of special covers. And then if you take fiber products of those covers, that gives you covers of all these fiber product spaces. So there's a nice set of covers where you can actually map cochains from one space into another. And then it's the local sections of this locally split map that induce the chain homotopy contractions: not only are these vertical complexes exact, but there's a reasonably explicit, depending on your choices of sections, chain homotopy contraction in the vertical direction of all these vertical complexes. And so you can either just fix an admissible pair of covers and work with that, or you can take the direct limit over all admissible pairs of covers. And you can prove that over X and Y, that's equivalent to taking the direct limit over all covers. And so if you do that, then at the end of the day you're going to get actual ordinary Čech cohomology of X. Taking the direct limit before you take cohomology is fine, since direct limits and cohomology commute, so you can take the direct limit on the cochain level and then take cohomology if you like. So in terms of Y and X, we are going to compute just ordinary cohomology at the end of the day. For the intermediate spaces it's a bit strange what's going on, but that's not going to be that important. Right, so then as I said, one way to think of this is: we've got our line bundle over here, which has this simplicial trivialization in the vertical direction. So you can cook up the usual representative of this thing in Čech theory, the way that you usually do; so we get a cocycle here, something that goes to zero to the right. And then the simplicial triviality, the gerbe condition on it, means that you can arrange this representative so that it goes to zero above as well. So it's really a pure cocycle in this double complex.
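If I have the picture right, the double complex in question looks roughly like this (again my notation: Čech cochains with respect to admissible covers, Čech differential horizontal, simplicial differential vertical):

\[
C^{p}\big(Y^{[q]};\, \underline{\mathbb{C}}^{*}\big), \qquad
\check{\delta} : C^{p}(Y^{[q]}) \to C^{p+1}(Y^{[q]}), \qquad
\delta : C^{p}(Y^{[q]}) \to C^{p}(Y^{[q+1]}),
\]

with every column exact thanks to the local sections, so the total cohomology collapses to \(\check{H}^{*}(X;\underline{\mathbb{C}}^{*}) \cong H^{*+1}(X;\mathbb{Z})\).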
And then you can say that the Dixmier-Douady class of your bundle gerbe is just the image, in Čech H2 with C star coefficients, equivalently degree 3 integer cohomology of X, of not L but the class of minus L; for various reasons there's a minus sign introduced at this point. So if you want to see it explicitly, this is just a zigzag: these vertical complexes are exact, so L you can pull back from something here, then take D, then pull it back again, and alpha is your representative of the Dixmier-Douady class. So I hadn't seen this before, and it's sort of a nice way to view things. In particular, from this point of view you actually get something which, I don't know, maybe it's known, but I hadn't seen it in the literature, so I'd be happy if somebody could point me to a reference. It's clear from this picture that if you take a three class on your base space and you'd like to represent it as a bundle gerbe, and you have some candidate space Y that you would like to support your bundle gerbe, well, this gives you an answer to the question of whether Y is able to support a bundle gerbe with the given class: it is if and only if the class down here pulls up to be trivial in the cohomology of Y, and that's immediate from this zigzag picture. So in particular, if you take Y to be a space with no cohomology, like a contractible space such as the path space, then that will support any three class that you like. I think that's just a consequence of the fact that gerbes form a stack. There you go. All right, perfect. All right, so what about higher gerbes? So two-gerbes should be some kind of geometric objects that represent H4, and there are a few different versions of these that have been written down, one due to Danny Stevenson, outlined a little here. So you can say, well, gerbes have pullbacks, trivializations, morphisms and so forth, so let's just play the same game again. So we'll take this simplicial space coming from fiber products, and then instead of putting a line bundle over Y squared, let's put a bundle gerbe over Y squared. I'll denote that by blackboard bold L, so that's the data of a space Z sitting over Y squared and a line bundle sitting over Z squared, with all of the gerbe data and so forth. And then it should come with a trivialization of this pulled-back gerbe, this alternating product, over Y cubed. And then, okay, some more stuff. So you can't just say the two trivializations are the same: I didn't mention it, but gerbes form a two-category, so you can't just compare trivializations on the nose. The correct thing to say is that there should be a two-morphism relating the trivialization induced from here with the canonical one, and then there's some coherency condition on the pulled-back two-morphism over Y to the fifth, some kind of associator condition. And I don't know, maybe this is the wrong audience to complain to about these things, but if you're like me, this kind of thing inspires you to take a long walk or go get a sandwich or something. This is not the kind of thing that I digest very easily. Fine, but it works. So this kind of data cooks up a well-defined characteristic class in degree four cohomology with integer coefficients. But if you want to go higher and higher with gerbes, there are going to be higher and more complicated coherency conditions.
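The criterion just mentioned can be stated as follows (my paraphrase of what is on the slide): for a locally split map \(\pi : Y \to X\),

\[
\alpha \in H^{3}(X;\mathbb{Z}) \ \text{is the Dixmier-Douady class of a bundle gerbe on } Y \to X
\quad \Longleftrightarrow \quad \pi^{*}\alpha = 0 \ \text{in } H^{3}(Y;\mathbb{Z}).
\]

In particular any contractible Y, such as the based path space, supports bundle gerbes with arbitrary class.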
You have to go farther and farther out along this horizontal direction and figure out what the right conditions are. So that's one complaint about this version of two-gerbes. Another complaint that you might pose is that the roles of Y and Z are very asymmetric: Y sits over X, but Z is very much farther from X in this picture; Z is something sitting over Y squared, so it's far removed from X. And really the whole point of this talk is that you can do this in a way that makes the roles of Y and Z symmetric and interchangeable. Z? Z is just some space with a surjective map to Y squared admitting local sections, just some space that sits over Y squared; and then if you take the fiber product of Z with itself over Y squared, so that would be called Z squared, you put a line bundle over that. So Z is to Y squared as Y is to X. That's right, yeah. But this is not the generalization I expected; I'd like to just move everything up by one step in the previous picture. Yeah, that's kind of the same question, but that doesn't quite do what you expect; it's not clear that you can get down to a degree four class on X that way, at least not clear to me. Right, so let me describe what we call bigerbes. So this is a different notion of two-gerbe in which the spaces Y and Z are on an equal footing. So we start with just two maps, Y and Z sitting over X, both locally split maps. And then we pick a third space, call it W, that sits over both Y and Z, also admitting locally split maps to Y and Z, in such a way that the square commutes. So we just have a locally split commutative square. The minimal thing to take is the fiber product of Y and Z, but typically you're going to take some larger space. And then the point is we can start to fill out this diagram by fiber products. So I can take the fiber products of Z with itself over X vertically and of Y with itself over X horizontally. Then we can fill out the rest of this diagram with fiber products: if I take W, I can take fiber products with itself over Y and get these spaces vertically, and I can take fiber products of W with itself over Z and get these spaces horizontally. And then up here, the idea is that it sort of commutes: if I take fiber products of this W12 space with itself over Z squared, that's the same as taking fiber products of W21 with itself over Y squared. So there's a well-defined way to fill out this diagram with fiber products. It's not hard to do; you just sit down and think about it for a second. And this little square generates a whole quadrant of spaces, with all of these locally split surjective maps, a certain number of which are just forgetting factors. So we truncate off the Z's and the Y's and the X's and look at the W spaces. This is the data of a bisimplicial space, a bisimplicial space over X, because of course all the spaces come with maps to X. So the idea is that, just as a bundle gerbe is a simplicial line bundle, a bigerbe should be a bisimplicial line bundle. That should just mean that over the (2,2) space we erect a line bundle, and then we've got two different differentials now, associated to the horizontal and vertical directions.
So the data is a line bundle L equipped with trivializations of the horizontal and vertical differentials of L, in such a way that these induce the canonical trivializations of D0 squared L and D1 squared L up there (again, those are canonically trivial), and such that the two trivializations agree on this space: D0 D1 L and D1 D0 L are canonically isomorphic, so you can compare the trivializations over W33. So: compatibility of the trivializations with each other, as well as agreement with the canonical trivializations there. That's the data; that's a bisimplicial line bundle. And the claim is that this does the right thing. So it's straightforward to define inverses: again, we can just replace L by its inverse, and that gives you an inverse bisimplicial line bundle, or bigerbe. Products: if you have two locally split squares over X, you can take fiber products of all the spaces together and get a new locally split square, and then at the top you take the tensor product of the L's. Pullbacks are also straightforward to define. The right notion of trivialization, it turns out, is that L comes from one step to the left or one step down, and these are equivalent: if it comes from the left, then it also comes from down. So if there's a line bundle over W12 or W21 such that L is isomorphic to its differential in either direction, that's what you would call a trivialization of a bigerbe. And then the claim is that this has a well-defined characteristic class in degree four cohomology, which is natural with respect to these operations, and which vanishes if and only if L is trivial. And you can say that the classes of two bigerbes agree if and only if L and L prime are stably isomorphic. Again, that's kind of a cheat, because it's basically just saying that the product of one with the inverse of the other is trivial, which is, again, obvious if their classes agree. And then I claim that this generalizes in a very straightforward manner to higher degrees. I leave that as an exercise to the audience; but the point is that it's not a difficult exercise. Once you've seen how to do it in degree two, you know how to do it in degree n: you just start, instead of with a square over X, with a cube over X. Let me see if I can draw a cube. There we go. So here's X, and at the corners of the cube you have spaces; they're all directed, maybe I'll even get the arrows the right way. And it should be a commutative cube, and all the maps should be locally split. And then you can fill out by fiber products sitting above this thing, and over the (2,2,2) space you would put a line bundle and equip it with trivializations of its three differentials that are compatible in the obvious way. So that defines a bundle multigerbe, and then I can just change a word in the theorem: a bundle multigerbe of degree n has a well-defined characteristic class in degree 2 plus n with these same desirable properties. So you don't have to think too hard about the higher and higher categorical coherency conditions to define those objects. So that's the definition, and then you might well object that maybe these things don't exist. So are there any bigerbes or multigerbes? Oh, before I get to the existence question, let me just again say a word about how to get at the class in Čech cohomology. I'm only now realizing how awful this slide looks, but the point is that you can do essentially the same thing.
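As a compressed summary (the labels \(W_{j,k}\) for the fiber-product spaces and \(d_h, d_v\) for the two simplicial differentials are my shorthand), the bigerbe data is roughly

\[
L \longrightarrow W_{2,2}, \qquad
s_h : d_h L \xrightarrow{\ \cong\ } \mathbb{1} \ \text{over } W_{3,2}, \qquad
s_v : d_v L \xrightarrow{\ \cong\ } \mathbb{1} \ \text{over } W_{2,3},
\]

with \(s_h\) and \(s_v\) inducing the canonical trivializations of \(d_h^{2} L\) and \(d_v^{2} L\) and agreeing over \(W_{3,3}\); the resulting characteristic class lives in \(H^{4}(X;\mathbb{Z})\).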
So you can take Čech cochains of all these spaces, and in this case you get a triple complex. Not pictured is the Čech differential, coming out in a third dimension here, but we've got, again, because of the two simplicial structures, these horizontal and vertical differentials, and then, as I said, the Čech differential coming out in another direction. So we have a big triple complex of abelian groups here. And the claim is that the complexes in the horizontal and vertical directions, because of the locally split structure, are just exact. In fact I can say more: there are commuting chain homotopy contractions of the horizontal and vertical sequences. They commute with each other; they don't commute with the Čech differential, which is kind of important for the theory, but they commute with each other. So again, the total cohomology of this triple complex, if you try to write it down, degenerates immediately and just computes the ordinary cohomology of X. And then of course we've got a line bundle sitting here, and the bisimplicial structure of this line bundle means that its Čech differential is trivial and that its simplicial differentials in the horizontal and vertical directions are trivial. So there's some pure class in this triple complex sitting at this level, and the characteristic class of this bigerbe is just its image in, well, Čech H3 of X with C star coefficients, or equivalently integer H4. It's just the image of that pure cocycle in the cohomology of X. Or if you want, you can explicitly zigzag this thing down however you like: you can zigzag it down this way, or this way, or this way, or you can kind of stair-step down the middle. Right, so what about existence? And this will relate to loop spaces. So just going back to ordinary bundle gerbes for a second; I shouldn't walk in front of the screen. Let's suppose X is connected, and take for our fiber space the based path space. As I said, since this is contractible, you can represent any class in H3 by a bundle gerbe with this fiber space. And of course the map here is evaluation at the far endpoint: the paths all start at some fixed base point, and the map to X is evaluation of the far point. So if we take the fiber product of that with itself, we get two paths that start at the same base point and end at the same point, so we get a loop; so we can identify the fiber product with the based loop space of X. So what we get is a line bundle on the based loop space of X with some gerbe condition, some simplicial condition. And as Konrad talked about in the last talk, the gerbe product on the loop space, viewing the path space as your fiber space for a gerbe, becomes the fusion product, in the sense originally defined by Stolz and Teichner and further developed by Waldorf. So we can say L is a fusion line bundle on the loop space. And you can do the same thing one step up: if X is simply connected and you take both Y and Z to be the based path space, then W is the paths in the path space, or equivalently just based maps of the square into X that map one corner of the square to your base point. So of course that's a contractible space; all these spaces are contractible. Oh, I didn't write it down, but there are cohomological conditions you can write down on Y and Z and W.
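In symbols, the based-path construction being used is (assuming X connected with a chosen base point \(x_0\); the notation is mine):

\[
P X = \{\gamma : [0,1] \to X \ :\ \gamma(0) = x_0\} \xrightarrow{\ \mathrm{ev}_1\ } X,
\qquad (PX)^{[2]} \;\cong\; \Omega X,
\]

so a bundle gerbe with fiber space PX amounts to a line bundle \(L \to \Omega X\) together with a fusion product \(L_{(\gamma_1,\gamma_2)} \otimes L_{(\gamma_2,\gamma_3)} \to L_{(\gamma_1,\gamma_3)}\) for triples of paths with common endpoints, in the sense of Stolz-Teichner and Waldorf.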
And then this answers the question: if I have a four class on X, can I represent it as a bundle bigerbe using spaces Y, Z and W? In particular, if they're all contractible, then yes, you can. And the (2,2) space in this case, if you take the appropriate fiber products, is just the iterated based loop space of X, the double loop space of X. So you could say that every class in integer H4 is represented by a bundle bigerbe with all this data, where Y and Z are the based path spaces and W is the path-path space; or equivalently, it's a doubly fusion line bundle over the double loop space, so it has fusion products with respect to the two different loop factors. And versions of two-gerbes over the double loop space satisfying conditions of this sort were actually already written down by Carey, Johnson, Murray, Stevenson and Wang in this big paper there. And then the proposition is that you can do this as much as you like: if you have a k-connected base space, then you can represent any class in degree 3 plus k by such a multigerbe, or equivalently by a fusion line bundle on the loop space iterated 2 plus k times. So that's existence, but it's a little bit annoying that you have to assume this connectivity of X. And based loop spaces are okay, but we all really like the free loop space better, so what can you say about free loop spaces? Alternatively, you can take your space to be not the based path space but the free path space. Now it doesn't fiber over X, but it fibers over X squared, right, because I can take the map recording both endpoints, and so I get two points in X. But if you take the fiber product of the free path space with itself over X squared, then you have pairs of arbitrary paths that fetch up at the same two points, and so you can identify that with the free loop space. And the claim is that you can represent every class in H3, regardless of whether or not the base is connected, by a bundle gerbe of this sort. So this is just the bundle gerbe question: H3 is represented by a line bundle on the free loop space. The gerbe product is again the fusion condition, and then you need some additional condition; one way to say it is what we call a figure-of-eight product on the loop space. So this is another sort of product: fusion is associated to this picture, where you have loops related that way. If instead I have two loops, and one of them starts halfway through the other, then there's a third loop in this picture, the figure-of-eight loop. And there's an additional figure-of-eight product, where the fiber of the line bundle over this loop times the fiber of the line bundle over this loop should be isomorphic to the fiber of the line bundle over the figure of eight. And if you equip the line bundle with that additional structure, then you can represent H3 classes by line bundles with that structure. So the figure-of-eight condition is just yet another simplicial condition, where all of this data is now sitting over an additional simplicial space. The data is sitting over X squared, and we really want to get a class on X, and the figure-of-eight condition is just saying that when you pull the whole thing sitting over X squared back to X cubed, there's a triviality condition that lets you know that everything actually came from X to begin with.
So I won't go too far into that. And then you can do this in higher degrees as well: again, without the connectivity hypotheses on X, you can represent any integer class on X by a multi-simplicial and multi-figure-of-eight line bundle on the iterated free loop space. Right. And then I want to relate this a little bit to transgression, which Konrad mentioned in the previous talk. So going back again to the gerbe case: as Konrad mentioned, if you have a gerbe with fiber space the path space of X, or quote-unquote a fusion line bundle on the loop space, then the Chern class of the line bundle on the loop space is related to the Dixmier-Douady class of the gerbe down on the base by the transgression map. And in general, the transgression map is just this map from the cohomology of X to the cohomology, one degree lower, of the loop space, where you pull back by the evaluation map to the product of the circle with the loop space and then integrate over the circle; that kills one degree. Now this transgression map on cohomology of course loses information, since it forgets the simplicial properties; the map in general is neither injective nor surjective. So just talking about the Chern class of the line bundle upstairs is, as Konrad was saying, not enough data to keep track of the fact that it actually came from something on X. But there's a way of enhancing the cohomology of the loop space, building in the fusion and figure-of-eight conditions at the level of cohomology, so that the cohomology classes themselves remember that they came from X. In other words, you can say that there is a loop fusion cohomology such that transgression factors through it as an isomorphism: it's isomorphic to the cohomology of X, one degree less, and transgression factors through it. This goes back to previous work that Richard and I did. It's basically the idea of replacing line bundles by cohomology classes in general, but adorning them with fusion products, or, if you like, the equivalent of the gerbe product structure, this simplicial triviality structure. And this bigerbe or multigerbe machinery allows us to iterate this and say that on the iterated loop space there's a well-defined loop fusion cohomology, where the classes have a fusion product with respect to all the loop factors and a figure-of-eight product with respect to all the loop factors, to which you can transgress as many times as you want. Well, what's that? Yeah, k has to be bigger than n. So, on to the example: if E is a principal G bundle over X, then at the level of loop spaces, the loops of the total space of the principal bundle form a loop-G bundle over the loop space of X. And then of course the loop group has this U(1) central extension, as is well known, so there's a lifting bundle gerbe on the loop space of X, and it's of course the obstruction to lifting the loop-G structure over loop X to the central extension, that is, to lifting the looped principal bundle to a bundle with structure group the central extension of the loop group.
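The transgression map being described is, explicitly,

\[
\tau : H^{k}(X;\mathbb{Z}) \longrightarrow H^{k-1}(LX;\mathbb{Z}),
\qquad \tau(\alpha) = \int_{S^1} \mathrm{ev}^{*}\alpha,
\]

where \(\mathrm{ev} : S^1 \times LX \to X\) is the evaluation map; the claim is that \(\tau\) factors as an isomorphism through the "loop fusion" cohomology of LX just mentioned.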
And as Konrad talked about in the previous talk, in the case that X is spin, an enhancement of the structure bundle to the central extension is the right notion of a spin structure on loop space, going back to an idea of Atiyah and elaborations of it. So I claim that this data, this lifting bundle gerbe on the loop space, is really a bundle bigerbe, associated to the bisimplicial space where we take X, well, since I'm using free loops instead of based loops, it's going to be X squared. So in the one direction we have the free path space of X, fibering over X squared, and in the other direction we just take two copies of the principal bundle. And then sitting over here, mapping to both PX and E squared, is just the path space of E. So if you fill this out by fiber products, as in that previous slide, the thing that will support the line bundle is not quite the free loop space of E: because this is going to get looped twice, the right thing that sits up there is the fiber product of the loops on E with itself over the loop space of X. And that's the space over which the line bundle sits; this space maps to the loop group, not the loop space, by the difference map, and over the loop group we have the line bundle coming from the central extension. You pull that back, and that's the lifting bundle gerbe over the loop space. But it sits very nicely and naturally in this bigerbe picture, and then of course the point is that its Dixmier-Douady class, as a bigerbe, is the four class on X, the one half p1 of E, the half Pontryagin class, which is well known to be the obstruction to this lifting problem, and which transgresses up on the loop space to the three class on the loop space. And in the case that E is the principal spin bundle of a spin manifold X, this is the obstruction to X being so-called string: we say X is string if this cohomology class vanishes, and then we can solve this lifting problem. And this goes back to various authors. So what would we like to do in the future? Well, I didn't say anything about connection structures; I was just talking about topological spaces. But if you pretend that all your spaces are manifolds, then certainly you can equip the line bundles with connection structures, and you can zigzag down and get all the forms that you would expect in all the right places; and then these objects would have characteristic classes in differential cohomology and not just in ordinary cohomology. Right, I said that the characteristic classes classify these things up to stable isomorphism, which is really not that satisfactory, it's kind of a cheat. So I'd like to really understand what's the right notion of morphism for multigerbes that generalizes Konrad's notion of morphisms for bundle gerbes; maybe we can talk about that this week. Other further directions that would be interesting to explore relate to this fusion structure on the loop space or the iterated loop space. So these fusion bundles on loop space, as was mentioned in the previous talk, carry a lot more structure.
In particular there's the action of the diffeomorphism group, orientation-preserving diffeomorphisms of the circle, and indeed of its central extension; it would be interesting to understand the theory of fusion bundles equivariant with respect to this group. And then of course you can also ask about loop fusion K-theory of the loop space or of iterated loop spaces, which is something that Richard in particular has been thinking about. So that's all, thank you very much. Thank you very much. I have a question. Yeah, there's a, yeah, I hesitate to give you a direct answer right now, but I can think about that. I mean, it's possible that the conditions are simple enough that you don't have to go too far in the non-abelian direction, and so things might make sense, but I haven't thought about it. Yeah. So suppose that the symplectic form is an integer class, and the square of the two-form is a four-form with Z coefficients. So does your two-gerbe give rise to some kind of quantization, or maybe even a three-gerbe or a four-gerbe, some kind of sequence of higher quantizations? Well, the one thing that I can say is that there is a multigerbe using the path spaces, and I don't know if that... Yeah, you certainly have a line bundle over the loop space, the iterated loop space, because the path spaces are universal for supporting multigerbes. Yeah, that's another good question: how does this data give you a line bundle? Yeah. At the moment I've just talked about line bundles over topological spaces, but yeah, that's a good question, whether there's some transgressed complex structure on the loop space. Well, these are all questions we can address this week. I'm glad they're being worked on. Thank you. [Applause]
Complex line bundles are classified naturally up to isomorphism by degree two integer cohomology H2, and it is of interest to find geometric objects which are similarly associated to higher degree cohomology. Gerbes (of which there are various versions, due respectively to Giraud, Brylinski, Hitchin and Chatterjee, and Murray) provide such a theory associated to H3. Various notions of "higher gerbes" have also been defined, though these tend to run into technicalities and complicated bookkeeping associated with higher categories. We propose a new geometric version of higher gerbes in the form of "multi simplicial line bundles", a pleasantly concrete theory which avoids many of the higher categorical difficulties, yet still captures key examples including the string (aka loop spin) obstruction associated to (1/2)p1 in H4. In fact, every integral cohomology class is represented by one of these objects in the guise of a line bundle on the iterated free loop space equipped with a "fusion product" (as defined by Stolz and Teichner and further developed by Waldorf) for each loop factor.
10.5446/59252 (DOI)
This is a survey talk on the things you just heard about. The focus is something which wasn't mentioned, which is a certain isomorphism which occurs in K-theory and representation theory. The subject was conjectured by Connes and Kasparov. It's no longer a conjecture, so we'll just call it the Connes-Kasparov isomorphism. It has a lot to do with, it was inspired by, the study of discrete series representations, and in particular the relationship between discrete series and Dirac operators, and mostly I'll talk about discrete series and Dirac operators. You all know about Dirac operators, so that's a given; if you happen to know about the discrete series, well, I'm sorry, because I'm going to tell it to you again anyway. There will be C*-algebras in this talk, and I'm sorry, Richard. I'm not that easily offended. Actually, only two C*-algebras, and one is the C*-algebra of compact operators. I mean, that's like a puppy; nobody can object to that. [Inaudible remark.] In fact, as you see, that's a very relevant comment, so we'll get to that in a moment. So what this thing says is that two abelian groups are the same, and one of them belongs to the world of topology. It's quite easy to explain. There are going to be more than one K in this talk, sorry for that. Looking at the equivariant... Oh, really? K is going to be the group? Oh, no, it's already... There will be a Lie group G very soon, and it will have a compact subgroup K, and you can form G mod K, which is a vector space, a quotient of Lie algebras, and you can take the equivariant K-theory. And it's an abelian group; that's all I want to think about. It's an abelian group, and these gentlemen up here, particularly the second one, conjectured that, by means of a process which involves Dirac indices, this is the same as yet another K-theory, the K-theory of some convolution algebra made out of G. And the idea is that the left-hand side is very simple; it's just topology, because we have the Bott periodicity theorem, so we can really write down exactly what this is very easily. On the other hand, the right-hand side is rather mysterious, and to understand the K-theory of this C*-algebra, you would seem to need to know quite a bit of representation theory. In principle... What are the fonts? What letters did you write? C and... C-sub-r. There's a subscript there, but it's all in Roman italic. So in principle you need to know a great deal about this thing. To write down what this map is, and to say what the two terms are, is a relatively simple matter. That's a matter of opinion, of course, but compared to the competition, which is representation theory, the classification of tempered representations, the work of Harish-Chandra, compared to all of that, a simple description of tempered representation theory in these K-theoretic terms is nothing like the final detailed classification of representations, which is very complicated. So it's relatively simple, it appears. So you can get to work and try to understand what this right-hand side means. I'll say, unfortunately, only a little about that; there's only a little amount of time. But it appears to involve very subtle combinatorial issues in representation theory. On the other hand, this isomorphism can in fact be established,
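For the record, the isomorphism under discussion, stated loosely and up to the usual dimension shifts, is

\[
\mu \;:\; K_{K}^{*}\big(G/K\big) \;\xrightarrow{\ \cong\ }\; K_{*}\big(C^{*}_{r}(G)\big),
\]

where K is the maximal compact subgroup, G/K is viewed as the vector space it is diffeomorphic to, the left-hand side is equivariant K-theory (computable by Bott periodicity), and the map is given by a Dirac induction construction.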
in a way which is very difficult to interpret representation-theoretically. And the suggestion I want to make is: well, this is a beautiful isomorphism, which deserves to be admired on its own terms, but I think it's trying to tell us something about representation theory which we do not yet know, about the way representation theory and the Dirac operator interact. We know about the discrete series; this is something worked out by Parthasarathy and then by Atiyah and Schmid. We know about that, but we don't know the rest of the story when it comes to the Dirac operator. So there should be an interesting story to tell. I'm not going to talk about this right now in the introduction, but I want to mention something because it bolsters my argument that one should study this and derive from it a more detailed and informative statement about representation theory. This is rather beautiful, but what we're looking at here are just two free abelian groups generated by a set of discrete parameters; it's far from the whole story in representation theory. But studying this type of map, and the fact that this is an isomorphism, led to the following rather interesting story being told. So these are the groups that I'm going to be studying, reductive groups, but for what I'm about to show you, none of the following specializations are really relevant; they just make it simpler to tell the story. Let's say connected and linear. Later on, in a second parenthesis, we'll assume equal rank; I'll tell you what that means later. So the K which appears over here is the maximal compact subgroup. And this statement, let's call it star, and this is a point made by Alain Connes a long time ago, star is equivalent to an isomorphism which you might like less because it involves two C*-algebras instead of one. Actually I like having the two C*-algebras here; they're basically the same thing. Here is the C*-algebra of G, which I've yet to really tell you about, and here is the C*-algebra of another group, which is a much easier group from the point of view of harmonic analysis: it's just the semidirect product of K acting by the adjoint action on this vector group, the vector space G mod K. We just close up the parenthesis. What's the sub r? Just put me out of my misery. The sub r is there for your benefit, Richard. What it means is that this is a very concrete C*-algebra of operators on a Hilbert space: it's the C*-algebra generated by the left regular representation. So the r is a good thing; I'll get to that in a moment. So what Mackey said a long time ago, in a work which was not so well regarded, I guess, by representation theorists, is a beautiful, fascinating possibility, which is that if you look at the irreducible unitary representations of G, they should be the same as the irreducible unitary representations of this much simpler group. Mackey is famous for determining the irreducible unitary representations of semidirect product groups, and so thanks to Mackey, and thanks to the Fourier transform, this group is easy to understand and its irreducible representations are very easy to understand. It's no more difficult to understand the representations of this group than to understand the representations of compact groups, I guess we have to thank Hermann Weyl for that, and to understand Fourier analysis.
And what Mackey suggested is that these two spaces should be the same, more or less, at least at the level of measure theory, give or take a set of measure zero. On the other hand, here's a statement where, if you're talking about K-theories, it's rather important not to throw away a point: throw away a point from the space and you can change the K-theory dramatically. So putting these two things together, the isomorphism of Connes and Kasparov suggests an exact bijection. And I'll stick in a word or two here: we're only looking at what are called tempered unitary representations, whatever they are; we'll get to that in a moment. So this was something which Mackey suggested in 1975, and I think he was not warmly greeted by the representation theorists. I don't know if you were at that conference, Michel, when he spoke about this. I guess I was actually guilty of not having trust that this was true. Yes, well, so Mackey had the last laugh, because... That's quite correct. But for Mackey it's too late. And I think George and I were very misguided. This is in fact correct: at the level of sets, every single representation over here corresponds to a representation over here. I'm talking about tempered irreducible unitary representations. And this is a beautiful fact, and the final vindication is due to Alexandre Afgoustidis, so I'll stick his name on the board here as well. There are beautiful and simple statements here, ways of organizing the information that the representation theorists have catalogued. Of course, this is still as complicated as it was before, but it's conceptualized thanks to this isomorphism. So the hope is that in the same way there's a second conceptualization using the Dirac operator, using the sorts of mathematics that I'll be talking about now. I'm not going to say any more about this, and the reason is that we do not understand this at all from a geometric point of view. I have nothing geometric to say about it. We know it's correct, but we don't understand anything geometric about it. That's a problem for... Well, wait a minute. The next part, the Mackey correspondence? Yeah, we understand how to make this a bijection, because we understand, thanks to the representation theorists, the tempered dual exactly. This side is relatively elementary. So you can see, with a little bit of inspiration, which is what Alexandre Afgoustidis supplied, what's going on, and it's an exact bijection. Given a representation of the semidirect product, can you construct a representation of G? You can. There's a formula which we can write on the back of a postcard, a small postcard, and send it to the doubters. There is an exact formula, but it's not geometric; it relies on algebra, it relies on work of David Vogan, and if you unpack the formula it becomes very difficult to verify. Thank you, Mr. Stokman. There is something in between some of the pieces of that formula. I mean, the starting point is very easy. The underlying geometry is this: there's a, I don't want to talk about this, but you guys are... There's a deformation to the normal cone construction which glues together a whole bunch of copies of G with copies of this group here, and so that's a continuous, smoothly varying family of groups.
On the other hand, to go from this, it suggests, I suppose, that the duals of these groups vary smoothly; but the point is to create some rigidity, so that they're not just smoothly varying: they're all exactly the same as one another. That's a very complicated and interesting thing. It's worthwhile to look again at representation theory, particularly the very complicated but magisterial works of Harish-Chandra, and try to understand that work from a more conceptual point of view; this is one argument in favor of that. And this is one very beautiful theorem which comes out of these ideas. There's a direct link from this to this, thanks to observations made by Alain Connes; I had a small part to play in this. All right, great, so now let's continue. Thank you. So apart from trying to explain this a little bit, I hope to cover a little bit of preliminary ground for some of the afternoon speakers, for Hang and Yanli, who will talk about some of these things and maybe about some of the complicated bits of this story. I'm going to tell you about the easy bits of the story, because I get to go first, so I get to do the easy bits. So we're talking about irreducible unitary representations. Actually, not always irreducible, because, like Harish-Chandra, we're interested in this particular unitary representation of the group G, the regular representation, and in particular in how it decomposes into irreducible pieces. The irreducible constituents of this representation are what are called by Harish-Chandra the tempered representations, the tempered irreducible unitary representations of G. And we're going to approach these through a certain language, the language of C*-algebras, because the language of C*-algebras reveals some elegant statements. So here's the thing I was mentioning before. The r stands for reduced, which is not very informative; it doesn't stand for regular, but it ought to, because it's associated to the regular representation. It's just an algebra of bounded operators on L2 of G, generated by the smooth, compactly supported functions on G acting by convolution, let's say on the left. So this is the thing we want to study. In the world's simplest case, the group is abelian, and in that case this C*-algebra turns out to be isomorphic to the continuous functions vanishing at infinity on the space of tempered representations, which is the Pontryagin dual. So in that case, it seems that studying the C*-algebra is a reasonable thing to do if you want to study this collection of irreducible representations as a topological space; that's the basic idea. The work of Mackey and Afgoustidis that I'm erasing suggests that K, the maximal compact subgroup, should play an important role in understanding representations of G, and let me start by telling you again, of course we all know this, about the representations of G when G is compact, but from the C*-algebra point of view. I need two preliminaries, neither of which is very complicated. First of all, if you have a compact group, and you have a Hilbert space which carries a unitary representation, not irreducible, just some unitary representation, then, just like in the theory of Fourier series, it has an isotypical decomposition. I'm just writing this down so we have the notation at hand: it breaks up into pieces.
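Concretely, and as far as I can tell this is the standard definition being used on the slide, the reduced C*-algebra is

\[
C^{*}_{r}(G) \;=\; \overline{\lambda\big(C_{c}^{\infty}(G)\big)} \;\subseteq\; \mathcal{B}\big(L^{2}(G)\big),
\qquad \big(\lambda(f)\xi\big)(g) \;=\; \int_{G} f(h)\,\xi(h^{-1}g)\,dh,
\]

the norm closure of the algebra of left convolution operators; for abelian G the Fourier transform identifies it with \(C_{0}(\widehat{G})\), the continuous functions vanishing at infinity on the Pontryagin dual.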
There's an obvious map, as you'll see, I've written all of this down, from the left-hand side to the right-hand side, and it's an isomorphism; it's the isotypical decomposition. And for later purposes, let me just label the summands here as H-upper-sigma, like that. It's just the ordinary Fourier decomposition, if you like, of a Hilbert space, or of any space carrying a representation of a compact group. And of course, by Peter and Weyl, we know exactly how this works in the case of L2 of G: this is just a direct sum of irreducible representations. I'm calling them H-sigma; H-sigma means the Hilbert space that carries the irreducible representation sigma. As for the multiplicity space, it turns out to be another copy of H-sigma, as you know. And in a similar way, the C*-algebra that we're discussing here is nothing very complicated, it's just the same: the direct sum (oops, I forgot to tell you, that's item number two) of what are in effect matrix algebras, these things K of H, where K of H stands for the compact operators. So on any Hilbert space, K of H means the algebra of all compact operators, the ones generated by very simple maps like this. In this situation, for a compact group, the Hilbert space is finite dimensional, but we shall be venturing into infinite dimensions in just a moment, and this is what compact operators are in general. So let me call this operator, which is a rank-one operator, something like that. And this isomorphism here is very, very simple to explain. In one direction, if I have a function on G and I want to get an operator out of it, of course I can just apply sigma to it, which just means the integral of f of g times sigma of g, dg. And there's a beautiful map in the other direction too, as you know: if I have vectors v and w in H-sigma, I can make out of them a function on G. There's a normalizing factor, which turns out to be just the dimension of the representation, and then the matrix coefficient function. Well, this is the theory of Peter and Weyl and Schur orthogonality. All right. I'm telling you all of these things because, of course, I want to talk about discrete series, and a similar story is going to play out, so I'm just reminding you of these elementary things first. Here's the first tiny, epsilon piece of geometry. If you have a kernel function which is smooth and compactly supported on some manifold times a manifold, then the standard formula which makes an operator out of a kernel function, which is something like this, gives a compact operator. So in particular, in the compact situation, if you think about what this formula really means in the case of the regular representation, you see that the C*-algebra is a subalgebra of the compact operators. Compact operators are just like matrices, as you can tell from these elementary formulas involving rank-one operators. And of course, Hermann Weyl very famously worked out what the parameter set of sigmas is, and we'll get to that in just a moment. But first, let's discuss the discrete series. So these representations: this is not exactly Harish-Chandra's definition, as you'll see, but it's close enough. First of all, pi is tempered. It belongs to L2 of G.
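In symbols, for a compact group G, and modulo my conventions about which slot of the inner product is conjugate-linear, the two maps being described are roughly

\[
C^{*}_{r}(G) \;\cong\; \bigoplus_{\sigma \in \widehat{G}} \mathcal{K}(H_{\sigma}),
\qquad f \;\longmapsto\; \Big( \sigma(f) = \int_{G} f(g)\,\sigma(g)\,dg \Big)_{\sigma \in \widehat{G}},
\]

and in the other direction a pair of vectors \(v, w \in H_{\sigma}\) goes to the matrix coefficient function

\[
f_{v,w}(g) \;=\; \dim(\sigma)\,\langle v,\ \sigma(g)\,w\rangle,
\]

which, by Schur orthogonality, corresponds to a rank-one operator in \(\mathcal{K}(H_{\sigma})\).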
What that means is that it can be converted into a representation of the C*-algebra, and the main condition is this. Well, let me write this down; this part is sort of optional because it's actually automatic, but I'll put it in anyway: in the representation pi, the C*-algebra, acting as it does through this integral formula, here we go, acts through compact operators. And here's the big deal. The C*-algebra of G decomposes as the kernel of pi, with pi regarded as a representation of the C*-algebra, plus the annihilator ideal. That means that everything in the annihilator ideal multiplied by anything in the kernel is zero, and there's a direct sum decomposition; these are orthogonal ideals, and the thing just falls apart, as you see here. Of course, if you divide out by the kernel, then you get the image here. So what this means is that the C*-algebra looks like a summand of compact operators, plus the part of the C*-algebra which is not seen by the representation. And again, it calls to mind this description here. You could make this definition for any group in the world. In order for there to be any discrete series, the center of G has to be compact; that was one of the things I wanted to add to my list of assumptions. So there's no point in talking about these unless the center of G is compact, or unless you modify the definition and talk about discrete series modulo the center. Secondly, the problem of studying these representations becomes most interesting in the case of reductive groups, and I'm only discussing reductive groups here. And here's a little more geometry, which to my mind makes these representations all the more mysterious. Here's a way in which they further mimic the compact case. Let's define the following thing, the compact ideal inside this C*-algebra: I just mean the elements of the C*-algebra with the following property. Lambda stands for the left regular representation; when I make f into an operator on L2 of G, as in fact it already is, it acts as a compact operator. Well, not on all of L2 of G. No element of the C*-algebra of G acts as a compact operator on L2 of G, because if you think of lambda of f as a convolution operator, its support is far from compact, so we're far from the situation above. Instead, you can think of L2 of G as a representation of K by right translation, since we've already used the left translation, and this isotypical space is a subrepresentation of L2 of G. What it means to be in this compact ideal is the somewhat geometric condition that lambda of f acts as a compact operator on each of these isotypical summands. It's a fact that this compact ideal, I forgot to give it a name, is just the sum of all of the discrete series ideals, in other words all of the annihilators of the kernels of discrete series. It makes them a little more geometric, and it makes it rather remarkable that there could be any discrete series at all, because if you think about what such an operator looks like, it looks like an integral operator, K of (g1, g2), but with a translation invariance property. It seems that you're very far from any sort of compact support condition, because if G is not compact, it seems very difficult to build a G-translation-invariant, compactly supported kernel, effectively on G mod K times G mod K. Yet these things exist. It's very mysterious, and they exist.
You can imagine that it's not so easy to build these on the basis of this funny condition, and this is what it means: you need to find operators which act on L2 of G as compact operators, not on all of L2 of G, but on these isotypical summands. Okay. Any questions so far? It's a survey talk, so it's like a class; do any students have a question? That's the other one? It's K hat: K is the maximal compact subgroup of G, and so I'm decomposing L2 of G, for the non-compact group G, isotypically according to the representations of K. Any questions? From the beginning, you could have looked at the action of G times K on L2 of G. Yes. That's possibly a very good thing to do, and I'd have to think about that, but maybe this condition becomes more efficiently expressed in that way. Thank you. All right. These are the things that Harish-Chandra studied, and I'll tell you what he found out in due course. I'm going to run out of time. And first of all, back to compact groups, our friends, and let's try to bring Dirac operators into the story. Every compact group is reductive, by the way, and so now we're looking just at the compact, easy groups inside the collection of all real reductive groups. And this is a beautiful story which was first explored by the late great Raoul Bott, as people would say. So let me remind you, or tell you, what Raoul Bott had to say. First of all, before Bott gets into the picture, we have to hear from Hermann Weyl, who classified the irreducible representations of compact connected groups, at least, as you just said, connected. And the classification goes according to this famous and beautiful character formula. So the representations are given by parameters: according to Hermann Weyl, they're listed exactly by parameters which I'll call Weyl parameters. Discrete series representations are classified by Harish-Chandra parameters, so I wanted to give these things a name suggested by Harish-Chandra's, but not exactly the same. And such a representation, of course, has a character, and if you know the character, you know the representation, according to elementary theory. And the character is a class function, so if you know it on a maximal torus, you know the character everywhere. And so here's the formula for what it is when X is something in the Lie algebra of the torus and you exponentiate, like this. And it's given by some enormous expression, which has a numerator involving the Weyl group associated to G, and some signs, and then some exponentials like that, where phi is this mysterious parameter, which I haven't told you what it is yet, and then a denominator, which is a little mysterious as well. But it's so beautiful that you just have to write the thing down like that. Okay. It's a beautiful formula, and all I want to extract from this formula for the time being is that there's a term on the top which is an alternating sum. And G plays a role; there are going to be different Lie groups with the same maximal torus in just a moment. So G plays a role through the size of this group of permutations, this Weyl group, and then there are exponentials of w of phi, evaluated at X, and then there's some universal denominator, which depends on G, and I'll put an X here as well. Yeah, G gets used twice. Yeah, let me try and guess which letter I'm most likely to refer to, which is G here. And yeah, thank you. That's Weyl's formula. And it's important to know roughly what phi is.
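The formula on the board is the Weyl character formula. Written with a Harish-Chandra-style parameter \(\varphi\) (so \(\varphi = \lambda + \rho\) for a highest weight \(\lambda\), and up to the usual conventions about factors of i and choices of positive roots), it reads, for X in the Lie algebra of the maximal torus:

\[
\Theta_{\varphi}(\exp X) \;=\;
\frac{\displaystyle \sum_{w \in W_{G}} \operatorname{sign}(w)\, e^{\langle w\varphi,\,X\rangle}}
{\displaystyle \prod_{\alpha>0} \big(e^{\langle \alpha,X\rangle/2} - e^{-\langle \alpha,X\rangle/2}\big)}\,.
\]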
So φ is a weight, and it has certain properties which it is important to name. First, there is a kind of integrality — not exactly integrality, since φ is the differential of a character of the torus only up to a little shift, so I will call it G-shifted integrality. Then there is a non-singularity hypothesis, which means that φ should be fixed by no element of the Weyl group. And finally we want to avoid repetitions: if you move φ around by an element of the Weyl group you obviously get the same formula up to sign, and the way to get rid of that is to insist on a G-dominance condition. That is what a Weyl parameter is, and everyone knows roughly what the picture looks like: if you start to draw these, you get a series of lattice points — shifted, so that zero need not lie on the continuation of the lattice — inside some Weyl chamber. This is what you learn, what they teach you, in graduate school. So that is the beautiful formula, and, as I say, it is Weyl's; Bott is not yet part of the plot. Bott reinterpreted it in the language of Dirac operators, and let me do that very briefly. So, on to Bott. Let me assume for simplicity the following situation: the same maximal torus T sits inside two compact Lie groups, T ⊂ K ⊂ G. This is not absolutely necessary for describing what Bott did, but it simplifies things, and it also simplifies things to assume that K is exactly the fixed-point group of an involution of G. So I might take SU(5) and U(3) × U(2) inside it — something of that sort. The involution breaks the Lie algebra of G into two pieces, g = k ⊕ p, and we can study the Clifford algebra of p. Thanks to the various assumptions I made, p is an even-dimensional vector space, so the Clifford algebra is just the endomorphism algebra of another vector space S, the associated spinor representation. It is a representation of K, at least of the Lie algebra of K. And the basic fact which gets the whole thing going is that the character of S is just the quotient of the Weyl denominators of G and of K — that is what Bott noticed. [Question: so here you have to consider G and K at the same time?] Yes: G is the same as before, and the maximal torus sits in both. And here is a formula, basically a formula of Bott. Suppose you have a function f on G; G is a compact group, so f gives rise to a compact convolution operator. I am interested, however, just in the value f(e): the map which sends a G-invariant convolution operator to the value of f at the identity. That is a trace functional on C*(G) — a trace, something homological in nature. And here is a formula for it. There is a normalizing factor which is geometric: it is just the Euler characteristic of G/K. Then there is a sum, and the sum has several complicated constituents — but not that bad. I described Weyl parameters before; Weyl parameters index the irreducible representations of compact groups. Here I am going to sum over a larger set, which I will call the Harish-Chandra parameters. I will tell you what they are in just a moment.
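For bookkeeping, the setup just described is the following; the statement about the spinor character is the fact attributed to Bott, with the grading and sign conventions filled in by me:

\[
\mathfrak g=\mathfrak k\oplus\mathfrak p,\qquad
\mathrm{Cliff}(\mathfrak p)\;\cong\;\operatorname{End}(S),\qquad
\operatorname{ch}\bigl(S^+\bigr)-\operatorname{ch}\bigl(S^-\bigr)\;=\;\pm\,\frac{\Delta_G}{\Delta_K}\quad\text{on }T,
\]

where Δ_G and Δ_K denote the Weyl denominators of G and of K for the common maximal torus T.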
Associated to each Harish-Chandra parameter there is going to be a numerical constant, which is a sort of dimension: it is the dimension of the associated representation when the Harish-Chandra parameter is one of the Weyl parameters, and that formula is simply extended. And then there is a trace — an operator-theoretic trace, which I will describe on the next line. [Question: what is the d — is that the dimension?] Yes, as I said: this is the "dimension" of φ. The dimension is actually a polynomial function, as you know, on the dominant chamber, and what I want to do is extend that polynomial function to the whole space. That has the effect of making the "dimension" negative in some chambers, so it is no longer a genuine dimension — it is plus or minus a dimension, if you like. [For dominant integral parameters it is just the dimension of the representation?] Yes, just the dimension of the representation — strictly speaking divided by the volume of G, but if you normalize the volume to be one it is just the dimension. All right, now let me describe the trace. It is an ordinary operator trace — the sum of the diagonal entries — so it measures how a certain operator acts in a representation. Here is the representation: I take L²(G), tensor it with S* and with W_φ, and then take the K-invariant part. W_φ is, effectively, the representation of K associated to the parameter φ; the only nuance is that φ may not satisfy the correct integrality condition to exponentiate to a representation of K — but then S may not be a representation of K either, and when you tensor the two together, the product is a genuine representation of the group K. This is the trace I want to talk about — I should have said a supertrace, since S is a Z/2-graded vector space. And that is the formula. It has some interesting consequences. Oh, I have to say what a Harish-Chandra parameter is — that is rather important. It is the same sort of thing we were just discussing, a weight, but with the following properties: first, as with the Weyl parameters, a G-shifted integrality property; then a G-non-singularity property; but a K-dominance property — I should not have erased that section. So the picture is something like this: instead of looking at points in just one Weyl chamber, you are now looking at points in three Weyl chambers, like this. I am not going to say more about it now — we are still talking about compact groups; the noncompact case will be the punchline, if all goes according to plan. Of course these spaces here are not very complicated: what I am doing is forming the L² space of sections of an associated bundle on the homogeneous space G/K — the symmetric space G/K. And on this space of sections of a homogeneous bundle, since the spinors are floating around, a Dirac operator acts. By the usual Euler-characteristic argument, we can replace the trace by a much smaller trace: namely, we can just look at the action of f on the kernel of the Dirac operator.
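Putting the pieces together, the formula on the board should read roughly as follows; this is my reconstruction of Bott's trace formula from the description, with d(φ) the polynomially extended Weyl dimension and the supertrace taken over the K-invariants:

\[
f(e)\;=\;\chi(G/K)\sum_{\phi\in\mathrm{HC\ parameters}} d(\phi)\,
\operatorname{Tr}_s\!\Bigl(f\ \text{acting on}\ \bigl(L^2(G)\otimes S^*\otimes W_\phi\bigr)^{K}\Bigr).
\]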
And it follows from this formula, with a little bit of thought, that every irreducible representation sits in the kernel of some Dirac operator. It tells you where to look for representations: they are always realized on harmonic spinors — that is the only place you have to look — and it tells you quite a bit more as well. This is a formula of Bott. It is not as though we are doing anything Weyl did not do, because of the way it is proved. I think I will skip the proof — it is a routine calculation, not difficult; let me just write a line or two to explain where the Euler characteristic comes from. It certainly uses the Weyl character formula, so we are not stealing Weyl's thunder: this is Weyl, repackaged by Raoul Bott. The calculation is just this: you look at the bundles associated to the various spaces S ⊗ W_φ, summed over φ — now it is infinite-dimensional — and there is a small Z/2-graded calculation, with some cancellation between the two things I am about to write down. One of them is L²(G) tensored with S and with S*, and that is the same thing as the L² differential forms on G/K — and that is where the Euler characteristic comes in. So you calculate, using a little linear algebra and Weyl's formula, these vector spaces as K-vector spaces: they add up, as K-vector spaces, to the whole thing, with K now acting on the left. That is how it works. I want to make the point that it is very geometric, simple and direct, except for some combinatorial calculations which involve Weyl's formula for compact groups. Now let me go on to the discrete series. Any questions about that? So we leave Raoul Bott behind for a moment and turn to Harish-Chandra. What I want to say is that the story for the discrete series is exactly the same — but to say that, we have to throw away part of the picture. Let us remove the one hypothesis which has been preventing us from talking about discrete series in a meaningful way, namely the compactness of G. So let G just be a reductive group, but otherwise keep everything the same. What Harish-Chandra discovered is that the only time you even have discrete series is when there is a maximal torus in K which is also a maximal torus of G. So we can, so to speak, apply the previous formula — but that formula is written in the language of compact groups. On the other hand, any reductive group G — SL(n,R), for example; well, SL(n,R) is not a good example, let us take SU(2,1), a group with discrete series — has associated to it a compact form, here SU(3). There is always this compact form of G floating around, and we can write down the formula in the world of the compact form. And in fact you can make a formula of the same kind back in the world of G itself; this is one of Harish-Chandra's famous formulas. So let me assume that G is a group which has discrete series, which means that T is a maximal torus inside K which is also maximal in G. I want to write down a formula involving traces of operators, and in order to do that I need to know that the operators are not too big.
For me, "not too big" means compact; for others it means something smaller, like a smoothing operator. Strictly speaking you are not allowed to take the trace of a general compact operator, but it is not too bad — the smoothing operators are dense there. Anyway, if you add up all of the discrete series ideals, as I said before, you obtain an ideal of operators which act as compact operators — not on all of L²(G), that is impossible — but on any isotypical subspace, and in particular on anything that looks like one of these spaces. So it makes sense to take this trace. It makes sense in the non-compact context, where it would look crazy to take the trace of anything, provided you are in this mysterious compact ideal. And the formula is the usual formula. I will write it with the Euler characteristic of G/K, but in the non-compact situation G/K is a symmetric space of non-compact type — it is contractible — so that factor is actually one. Apart from that it is the same formula, exactly the same formula, and the sum is over all Harish-Chandra parameters, which means exactly what it meant a while ago for the compact group. In Harish-Chandra's world the left-hand side is not an operator trace; it is something others here know and like: an orbital integral — in one sense one of the easiest, in another sense one of the most complicated, since orbital integrals behave worst near the identity; it is essentially the orbital integral at the identity element. But the formula has exactly the same consequences as before. Namely: if you have one of these mysterious discrete series, where are you going to find it? How are you possibly going to catalog them? Each time you have a discrete series it contributes an ideal which looks like the compact operators to the C*-algebra, and inside the compact operators there is a little rank-one projection v ⊗ v*. The value at the identity of such a projection is what is called the formal dimension — that d — associated to the representation. And what the formula says is that the Euler characteristic, which is one, times the formal dimension is equal to an explicit number times a certain trace on a certain explicit space. What you see is that each discrete series representation is realized inside the kernel of a Dirac operator — excuse me; yes, I said it right — not for a fancy reason, just because of a direct analytic and combinatorial calculation like this. Actually, I should say that this is a theorem, and you can prove it by translating it into orbital integrals and citing Harish-Chandra. On the other hand, you can attempt to prove it directly — it is not as easy as I perhaps suggested by deliberately making the connection to compact groups, because G/K is not compact, so there is a bit of analytical worry to be had; that is something being examined at the moment. But the formula is very believable and elementary when you see it from this perspective.
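Here is the shape of the Harish-Chandra-type formula being invoked, as I understand the description; the right-hand side is the supertrace from the compact-group formula, with χ(G/K) = 1 since G/K is contractible, and the sign and normalization conventions are mine:

\[
f(e)\;=\;\sum_{\phi\in\mathrm{HC\ parameters}} d(\phi)\,
\operatorname{Tr}_s\!\Bigl(f\ \text{on}\ \bigl(L^2(G)\otimes S^*\otimes W_\phi\bigr)^{K}\Bigr),
\qquad f\in\text{compact ideal}.
\]

Applying this to a rank-one projection in the ideal of a single discrete series recovers its formal degree on the left-hand side.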
Let me close as quickly as possible by going back to Connes-Kasparov — there it is on the blackboard. This is a topological K-theory group of a certain vector space — an equivariant topological K-theory group — and it is a fancy, perhaps overly sophisticated, way of describing the free abelian group generated by the set of Harish-Chandra parameters: when you calculate it, that is what you find, a free abelian group on the Harish-Chandra parameters. So it is a ρ-shifted version of the representation ring of K. The map simply takes a parameter φ and sends it to the index of the Dirac operator manufactured not just from the fundamental spinor bundle S, but from S tensored with V_φ. And inside here sit the non-singular — the actual — Harish-Chandra parameters. Let me show you what is going on. When you calculate the equivariant K-theory, here is what you find. Here are the Harish-Chandra parameters I was trying to draw before. The calculation of this K-theory group does not care about non-singularity along these hyperplanes, so you get more parameters in K-theory than there are Harish-Chandra parameters, because you also pick up some singular φ's. On the other hand, if you have an actual Harish-Chandra parameter, it is almost an immediate consequence of the formula that it goes to the corresponding discrete series. Harish-Chandra examined the situation further — this is not the end of the story — and showed that in fact there is always a unique Dirac operator in whose kernel any given discrete series lies; that is the bijection between Harish-Chandra parameters and discrete series. So what this map does is, like a diagonal map, send each Harish-Chandra parameter to its discrete series; and then there are the other, singular, parameters. It is rather beautiful that you can obtain a much simpler group by ignoring the singularity considerations, and it looks as though you should be able to calculate the rest of this map in a similarly beautiful way. But when you sit down to actually do it — when you open the book of David Vogan, or the book of Tony Knapp, and do the bookkeeping to see exactly what the other generators in K-theory correspond to, to compute the rest of the C*-algebra, not just these compact-operator ideals — what you find is that it is immensely difficult, and a huge amount of the combinatorics of Weyl groups and root systems is involved. Nevertheless, at the end of the day, after you have done all of these calculations, an extremely simple situation arises: you do have an isomorphism. That is a strong suggestion that we are doing something a little bit wrong — that we are not looking at things in the right way. And — thank you for the reminder to stop — I will leave it right there. We do not know at the moment how to continue the story in an elegant fashion; but the discrete series works out so beautifully, and there are other clues, which I do not have time to describe, involving the non-discrete series — maybe Yanli will mention a calculation this afternoon which is very encouraging.
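In symbols, the map being described is the Dirac induction map of the Connes-Kasparov isomorphism; the following is a schematic statement in standard notation, with the K-theory degree and the ρ-shift suppressed, filled in by me rather than read off the board:

\[
\mu:\;K_K^*(\mathfrak p)\;\cong\;R(K)\;\longrightarrow\;K_*\bigl(C^*_r(G)\bigr),
\qquad
\mu(\phi)\;=\;\operatorname{Index}\bigl(D_{S\otimes V_\phi}\bigr),
\]

and on the non-singular (Harish-Chandra) parameters, μ(φ) is the class of the corresponding discrete series.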
Because of all of that, I think we have a right to expect that there is a better formula, which accounts for this isomorphism in full and which describes representations — tempered representations — in a way reminiscent of the work I mentioned before, but organized around the concept of the Dirac operator: quantization in the ordinary sense, if you like, rather than that strange, non-geometric idea of Mackey's that we still do not understand at all. All right, time is up, so thank you very much. [Applause.] Are there any questions? [Question: may I ask an only mildly provocative question? One thing I have thought a little about, which may be related, is to put a smooth subalgebra inside your C*-algebra.] Yes. [A smooth subalgebra, which should help considerably with the index theory.] Yes. What this would require — and maybe someone here knows about it, because it ought to be known — is an understanding of compactifications of reductive groups. This is something that has not been properly studied, as far as I know. Well, there is a huge literature on compactifications, but you are right, not a huge literature on the corresponding analysis. What one really wants, you see, is a compactification with good properties — namely, multiplicative properties. It is known how to do this for compactifications of compact Lie groups, but I do not think it is known in general; maybe it is not much harder. There are two interesting candidates, one called S and the other called C; the good answer probably lies between the two. The C is Harish-Chandra's Schwartz-type algebra; the S one is very geometric. [The S one is, as usual, too small? It corresponds to functions vanishing at the boundary of the compactification?] Correct. G is compactified, for example, by the wonderful compactification, and if you look at functions on G which vanish to infinite order on the wonderful boundary, that is what this algebra is. The wonderful compactification turns out to be a little small for these purposes — or so it seems — because, although it is wonderful, it is not good multiplicatively, and that is what you need if you want to understand the convolution. Let me say one more thing and then I will shut up. The other algebra is Harish-Chandra's. It is not geometric, in the sense that it involves a very complicated condition, as you know, and it is not, at least in any obvious way, associated to any compactification at all. I would guess that it belongs to momentum space rather than configuration space: if you look at the Fourier-transform picture, you can also think of this convolution algebra of functions on G as an algebra of functions on the tempered dual under pointwise multiplication. In that picture — and the tempered dual is basically a bunch of affine varieties — this thing becomes the Casselman-type algebra for the tempered dual: exactly the functions on the tempered dual which, under Fourier transform, vanish to infinite order at the boundary of the tempered dual (the tempered dual being just an affine variety, you can compactify it with a point or something). I am not saying I am answering your question — on the contrary, I am saying that what you are suggesting is very interesting.
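For readers who want the "complicated condition" spelled out, the usual definition of Harish-Chandra's Schwartz algebra — the C candidate in this discussion, as I understand it — is the following standard one, not taken from the transcript: with σ(g) the distance from eK to gK in G/K and Ξ Harish-Chandra's basic spherical function,

\[
\mathcal C(G)\;=\;\Bigl\{\,f\in C^\infty(G)\;:\;
\sup_{g\in G}\,(1+\sigma(g))^{m}\,\Xi(g)^{-1}\,\bigl|(L_uR_vf)(g)\bigr|<\infty
\ \ \text{for all }u,v\in U(\mathfrak g),\ m\ge 0\,\Bigr\}.
\]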
Both of these have simple and elementary descriptions, but neither of them is appropriate. This one is tailored for the tempered dual — for momentum space, if you like — and this one is tailored for configuration space; they both have a role to play, but neither is particularly satisfactory. Neither of these two obvious choices helps, and it would be truly satisfying to have a third. [Question: how does it stand relative to the Harish-Chandra algebra?] It is certainly bigger than this one — it is hard to imagine anything bigger than this which would still fit inside of this; it is a strain to show that it does sit inside, but it does. So this one is, I would say, huge. [But one should be able to find it in such a way that it gives an algebra.] Yes — that is another issue with the Harish-Chandra algebra: it is not obviously an algebra, geometrically speaking. I despair of the thing you want existing, but I would love to be proved wrong, because I am a convolution-algebra person — that is how I think — and a good convolution algebra would satisfy me immensely. I just do not know what it might be. [Question: related to that, is there a nice, perhaps obvious, smooth subalgebra for the semidirect product?] Yes: just take the natural version of this, with Schwartz functions in the vector directions and smooth functions on the compact group. That algebra plays simultaneously the role of this one and of this one — it has both of their good features — so it seems that, for sure, it is the correct smooth algebra in that context. [Since that should fit together with many copies corresponding to G, perhaps that would lead to the good answer.] No comment — that is an interesting suggestion; I will go and calculate after this. Good question, thank you. [Comment: it is the question I asked you some years ago, about a paper in which another kind of smooth subalgebra was proposed — I do not know whether this is the answer to it.] No.
This is an expository talk about C*-algebra K-theory for reductive groups. I’ll try to explain what it is, what it actually says about representation theory, and what else it suggests about representation theory, at least to a willing mind. The story begins with Harish-Chandra’s parametrization of the discrete series representations, and the realization of discrete series representations using the Dirac operator. I’ll discuss these things, and then touch on other parts of Harish-Chandra’s theory of tempered representations that are prominent from the K-theoretic point of view.
10.5446/59253 (DOI)
Orbital integrals and characters of representations. There will be some overlap with the morning lecture, but I promise I will tell you something new. This is joint work with Nigel and with Xiang Tang. So, to begin: let G be a Lie group — in this talk the main example will simply be G = SL(2,R) — and consider the convolution algebra, completed to the reduced group C*-algebra. After Nigel's talk I suppose there will be no complaints about the subscript r: it means that in order to get a C*-algebra out of the convolution algebra you have to complete it with respect to a particular norm, here the operator norm of the action on L²(G). If the group is compact it makes no difference which completion you choose, but for a non-compact group you do have to choose a specific one, and this is the one we will use today. What we want to study — and what I would like to explain — is the K-theory of this C*-algebra. The reason is that you should think of it as the topological K-theory of the dual space, in a generalized sense: when G is non-compact the unitary dual can be much more complicated, it is not clear what its topology is or how to do topology with it, and the K-theory of the reduced C*-algebra is a substitute. A simple example, which is basically where the idea comes from: if G is abelian or compact, then the reduced C*-algebra is isomorphic, or Morita equivalent, to the algebra of continuous functions on the dual — for abelian groups this is Pontryagin duality and the Fourier transform, and for compact groups you see it from the Peter-Weyl theorem. Okay, now another construction. As before, let G be a reductive Lie group with maximal compact subgroup K. One can build a Dirac operator — this is the construction used in the Connes-Kasparov theorem from this morning's lecture. If you have an irreducible K-representation — μ will denote its highest weight — then G/K is a manifold, which we may assume is spin, and you can form a G-equivariant Dirac operator acting on spinors on G/K twisted by the vector bundle associated to that K-representation. Different choices of the irreducible K-representation give different G-equivariant vector bundles on G/K, but in every case you get a G-equivariant Dirac operator, and it defines an index in a suitable sense.
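To fix notation for the two constructions just described — the subscript-r completion and the Dirac index class — here is a schematic summary; the symbols λ, S and V_μ are shorthand of mine:

\[
C^*_r(G)\;=\;\overline{\lambda\bigl(C_c(G)\bigr)}\;\subseteq\;B\bigl(L^2(G)\bigr),
\qquad
D_\mu\ \text{acting on}\ L^2\bigl(G/K,\;S\otimes V_\mu\bigr),
\qquad
\operatorname{Index}(D_\mu)\;\in\;K_*\bigl(C^*_r(G)\bigr),
\]

and for G abelian or compact, C*_r(G) is isomorphic (or Morita equivalent) to C_0 of the dual, as in the Pontryagin and Peter-Weyl pictures.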
This index lives in the K-theory of the group C*-algebra of G. If G itself is compact, this is just the equivariant index theory you already know, because the K-theory of the reduced C*-algebra of a compact group is its representation ring — and when G is compact we have K = G, so this is R(K). In other words, for compact groups the index is the equivariant index in the usual sense: it is simply kernel minus cokernel. In the non-compact case something different happens, and we want to understand what is going on. There are two new phenomena which you will not see for a compact group. The first is that when G is non-compact the space G/K is non-compact, and the kernel can be infinite-dimensional — it could, for example, be a discrete series representation — so you cannot simply apply the index theory of compact manifolds, where the kernel is always finite-dimensional. The other interesting phenomenon is that for some choices of μ you get a Dirac operator with no kernel and no cokernel at all; from the viewpoint of index theory on compact manifolds you would then get zero, and yet the index can be non-zero. This is something that was not discussed in Nigel's talk, and it is what I want to explain: how to understand this phenomenon. [Question about the grading.] Yes — the spinor bundle has a grading, so "kernel" here means the plus and minus parts. So what I am trying to do today is explain how to understand the analytic index of this D_μ, especially in this situation. These phenomena suggest that if you want to understand the index you should not just look at the kernel — so how do you extract useful information? The other thing I want to tell you is what kind of representation-theoretic information you can get from this approach. That is what I am going to do today, in 45 minutes; the topological side — trying to understand the same question using fixed-point theory — will be discussed in another talk, and mine will be purely analytic. So: we want to understand the analytic index, which lives in this K-theory group, and the first step is to see what the generators of this K-theory group are. Since the index always takes values in this K-theory group, what kind of generators does the group contain? Let us begin with the compact group case, which is always a good place to start. For a compact group, how do you construct generators of the K-theory of its C*-algebra? The K-theory is generated by projections — by idempotents. So how do you construct idempotents?
You just take any finite-dimensional irreducible representation — because the group is compact, the irreducible representations are all finite-dimensional — and form its matrix coefficient. By the Schur orthogonality relations, the convolution of the matrix coefficients of two different irreducible representations is always zero, while for the same representation the convolution returns the matrix coefficient times 1/dim(π), which is finite because the representation is finite-dimensional. That is to say, after multiplying the matrix coefficient by the dimension of the representation, the resulting element satisfies e * e = e: it is an idempotent, and it gives a generator of the K-theory. In fact all the generators of the K-theory of C*(K) can be obtained in this way — each generator corresponds to the matrix coefficient of an irreducible representation — and the two pictures are related by the Fourier transform: for a compact group the dual space is just infinitely many discrete points, and the K-theory is simply the functions of finite support on those points, one generator for each irreducible representation. So for compact groups we have a good understanding of the K-theory in terms of matrix coefficients. Next: how can we relate this to representation theory? One important piece of representation-theoretic information is the character. What you can do is this: pick an element h and define a map by integrating over its orbit — the orbital integral associated to the chosen element — and one can check that this defines a trace map on the C*-algebra. Because the group is compact there is no convergence issue to worry about; the integral makes sense. [Question: what does the notation G_h mean?] Sorry — G_h is the isotropy group, the centralizer of h. Now, the Borel-Weil theorem says that if you have a T-weight, you can form a K-equivariant Dirac (or Dolbeault) operator on the coadjoint orbit K/T, twisted by the line bundle coming from that T-weight, and the index of this operator — kernel minus cokernel, plus and minus — gives you the irreducible K-representation with highest weight μ. And what the orbital integral tells you is this: if you start from the generator given by the matrix coefficient and apply the orbital integral — equivalently, apply the Weyl character — then in both cases you get the character of that representation. So the compact case is nice and simple. What I want to understand is what happens in the non-compact case — [brief exchange with the audience] — and that is what I want to talk about today.
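A minimal sketch of the idempotent construction just described, using the standard Schur orthogonality relations for a compact group K with normalized Haar measure; the notation e_π is mine:

\[
m_\pi(g)=\langle\pi(g)v,v\rangle,\ \ \|v\|=1,
\qquad
(m_\pi * m_{\pi'})(g)=
\begin{cases}
\dfrac{1}{\dim\pi}\,m_\pi(g), & \pi\simeq\pi',\\[4pt]
0, & \pi\not\simeq\pi',
\end{cases}
\qquad
e_\pi:=\dim(\pi)\,m_\pi,\quad e_\pi*e_\pi=e_\pi .
\]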
What is going on in the non-compact case? [Audience: there is a shift in the parameter.] Yes, not exactly — there is a ρ-shift; thank you, I will not dwell on it. Okay: once we move from compact to non-compact there is a huge gap. Some facts about the non-compact case. Every irreducible unitary representation is, in general, infinite-dimensional, because the group is non-compact. Another new phenomenon concerns the dual space: for a compact group the dual is labelled by the dominant weights in the positive Weyl chamber — all discrete — but for a non-compact group there is also a continuous part. This is something Nigel pointed out, although the continuous part was not really discussed in the morning lecture. So there is a continuous part and a discrete part: the discrete part corresponds to the discrete series representations, and the continuous part to something else, which we want to understand; and, correspondingly, the group C*-algebra also decomposes into a discrete part and a continuous part. Here is the difficulty. For the discrete part you might try to mimic what we did for compact groups and take matrix coefficients, and that is fine, because the matrix coefficients of discrete series are L²-integrable. But if you take a matrix coefficient of a representation from the continuous part, it will no longer be L²-integrable, so if you want to take an orbital integral, or do anything with it, you are in trouble — the integral is not well-defined. This is a new phenomenon that you see for non-compact groups; I will give the example of SL(2,R) in a few minutes, but for now just keep in mind that there is a continuous part that has to be taken care of. Any questions so far? So what we are going to do today is explain how to detect this information, in both the continuous part and the discrete part. One standing assumption: we consider the case when a maximal torus T of K happens to be a maximal torus of G — the equal-rank case, rank(G) = rank(K) — and we take t in T to be a regular element. Now, one more thing: let f be a Schwartz function on G — Schwartz because G is non-compact, so we impose a rapid-decay property on functions on G — and define the orbital integral, following Harish-Chandra's definition. It is slightly different from the compact case: you consider the integral over the conjugation orbit of t, but you also multiply by the Weyl denominator Δ(t) — exactly the factor you see in the Weyl character formula, a product as α ranges over the roots. Why multiply by this factor?
It is a kind of normalization. The reason is that if you just consider the bare orbit integral its behavior can be bad, but after this normalization — although you are only multiplying by a known factor — the properties become much better, and the integral is easier to attack. Here is what Harish-Chandra tells us once you use the orbital integral with this factor. He gave an algorithm to compute orbital integrals — which is of course very difficult to carry out — and in particular he established some of their properties. For any Schwartz function f, the function F_f^T is smooth on the regular set of T; there are jumps when you pass from regular to singular elements, and these jumps are extremely interesting — they are known as Harish-Chandra's jump relations — but we will not talk about them today; just know that this is a nice function on T with some controlled jumps. Another thing Harish-Chandra teaches us is that not only is the orbital integral itself interesting, but if you apply a suitable differential operator — if you differentiate the orbital integral — you get a smooth function on all of T. This is also extremely interesting, because the orbital integral itself may not be defined at the singular points (there are jumps, so you cannot extend it to all of T), but after differentiation you do get a function defined on all of T, and its values at singular points turn out to be very interesting: in particular, the value at the identity gives Harish-Chandra's Plancherel formula, which is one of the deepest theorems in his work. One thing I learned just yesterday is that there is also another way to compute these orbital integrals, using hypoelliptic operators — it is good to know about it. Harish-Chandra's algorithm tells you how to compute the orbital integral in principle, but writing down an explicit formula is very difficult. What I am going to tell you today, however, is not the formula; rather, if we apply the orbital integral to K-theory, everything becomes quite simple. All the hard parts of the formula vanish: somehow, for reasons I do not fully understand, when you apply the orbital integral to classes in the K-theory of the group C*-algebra, the representations organize themselves in a very nice, almost magical way, and the image of the map turns out to be nice.
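For concreteness, here is the standard form of Harish-Chandra's normalized orbital integral being described; the precise measure normalization and the exact form of the Weyl denominator are my own filling-in:

\[
F_f^T(t)\;=\;\Delta^G_T(t)\int_{G/T} f\bigl(g\,t\,g^{-1}\bigr)\,d(gT),
\qquad
\Delta^G_T(t)\;=\;\prod_{\alpha>0}\bigl(1-t^{-\alpha}\bigr)\ \ \text{(up to a standard normalization)},
\qquad t\in T^{\mathrm{reg}},\ \ f\in\mathcal C(G).
\]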
This is what I am going to tell you today. Okay, any questions? So, from this slide, part one: the orbital integral F_f^T is very difficult to calculate in general, but when you apply it to elements coming from K-theory you get a nice picture. Here is the main theorem. Let G be a reductive Lie group with maximal compact subgroup K of equal rank, so that a maximal torus T of K is also a maximal torus of G. What we show is that if you apply the orbital integral to K-theory, then for any element of the K-theory of C*_r(G) you get a function on T — and not an arbitrary function on T, but exactly an element of the Weyl-invariant part of R(T), which is isomorphic to R(K). That is what we proved. You can see that this looks very similar to the Connes-Kasparov isomorphism from this morning, but in fact we go the opposite way: we go from the K-theory to R(K), whereas Connes-Kasparov goes the other way. And to define the map from the K-theory to R(K) you do not need a Dirac operator: essentially, you apply the orbital integral to the generators, and you find that the most complicated parts of Harish-Chandra's calculation are all equal to zero — that is our observation. Of course, the proof relies heavily on knowing how to decompose the group C*-algebra explicitly, which we take from the work of Wassermann and of Clare, Crisp and Nigel — so there are of course connections with Nigel's talk this morning. We also need the classification of the tempered dual, due to Knapp and Zuckerman, and we need Harish-Chandra's theorems on orbital integrals. So what I am saying is: if you put in everything we know about representation theory, many things cancel and you get a clean, nice statement. And everything here is tempered, since we are working with the reduced C*-algebra. That is the main theorem — any questions? Of course, it would be interesting to ask whether one can understand this isomorphism without already knowing the representation theory; I have no idea — here we assume we know all of it, which is bad news for me, but I have always wanted to see what this point of view can tell us about the representations. Let me show you an example, and recall the fact I mentioned: for a non-compact group the dual has a discrete part and a continuous part. The example is SL(2,R), and we consider only the tempered dual — basically just this picture. The tempered dual of SL(2,R) looks like this: there are the discrete series representations — every dot in the picture is one discrete series representation; these are the representations Nigel talked about in the morning lecture. That is the discrete part; what I will explain today is how to handle the continuous part.
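To record the statement in symbols before the example gets going: schematically, the theorem just stated is of the following shape (my notation; degree and completion issues suppressed) — for regular t ∈ T the orbital integral induces a trace on a dense subalgebra, hence a map on K-theory, and assembling these maps over t gives

\[
F^T:\;K_*\bigl(C^*_r(G)\bigr)\;\longrightarrow\;\text{functions on }T^{\mathrm{reg}},
\qquad
\operatorname{Image}\bigl(F^T\bigr)\;\cong\;R(T)^{W_K}\;\cong\;R(K),
\]

so that the orbital integrals detect the K-theory completely and identify it with the representation ring of K.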
For SL(2,R) there are two continuous families of representations: the spherical principal series and the non-spherical principal series. In the picture of the tempered dual you see two half-lines; at the end point of the non-spherical family the representation decomposes into two pieces, so the topology there is a bit strange. That is what the tempered dual looks like. And what I want to tell you is that if we apply the orbital integral to all of these — once you pass through K-theory, the representations somehow get organized in a nice way — the image of the map turns out to be isomorphic to R(K). That is the theorem I want to explain. On the other hand, because the tempered dual decomposes into a discrete part and a continuous part, the C*-algebra decomposes accordingly, at the level of C*-algebras. Each dot — each discrete series or limit of discrete series representation — contributes a copy of the compact operators; those are the discrete summands. Then there are the two continuous families: the spherical principal series contributes a C_0 of a half-line — a quotient of the real line by the Z/2 flip — and the non-spherical principal series contributes the crossed product of C_0(R) by Z/2, which is why its end point splits into two dots in the picture. So the C*-algebra decomposes according to the picture of the tempered dual. [Question: how is the Z/2 acting?] By the flip, yes. [Are the discrete series those pairs, plus or minus?] Yes — every dot corresponds to one; no, sorry, not plus/minus pairs of a single dot: they are labelled plus one, minus one, plus two, minus two, and so on. So next I am going to explain how to construct the generators in K-theory for the discrete part, how to construct the generators for the continuous part, and what their images are under the orbital integral. Any questions?
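As I understand the description, the decomposition being drawn is, up to Morita equivalence and with gluing and normalization details left aside, of the following shape (the labelling is mine):

\[
C^*_r\bigl(\mathrm{SL}(2,\mathbb R)\bigr)\;\sim\;
\underbrace{\bigoplus_{m\in\mathbb Z\setminus\{0\}}\mathfrak K}_{\text{discrete series}}
\;\oplus\;
\underbrace{C_0\bigl([0,\infty)\bigr)\otimes\mathfrak K}_{\text{spherical principal series}}
\;\oplus\;
\underbrace{\bigl(C_0(\mathbb R)\rtimes\mathbb Z/2\bigr)\otimes\mathfrak K}_{\text{non-spherical principal series}},
\]

with Z/2 acting by the flip s ↦ −s, so the spectrum of the last summand is a half-line whose end point is doubled — the two limits of discrete series.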
[Question: what is at the corner between them?] That part you can just ignore — I am not going to say anything about it. I will just focus on this part: the two continuous lines and the discrete part. Let me begin with the discrete series part, which you have already seen many times in this morning's lecture. Discrete series representations behave much more like representations of a compact group, so they are easier to handle. As I said, in this talk I will only do the example of SL(2,R), but you can see how to do it in general. For SL(2,R) the discrete series representations are parametrized by their Harish-Chandra parameters, which are just the non-zero integers: the positive integers give the holomorphic discrete series and the negative integers the antiholomorphic ones, so they are labelled by integers m with zero missing. For every discrete series representation you can define its matrix coefficient in the usual way, and the matrix coefficient is not too bad, because it is L²-integrable. You can check that if you take the matrix coefficient and multiply it by the number d_π — a number you have already seen in this morning's lecture: it is something like the dimension of the discrete series, except that the representation is infinite-dimensional, so you have to make sense of what "dimension" means; it is the formal degree, sometimes called the Plancherel measure of the discrete series — then you get an idempotent in the C*-algebra. So for every discrete series representation, its matrix coefficient gives one generator in K-theory. And if you apply the orbital integral to those generators, then by a calculation — by throwing in Harish-Chandra's work — you find that the answer is just e^{imθ}. This is great, but notice that it works for all non-zero m, while, as I said, this K-theory group is isomorphic to R(K), and K is S¹ in this case, so R(K) is spanned by e^{imθ} for all integers m — and the m = 0 term is missing from this picture. So you wonder: we know the contributions coming from the discrete series representations, but where does the m = 0 term come from? It must come from the continuous family, and that is what I am going to explain. In order to deal with the continuous part, the thing to do is to use what are called wave packets. Recall the picture: there are two continuous families, each parametrized by s on a half-line — a family of principal series representations. If you just pick a single one of them, you can of course define its matrix coefficient, but that matrix coefficient is not L²-integrable, so there is no way to construct a generator out of it. In other words, taking the matrix coefficient of a single representation from the continuous family is a bad idea; instead, you should consider the matrix coefficient of the whole family of principal series representations.
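Before turning to the continuous part, here is the discrete-part statement in formulas, writing t_θ for the rotation by θ in T ≅ SO(2); the sign and normalization conventions are my guesses rather than the speaker's:

\[
e_m\;=\;d_{\pi_m}\,\langle\pi_m(\cdot)v,v\rangle\ \in\ C^*_r\bigl(\mathrm{SL}(2,\mathbb R)\bigr),
\qquad e_m*e_m=e_m,
\qquad
F^T(e_m)(t_\theta)\;=\;e^{im\theta}\quad(m\in\mathbb Z\setminus\{0\}),
\]

so the discrete series account for every character of T except the trivial one, which must be produced by the continuous part.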
What does that mean? Here is what I mean. Take h to be a Schwartz function on the half-line — the half-line is essentially the parameter space of the principal series, and Schwartz just means rapid decay. The family of principal series representations is parametrized by s; there are infinitely many of them, and they are all isomorphic as K-representations: they differ as G-representations, but you cannot distinguish them as K-spaces. So you can choose vectors in a consistent way: identify all of them with one fixed K-space and choose one K-finite vector there; this is what I mean by a family of vectors v_s, normalized so that each has norm one. The wave packet associated to the choice of the function h and of this vector is then defined as follows. The ingredient ⟨π_s(g⁻¹)v_s, v_s⟩ is the matrix coefficient, a number; h(s) is the chosen function on the half-line; and μ(s) is the Plancherel measure — for SL(2,R) it is something explicit like s·tanh(πs), some concrete function on the half-line — and you integrate over s. Harish-Chandra showed that if you choose h to be a Schwartz function on the half-line, then the resulting wave packet is a Schwartz function on G. Notice what has happened: a single matrix coefficient is not L²-integrable, so it has no chance of being a Schwartz function on G; but this weighted family of matrix coefficients — something like a matrix coefficient averaged against h — is a Schwartz function on G. This is our replacement, for the continuous part, of the matrix coefficient of a discrete series. Unfortunately, if you want to compute the orbital integral of a wave packet, it is still very hard: you have to plug in Harish-Chandra's formula, and there will be a huge number of terms. For SL(2,R) it may be possible to calculate, but in higher rank it becomes more and more complicated. Nevertheless, at this point we can use an idea from index theory. Recall one special phenomenon, namely where the two dots come from: for the non-spherical family of principal series, parametrized by s from 0 to infinity, when you take s = 0 the principal series becomes reducible — it breaks into two irreducible G-representations, known as the limits of discrete series. We can define wave packets as before, but, as I said, if you consider a single wave packet, computing its orbital integral can be very complicated. What we are going to compute instead is the orbital integral of the difference of two wave packets; to define them, I need to choose a function h on the half-line.
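A sketch of the wave-packet construction just described, in notation of my choosing; the exact form of the Plancherel density μ(s) depends on the series and on normalizations, so the tanh factor mentioned above should be taken as illustrative:

\[
W_{h,v}(g)\;=\;\int_0^\infty h(s)\,\bigl\langle\pi_s(g^{-1})v_s,\;v_s\bigr\rangle\,\mu(s)\,ds,
\qquad h\in\mathcal S\bigl([0,\infty)\bigr),\ \ \|v_s\|=1,
\]

and Harish-Chandra's estimates guarantee that W_{h,v} lies in the Schwartz algebra of G.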
I also need to pick two vectors v and w. [Question about the two vectors: the underlying space is the same?] Yes — the K-space is the same for the whole family, and you choose a consistent family of vectors; I will say more about the choice in a moment. This kind of integral is something Harish-Chandra would, I think, never have considered, because from his point of view it is useless: if you take a discrete series representation and consider the difference of two matrix coefficients, for two different choices of vector, then applying the orbital integral you always get zero — you get nothing. And even here, if you consider the difference of two wave packets, in general you also get zero: the two pieces of information cancel. But it turns out they do not always cancel; sometimes you get something non-zero, and in SL(2,R) I will show you when that happens. Namely: if you choose v from the "plus" piece and w from the "minus" piece — that is, from the two limits of discrete series sitting at s = 0 — then the orbital integral of the difference of the two wave packets gives you the value h(0) of the function at zero; whereas if you choose both vectors from the same piece then, no matter what h is, you always get zero. So you see the phenomenon: for two different choices you get something non-zero, but you only recover the value of h at zero — you lose all the rest of the information about h. From this viewpoint, if you restrict to these special elements motivated by index theory, you lose a great deal of information — you will not see the whole family, you will not know h(2) or h(3) — but you still capture h(0), the point where the reducibility happens, so you still get something meaningful. That is what happens for the limits of discrete series in the continuous case. And, by the way, if you know Harish-Chandra's machinery you can of course calculate the orbital integral of each wave packet independently; you find that the formula is quite difficult, with many, many terms — but if you take the difference, most of the difficult parts cancel, and what is left is exactly the value of h at zero. Why do we consider the difference of two wave packets? This does not come out of nowhere: it is suggested by index theory, where one always considers the kernel of D⁺ minus the kernel of D⁻ — a supertrace. So next I am going to tell you, for the continuous part, how to construct the generator.
I will have to speed up a little. Take h to be a Gaussian-type function — something like e^{−x²} — on the half-line. With the two vectors v and w you can then form a two-by-two matrix of functions, parametrized by the line, with values in the Hilbert-Schmidt operators on the representation space of G. So you should think of this idempotent, first, as an idempotent over the dual space: constructing the idempotent directly as a function on G would be hard, but over the half-line it is simpler. In fact, the construction of this idempotent is the same as the construction of K-theory classes using the heat kernel — the same matrix trick. In any case, having constructed an idempotent living over the dual space, you apply the inverse Fourier transform — because it is a function on the dual space — just as with the wave packets, and you get a two-by-two matrix each of whose entries is a function on G, in fact a Schwartz function on G. So you obtain a two-by-two matrix over the C*-algebra of G which is an idempotent, hence a K-theory element. This is how we construct the generator for the continuous part: for the discrete part you take matrix coefficients, and for the continuous part you take this. And by the calculation I just showed you, if you apply the orbital integral to this class you get h(0) — and h(0) equals one. So where does that one come from? As I said, applying the orbital integral to the discrete series generators gives e^{imθ} for all non-zero integers m, with zero missing; this is where the missing constant term comes from. So this is the generator in K-theory for the continuous part, and applying the orbital integral to it you get one. One more remark, by the way: for the spherical family you do not get anything in K-theory — that family sits over a half-line, and the K-theory of a half-line is zero. That is a defect of working through K-theory: you lose some information — for example, you will not be able to see the spherical family at all — but you can still organize all the representations in such a way that there is an isomorphism. So in the decomposition, that spherical piece is invisible, and we will not construct any generator from it. I have not put everything on the slide, but we have to handle this part and also this part.
This is the discrete series representation, and this is the wave packet. So the generators of the K-theory are given by the ones coming from the discrete series representations, together with this one, coming from the wave packet. Then you apply the orbital integral: this one gives the e^{imθ}-type terms, and that one gives the other value. So this shows — we use the orbital integral to verify that the K-theory of this group C*-algebra is indeed the same as that one. It matches. You throw something away, but you get something nice, and you even get extra information: what you obtain for this one is the character of the discrete series representation, and for that one, once you divide by the denominator, you get the character of the limit of discrete series. And this is true not just for SL(2,R): in general, if you throw in a discrete series representation you get the character of the discrete series, and if you throw in the continuous family you get the character of the limit of discrete series. Which one? Just a moment — yes, this one. That is because this is the relevant generator: for this idempotent P you take one vector from one limit of discrete series and another vector from the other limit of discrete series, put them together to get the idempotent P, and when you apply the orbital integral to it, what you get is this. Okay, so this is the example of what we did. To sum up: you do not see this piece, and the other pieces fill in the holes; actually the map for this piece is the one Nigel did not discuss in this morning's lecture. So we basically understand what is going on for the continuous series. Okay, so what is the application — how is this related to other things? It is related to the Connes–Kasparov map from the morning's lecture, the map from R(K) to the K-theory of this group C*-algebra. What we are able to see is the other direction: it turns out this gives the inverse map. But to show that it really is the inverse map there is a lot of work to do. It is not enough that, by applying the orbital integral, you can calculate all the generators and see that they match with R(K); to show that this gives the inverse map you basically need to understand, for any Dirac operator D_μ, what the corresponding class is, and there is some calculation to do to verify that it is an inverse. But it is an inverse map. So this is how it is related to Connes–Kasparov. Basically, we use the orbital integral to build the inverse map to Dirac induction, with some input from representation theory and Harish-Chandra's work, and also from Atiyah–Schmid: given any K-representation, you can form a G-representation and do the calculation.
Atiyah–Schmid says that if μ is regular, in a suitable sense, then Dirac induction gives you an irreducible discrete series representation. But, also due to Atiyah–Schmid, when μ is singular you can get zero — that is to say, from the Atiyah–Schmid viewpoint you will not see this continuous family. So the orbital integral can detect more: in Atiyah–Schmid's case you see the discrete series, but even in the singular case the orbital integral detects the character information of the representation. Okay, so that is how it is related to Atiyah–Schmid. The last thing I want to explain is how this relates to Connes–Moscovici. As I said before, to every discrete series representation one associates a real number d_π, which should be thought of as a dimension — the "dimension" of that discrete series representation, even though it is infinite-dimensional; you have to make sense of what kind of dimension is meant. For these representations this was worked out by Connes–Moscovici: they apply the L²-index theorem — L² because there is always a discrete group Γ such that the quotient space is compact, so you can apply index theory there — and you get a real number. This real number is meaningful: it is the formal degree, with respect to Harish-Chandra's Plancherel measure. It is a genuine generalization of the dimension, because for a finite-dimensional representation you can simply take the dimension, but when the representation is infinite-dimensional, what do you mean by its dimension — how do you extract a meaningful number? Let me explain how this is related to the orbital integral; the relation is through L'Hôpital's rule. In the compact case, or whenever the representation is finite-dimensional, everything on this line makes sense without quotation marks: the dimension of the representation equals the value of the character at the identity, because for a compact group the character evaluated at the identity gives the dimension. And the character at the identity — by definition the character is this quotient, this expression divided by that one — as t goes to the identity both numerator and denominator go to zero, so you have to apply L'Hôpital's rule to compute it. So d_π is obtained as follows: by Harish-Chandra, when you take the derivative of the orbital integral and evaluate at zero you get something, and when you take the derivative of the Weyl denominator and evaluate at zero you get something non-zero. So in the finite-dimensional case everything makes sense. But when G is non-compact, what does this d_π mean — how do you get it? First of all, it is no longer the value at the identity, because as t goes to the identity the character blows up: the character looks like e^{imθ} over e^{iθ} − e^{−iθ}, for example, and as θ goes to zero the numerator goes to one while the denominator goes to zero.
So this one actually goes to infinity, and the two sides are no longer equal — although this one still equals that one, just by definition. And you cannot simply apply what we had before: to compute this you cannot use L'Hôpital's rule, because as t goes to the identity the denominator goes to zero but the numerator does not. If your calculus students did this, of course it would be wrong — but if you do it brutally anyway and just apply L'Hôpital's rule, you get exactly this number; you can recover the formal degree from this L'Hôpital computation. That is why the derivative of the orbital integral at a singular element turns out to be interesting: it is related to the formal degree. So this is a generalization of the Weyl dimension formula to the discrete series. I think I am running out of time — I have one more example, but I will skip it. Thank you.
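As a small aside (my own illustration, not part of the talk), the compact-case L'Hôpital computation just described can be checked symbolically for SU(2), where the Weyl character sin((n+1)θ)/sin(θ) of the (n+1)-dimensional irreducible representation tends to the dimension n+1 at the identity:

```python
import sympy as sp

theta = sp.symbols('theta', real=True)
n = sp.symbols('n', integer=True, positive=True)

# Weyl character of the (n+1)-dimensional irreducible representation of SU(2),
# restricted to the maximal torus: numerator divided by the Weyl denominator.
chi = sp.sin((n + 1) * theta) / sp.sin(theta)

# At the identity (theta = 0) this is a 0/0 limit; computing it, as in
# l'Hopital's rule, recovers the Weyl dimension formula: dim = n + 1.
print(sp.limit(chi, theta, 0))  # expected output: n + 1
```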
In the 1980s, Connes and Moscovici studied the index theory of G-invariant elliptic pseudo-differential operators acting on non-compact homogeneous spaces. They proved an L2-index formula using the heat kernel method, which is related to the discrete series representations of Lie groups. In this talk, I will discuss the orbital integral of the heat kernel and its relation with the Plancherel formula. This is a generalization of the analytic index studied by Connes-Moscovici to the limit of discrete series case. In recent work, Hochs-Wang obtained a fixed point theorem for the topological side of the index.
10.5446/59254 (DOI)
Thank you very much for the introduction. I would like to thank the organizers for the invitation and for letting me speak at this wonderful conference. Today I am going to talk about topics very similar to those of Nigel and Yanli; this is joint work with Peter Hochs. The motivation comes from understanding index theory and representation theory, and what happens at their intersection in the context of K-theory. My talk divides into two parts. The first part is an introduction to representations and K-theory, and the second part is about a fixed point theorem and its applications to representation theory. The first part concerns tempered representations and the K-theory of the reduced group C*-algebra. Throughout this talk I will consider a connected semisimple Lie group with finite center — of course a special case of a reductive group — and the key example is SL(2,R). The motivation for relating representations and C*-algebras mainly comes from Dixmier's book on C*-algebras: studying unitary representations is essentially the same as studying representations of the maximal group C*-algebra. So suppose we have a unitary representation π, that is, a continuous homomorphism from the group to the unitary operators on some Hilbert space. Studying such a representation is essentially the same as studying representations of the maximal group C*-algebra, which is a universal C*-completion of L¹(G); it is determined by a dense subalgebra, the compactly supported smooth functions on G. And we have the Fourier transform, π(f) = ∫_G f(g) π(g) dg: given a function f on G, its Fourier transform gives a bounded operator on the Hilbert space, and studying these representations of the C*-algebra recovers the unitary representations of the group. Today we focus only on tempered representations, so let me recall what a tempered representation is. A unitary representation π is tempered if all of its matrix coefficients satisfy a certain growth condition as functions on G. Matrix coefficients were defined in the previous talk; concretely, for vectors x and y in the Hilbert space — say members of an orthonormal basis — the function g ↦ ⟨x, π(g) y⟩ is regarded as a function on G, and if all these functions belong to L^{2+ε}(G) for every ε > 0, then we say the representation is tempered. In particular, the tempered representations contain the discrete series. We are interested in tempered representations because they are closely related to the reduced group C*-algebra: given a tempered representation we can again define the Fourier transform, and one can show that the resulting representation extends to a representation of the reduced group C*-algebra, which arises as a quotient of the maximal group C*-algebra. What I am going to do next is to review the structure of the reduced group C*-algebra for semisimple Lie groups, and for that I will first recall the theorem of Knapp and Zuckerman on the classification of irreducible tempered representations.
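Before that, for reference, the two formulas just used can be written as follows (a reconstruction from the discussion above, in my own notation):

```latex
% Fourier transform of f in C_c^\infty(G) in the representation \pi:
\pi(f) \;=\; \int_G f(g)\,\pi(g)\, dg \;\in\; \mathcal B(\mathcal H),
% temperedness: for all matrix coefficients and all \varepsilon > 0,
\big(g \mapsto \langle x,\, \pi(g)\, y\rangle\big) \;\in\; L^{2+\varepsilon}(G).
```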
The theorem says that every irreducible tempered representation π of a semisimple Lie group is a summand of a so-called parabolically induced representation, induced from a subgroup of G denoted by P, with Langlands decomposition P = MAN. One induces σ ⊗ λ ⊗ 1, where σ is an irreducible representation of the factor M of this decomposition of the parabolic subgroup, and λ is a representation of A, the non-compact abelian part. I recall this because the structure of the reduced group C*-algebra follows this classification closely, by the theorem mentioned in the previous talk. In those papers a precise computation of the reduced group C*-algebra is given in terms of these induced representations: it is a direct sum over parabolic subgroups P and representations σ of M, and each direct summand is an algebra of compact operators on some Hilbert module. In the bracket of this compact-operator algebra, σ is fixed and λ varies, so this should be regarded as a Hilbert module over C_0(Â), the algebra of functions of the parameter λ. In this decomposition, W_σ is a finite group, arising as the stabilizer of σ in the Weyl group; it is a finite group, and the notation means that we take the fixed-point subalgebra of the compact operators under W_σ. Using this identification one can compute the K-theory of the reduced group C*-algebra. It sits in the degree given by the parity of the dimension of G/K — only the parity matters, because K-theory is 2-periodic. The reduced group C*-algebra turns out to be a direct sum over P, where P is required to be a so-called cuspidal parabolic; by definition this simply means that M has discrete series, and σ is taken to be a discrete series representation of M. The K-theory is then freely generated by generators depending only on the cuspidal parabolic subgroup and the discrete series representation σ of M. Let me explain the theorem using the example of SL(2,R), which was also discussed in Yanli's talk; let me use the board. A question: do you just assume that M is compact? — Oh, sorry: M is not necessarily compact; I will give an example where M is not compact. So for SL(2,R) we have two kinds of parabolic subgroups: one is the minimal parabolic subgroup, with A the diagonal subgroup, and the other is the whole group G. These are the cuspidal parabolic subgroups, because in both cases M has discrete series. In the first case M = {±I}, so the relevant representations σ are just the two characters of Z/2; in the second case M is the whole group G, and σ runs over the discrete series of G — we know SL(2,R) has discrete series. I use Ĝ to denote the tempered dual, all irreducible tempered representations of G, organized by the pairs (P, σ). For the minimal parabolic subgroup, the corresponding part of the tempered dual looks like a half-line with the endpoint included, together with another half-line with two points attached to the end; for the parabolic subgroup G itself, the representations are just the discrete series.
Okay, so in these cases, what is the reduced C*-algebra of G? Up to Morita equivalence, the first component corresponds to C_0(R)^{Z/2}, the second corresponds to the crossed product C_0(R) ⋊ Z/2 — just as in the previous talk — and the component for P = G is just given by a bunch of copies of C, one for each discrete series. If we take the K-theory, we obtain a generator [P, σ] associated to this crossed product, and one generator for each discrete series representation. That is the example, and we will keep it in mind. Okay, so now I move on to the second part, and I will introduce the main result, which is a fixed point theorem; it is about orbital integrals and a fixed point formula. There is some overlap with Yanli's talk, but for the purpose of the recording I am going to say it anyway. We consider the orbital integral, which is defined as a map whose domain is the continuous compactly supported functions on G, defined by integrating over the orbit — that is, over G modulo the centralizer Z_g of the group element g inside G. We have to assume that the group element g is semisimple; when G is a linear group, semisimple simply means that the matrix g is diagonalizable. In this case, the map extends to a continuous trace on the Harish-Chandra Schwartz algebra — this is due to Harish-Chandra. I denote the Schwartz algebra by C(G); since the orbital integral is a trace and it is continuous, it defines a map on K_0. The special thing about this algebra is that it is a dense, holomorphically closed subalgebra of the reduced group C*-algebra — this was mentioned in Nigel's talk this morning. Because of that, the trace map extends to K_0 of the reduced group C*-algebra. We will pair this trace with an index by considering the equivariant index of G-invariant elliptic operators. Generally speaking, the setting is this: let M be a complete Riemannian manifold, on which the group G acts properly and isometrically with compact quotient. In this case one has an equivariant index map for G-invariant elliptic operators: let D be a G-invariant elliptic operator on M; every such operator defines an element of the equivariant K-homology of M, and there is an equivariant index map from the equivariant K-homology of M to K_0 of the reduced group C*-algebra. In the special case when M and G are compact, this is just the usual equivariant index map into the representation ring. Our main theorem computes this trace, given by the orbital integral, on the equivariant index. The assumptions are that G is semisimple and the group element g is semisimple, and we give a cohomological formula for the number obtained by applying the trace to the equivariant index in K-theory (of course we take the class in even degree, otherwise the trace is zero). It turns out this trace is given by a fixed point theorem: roughly speaking, it is an integral of a compactly supported cut-off function multiplied by the Atiyah–Segal–Singer integrand. Recall that in index theory for elliptic operators on compact manifolds, the index is given by an integral over the tangent space TM of characteristic classes involving the symbol of D and the Todd class.
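For reference (my own reminder, with the usual sign and orientation conventions, which I am not reproducing from the speaker's slide), the compact-case formula just alluded to is the Atiyah–Singer index theorem:

```latex
\operatorname{ind}(D) \;=\; (-1)^{n}\int_{TM}
 \operatorname{ch}\big(\sigma(D)\big)\,\operatorname{Td}\big(TM\otimes\mathbb{C}\big),
 \qquad n = \dim M .
```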
And in the equivariant case, still for compact manifolds, one has the formula given by the Atiyah–Segal–Singer integrand, without this extra factor. In the non-compact case, where the fixed point set M^g may not be compact, we insert a cut-off function to make the integral convergent: c^g is a compactly supported continuous function on the fixed point set such that the integral of c^g(h·x) over h in the centralizer equals one, for every x. So this is a generalization — an equivariant version — of Atiyah's L²-index theorem. We get this fixed-point integrand if g belongs to some compact subgroup of the group, and the answer is zero if g does not belong to any compact subgroup. In other words, if g is elliptic we have a fixed point theorem, and if g is not elliptic this number is zero. Is that because it has no fixed points? That's right, yes. We proved this theorem using the heat kernel method, and I will omit the proof. What I am going to do now is discuss an application to representation theory, namely a character formula. We assume that the rank of G coincides with the rank of the maximal compact subgroup K of G; in that case G has discrete series, and there exists a compact Cartan subgroup T which, up to conjugation, can be viewed as a maximal torus for both K and G. The corollary is the following. Let g be a regular element of this torus, and let π be an irreducible tempered representation of G; as I said, by the Knapp–Zuckerman classification it is labeled by a cuspidal parabolic subgroup P and a discrete series representation σ of M, and we only consider λ the trivial character of A — that is the only case we can compute. What we obtain is the following: we apply τ_g to the K-theory generator [P, σ] — recall this notation means a generator of the K-theory of the reduced group C*-algebra. Under the equal-rank assumption the dimension of G/K is even, so we are in K_0 and the trace makes sense. The conclusion is that this number equals Θ^π(g), the Harish-Chandra character of the corresponding discrete series or limit of discrete series: representations π arising in this form, with λ trivial, are exactly discrete series or limits of discrete series. We will see an example later. Yes — this is only for g a regular elliptic element in the torus, so it does not determine the whole character; it determines the character restricted to the elliptic elements. Let me add a remark, which will bring us to the conclusion of the talk. By Knapp and Zuckerman, the induced representation attached to P, σ and the trivial character of A may be reducible, and it decomposes into irreducible representations π_j; limits of discrete series arise as direct summands inside such induced representations. And the character Θ^π(g) is essentially the same as Θ^{π_j}(g), up to a plus or minus sign. About the proof: we prove this formula using the fixed point theorem, applied to M = G/T. In this case G acts on M properly and isometrically, with, of course, compact quotient.
And we use a twisted Dolbeault operator on G/T: G/T has a complex structure, so we take the Dolbeault operator on this manifold, twisted by the line bundle given by the Harish-Chandra parameter — which satisfies the condition that the parameter minus half the sum of the positive roots is integral, so that this really is a line bundle over G/T. And we use the fact that the index of this operator is the generator [P, σ]; this essentially follows from the surjectivity part of the Connes–Kasparov isomorphism. Okay, so that is all; the last thing I want to show is the example in this case, which is very similar to Yanli's result, except for the presence of the Weyl denominator. When we apply τ_g to this generator we obtain Θ^{π^+}(g) or Θ^{π^-}(g), where π^+ and π^- are the two limits of discrete series, up to a sign. And if we apply τ_g to a discrete series generator, we obtain the corresponding character value, where n stands for the parameter of the discrete series D_n^+ or D_n^-, and g is a regular element of SO(2). That is all I wanted to say; thank you very much for your attention. Thank you. Is there any question? — I'm sorry, but I thought they would all be the same? — Yes, they are all the same up to a sign. In this case one of them is 1/(2i sin θ) and the other is −1/(2i sin θ), and the distribution of the plus and minus signs depends on the choice of positive root system, because the Harish-Chandra parameter is singular in that case. Thank you.
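To summarize the two main statements of this talk in symbols (a schematic transcription of what was said, not the precise statements of the paper; AS_g(D) denotes the Atiyah–Segal–Singer fixed-point integrand, c^g the cut-off function, and Θ^π the Harish-Chandra character):

```latex
\tau_{g}\big(\operatorname{ind}_{G}(D)\big)
 \;=\; \int_{M^{g}} c^{g}\cdot \mathrm{AS}_{g}(D),
\qquad \int_{Z_{g}} c^{g}(h\cdot x)\,dh \;=\; 1 \quad (x\in M^{g}),
```

and, for g a regular element of the compact Cartan T and π a (limit of) discrete series summand of the representation induced from a cuspidal parabolic P = MAN with σ a discrete series of M and trivial A-character,

```latex
\tau_{g}\big([P,\sigma]\big) \;=\; \Theta^{\pi}(g).
```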
K-theory of reduced group C∗-algebras and their trace maps can be used to study tempered representations of a semisimple Lie group from the point of view of index theory. For a semisimple Lie group, every K-theory generator can be viewed as the equivariant index of some Dirac operator, but also interpreted as a (family of) representation(s) parametrised by the abelian factor A of the Levi component of a cuspidal parabolic subgroup. In particular, if the group has discrete series representations, the corresponding K-theory classes can be realised as equivariant geometric quantisations of the associated coadjoint orbits. Applying orbital traces to the K-theory group, we obtain a fixed point formula which, when applied to this realisation of the discrete series, recovers Harish-Chandra's character formula for the discrete series on the representation theory side. This is a noncompact analogue of the Atiyah-Segal-Singer fixed point theorem, in relation to the Weyl character formula. This is joint work with Peter Hochs.
10.5446/59255 (DOI)
I would like to thank the organizers of this wonderful conference for the opportunity to give this talk. Today I will talk about G-invariant holomorphic Morse inequalities. This is connected with the work of Vergne and Paradan that Vergne discussed in her talk; we will see how later. First I will give an introduction about Morse inequalities, holomorphic Morse inequalities and G-invariant holomorphic Morse inequalities; then I will state the main result more precisely, and then I will sketch the proof, in more or less detail depending on the time. We start with a compact complex manifold — everything is smooth here — with a metric, and a holomorphic Hermitian vector bundle on it. We will look at the Dolbeault operator, which is the complex analogue of the de Rham operator, and at the Kodaira Laplacian, which is a Dirac-type Laplacian. We define the quantization in the usual way from this Laplacian. There is another point of view on this quantization: since the Dolbeault operator squares to zero, it has a cohomology, the Dolbeault cohomology, and the Hodge theorem tells you that the harmonic forms are the same as the cohomology. So in this talk I will speak about cohomology groups, but you can think of them as the pieces of the quantization. Now take a Kähler manifold, that is, a complex manifold with a symplectic form satisfying the compatibility condition that ω(·, J·) is a metric — this is what Kähler means. Assume we also have a prequantum line bundle, that is, a line bundle equipped with a Hermitian metric whose Chern curvature gives the form ω. (Chern curvature, just as a reminder: on a holomorphic Hermitian bundle there is a unique connection preserving both the holomorphic and the Hermitian structure, and the Chern curvature is the curvature of this connection.) Is this weaker than being Kähler? No, no — this is exactly the prequantized Kähler setting. In this context, when you have such a line bundle, you can look at the semiclassical limit, which consists in taking increasing powers L^p of the line bundle and looking at what happens as p tends to infinity; it is semiclassical because 1/p plays the role of Planck's constant, which goes to zero. Now we go back to the symplectic category for just one slide. In the symplectic category you can define the quantization from a Dirac operator in the same way as before, and if moreover you have a group acting on all the data and preserving everything, then the quantization becomes a representation of your group, and you can look at its G-invariant part; this is very much studied. To study it, one introduces the reduction of M, defined from a so-called moment map, which is a map from M to the dual of the Lie algebra of G; the reduction is the quotient of the preimage of zero under this moment map. The big question is the link between quantization and reduction, and the main statement is the Guillemin–Sternberg conjecture, which says that quantization commutes with reduction. This was proved by various authors — by Meinrenken for abelian G, and by Meinrenken and by Tian–Zhang for non-abelian G. Sorry — is this with the bundle E as well? Yes, yes — you can do it with E too, I guess; here it is for the line bundle.
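As a reminder (my own summary of the standard definitions being referred to, with notation that is not the speaker's), the Dolbeault quantization and the "quantization commutes with reduction" statement read:

```latex
Q\big(M, L^{p}\otimes E\big) \;=\; \sum_{q=0}^{n} (-1)^{q}\, H^{0,q}\big(M,\, L^{p}\otimes E\big),
\qquad
Q(M, L)^{G} \;\cong\; Q\big(M_{G},\, L_{G}\big), \quad M_{G}=\mu^{-1}(0)/G .
```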
Now, going back to the Kähler setting, with a complex structure, one can define the reduction in the same way and show that there is an isomorphism corresponding to the one above. In this talk, what we are going to see is that these results are no longer true if the line bundle is not positive — that is, when the ω defined from the curvature is no longer a Kähler form — but that, as far as holomorphic Morse inequalities are concerned, quantization and reduction do commute. So let me now recall the standard Morse inequalities. These live in a topological setting, on a smooth manifold — not complex anymore. The inequalities link topological properties of the manifold with the properties of a smooth Morse function on it. There are two forms. The strong form says that if you take the alternating sum of the dimensions of the cohomology groups — a truncated Euler characteristic, where you do not go all the way to the top degree but stop at some point — then this truncated Euler characteristic is controlled by the same alternating sum of the numbers of critical points of the corresponding index. The weak form is the degree-by-degree statement: each Betti number is controlled by the number of critical points of that index. A proof of these inequalities was given by Witten, using a deformation argument: he introduced a real parameter t and deformed all the data by conjugating with e^{tf}. He then showed that the deformed de Rham Laplacian has the form: the Laplacian, plus t² times the squared norm of df, plus t times another zero-order operator. By studying this operator as t tends to infinity, he managed to prove the inequalities. Shortly after, Demailly gave a holomorphic version of these inequalities, and this holomorphic version is asymptotic, meaning that there is a parameter p tending to infinity which does not appear in the classical Morse inequalities. You start from a compact complex manifold with a line bundle on it, and you want an asymptotic estimate of the Morse sums of the dimensions of the Dolbeault cohomology, rather than the de Rham cohomology. More precisely, this means the following: you fix a metric on your line bundle, so you have its Chern curvature; you identify the Chern curvature with a matrix with the help of some auxiliary metric g on the manifold; and then you define these open sets: M(q) is the set of points where the curvature is non-degenerate and this matrix has exactly q negative eigenvalues. The metric g is used to define M(q), but in fact M(q) does not depend on g. The theorem is then the following: on the left-hand side you have the Morse sum, as before, and it is bounded by a geometric term given in terms of the curvature of the bundle. Moreover, if you take the full Euler characteristic, you get not an inequality but an asymptotic equality — just like in the Hirzebruch–Riemann–Roch theorem. These inequalities are very important in complex geometry and have numerous applications; for instance, historically, they were used by Demailly, after Siu's work, to characterize Moishezon manifolds. — I am confused a little bit: with the factor (−1)^q, is the number on the right positive? — Here? Yes, this is positive — let me explain.
Okay, so everyone is convinced? — No, but it's okay. — So what is M(q)? M(q) is the set where this matrix has exactly q negative eigenvalues, so if you multiply by (−1)^q you get something positive in the end. Okay. The proof of Demailly is based on the observation that the formulas for the Kodaira Laplacian and the Witten Laplacian are quite similar; in fact this is the key observation that allowed Demailly to prove these inequalities. Maybe I can draw a parallel between holomorphic and non-holomorphic Morse inequalities. Here you have Morse, here you have holomorphic Morse. Here the de Rham cohomology; here the Dolbeault cohomology. Here a Morse function; here a metric on your line bundle. Here the Hessian of the function, which gives the Morse index of the critical points; here the curvature of the bundle. "Index j" means that the Hessian has j negative eigenvalues, so the critical points of index j correspond here to the set M(j), where the curvature has j negative eigenvalues. On one side you count the critical points; on the other side M(j) is an open set, so you cannot count it, but you have a sort of volume of it — what I call the volume is this integral, which, as noted, is a positive quantity. So the right-hand side in the Morse case is the sum over q of (−1)^{j−q} m_q, and here you can rewrite the right-hand side as the sum over q of (−1)^{j−q} times the "volume" of M(q). So there is a close parallel between the two sets of inequalities, except that the holomorphic ones are asymptotic. Now I will talk about another proof of Demailly's inequalities, given by Bismut. The proof is based on the following observation: the left-hand side you want to control is in fact bounded by an alternating sum of traces of heat operators associated with the Kodaira Laplacian, and this is true for every positive time u, with equality for the full alternating sum. So to control the left-hand side asymptotically, what is left to do is to compute the asymptotics of this heat trace: you transfer the difficulty to the heat kernel, and using heat kernel methods you conclude the proof. In my talk I will use this approach — the Bismut one. Now, for a moment, suppose the line bundle is positive; positive means that ω, defined by the prequantization formula, is a metric. Then the holomorphic Morse inequalities degenerate to the single statement that the dimension of H^0(M, L^p) is asymptotic to p^n times the volume of M. This is just a consequence of the classical Kodaira theorem I mentioned: for p large enough only one cohomology group survives, namely H^0, and all the other groups vanish. So in that case these inequalities are not very interesting, since they follow from classical theorems. To go further, one can study a localization of this dimension of H^0, which is given by the Bergman kernel — the Schwartz kernel of the orthogonal projection onto the holomorphic sections — and one can give a local asymptotic expansion for it, a local version of that statement. I will not say much about it, but I will mention it later.
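For reference, here is the usual statement of Demailly's strong holomorphic Morse inequalities in the form I believe is being described (my reconstruction from the standard literature, not the speaker's slide; r = rank E, n = dim_C M, and M(≤ j) is the union of the M(q) for q ≤ j):

```latex
\sum_{q=0}^{j} (-1)^{j-q}\, \dim H^{q}\big(M, L^{p}\otimes E\big)
 \;\le\; r\,\frac{p^{n}}{n!} \int_{M(\le j)} (-1)^{j}
 \Big(\tfrac{i}{2\pi} R^{L}\Big)^{n} \;+\; o(p^{n}), \qquad p \to \infty,
```

with asymptotic equality for j = n, which recovers the Riemann–Roch asymptotics.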
Now suppose we have a group acting on everything and preserving everything. In the case where L is positive, Ma and Zhang have studied the G-invariant Bergman kernel: the same object as the Bergman kernel, but now you project onto the G-invariant part of the holomorphic sections, the G-invariant holomorphic sections of your line bundle. To study this G-invariant Bergman kernel they used the Kähler (symplectic) reduction of M. If you want, the picture is the following: either there is no group or there is a group G, and either L is positive or L is arbitrary. With no group and L positive you study the Bergman kernel; with a group and L positive, the G-invariant Bergman kernel; with no group and L arbitrary, the holomorphic Morse inequalities; and what I am trying to fill in is the case with a group and L arbitrary. Just as Ma and Zhang used the reduction of M, I will use some kind of reduction — still to be defined — to get there. This is what I mean: we want to obtain a version of the Morse inequalities for these G-invariant spaces. And, to link this with the result of Vergne and Paradan: this is the multiplicity of the trivial representation in the quantization of (M, L^p), and in Michèle Vergne's talk an asymptotic estimate was given for its dimension. That is in fact an asymptotic estimate of the Euler characteristic, so it corresponds to the q = n case of my result. Okay, so now I will state the result. First we set everything up. We have a complex manifold, and two bundles on it — a line bundle L and another bundle E — both Hermitian and holomorphic. We define ω as the curvature of the line bundle. So here ω is not a Kähler form; it may be arbitrary, it can be degenerate, as in Michèle Vergne's talk. We define ω this way — it is not the other way around: when you have a Kähler form you get a prequantum bundle, whereas here you start with the bundle and deduce something which is not necessarily a Kähler form, just a form. Now take a connected compact Lie group, with its Lie algebra, acting holomorphically on M, in such a way that the action lifts to the bundles and preserves the holomorphic and Hermitian structures. Then G acts on everything, in particular on the Dolbeault cohomology. As I said, to study its G-invariant part we have to define some kind of reduction. In the Kähler case this is well known and has all sorts of good properties; but here, since ω is not Kähler, one has to do essentially the same thing and check that all the properties remain valid. The first time I think this was done was by Karshon and Tolman, for torus actions on presymplectic manifolds — so not in a holomorphic context, but it is more or less the same thing that I will do. First you define the moment map, by the Kostant formula: it is the difference between the Chern connection of L and the Lie derivative. We still call it a moment map because the usual equation — the differential of ⟨μ, K⟩ equals the contraction of ω by the vector field generated by K — still holds, even though ω is not a symplectic form; we will not worry about that and still call μ a moment map. What may be degenerate — ω? Yes, it may be. So you can still look at the preimage of zero under this map; it is some subset, and the standard picture I draw is the following: here is the manifold.
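In formulas — up to the sign and 2π normalization conventions chosen in the talk, which I am not certain of — the Kostant-type moment map and the relation just mentioned read:

```latex
% Kostant formula (X in the Lie algebra, X^M the induced vector field on M):
2\pi i\,\langle \mu, X\rangle \;=\; \nabla^{L}_{X^{M}} \;-\; L_{X}
\quad\text{acting on sections of } L,
\qquad
d\langle \mu, X\rangle \;=\; \iota_{X^{M}}\,\omega .
```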
Imagine that your group acts by translation; the moment map goes to the dual of the Lie algebra, and here you have the subset μ^{-1}(0). Now, for everything to work, I have to make two hypotheses. The first one is that zero is a regular value of μ — a standard hypothesis, I guess. And you have to add a second hypothesis, namely that this subset, the preimage of zero, is admissible. The terminology comes from a paper of Cannas da Silva, Karshon and Tolman, who defined it for circle actions. What does admissible mean? It means that whenever you take an induced (Killing) vector field and apply the complex structure to it, it points out of your set: the induced vector fields are tangent to μ^{-1}(0), because that set is invariant, and when you apply the complex structure you have to leave it. The first hypothesis gives that μ^{-1}(0) is a submanifold on which the group acts locally freely, so that you can define the quotient — we still call it the reduction — and it is an orbifold in general. I draw it with the projection here, and here you have the quotient. Now the proposition, which is the extension of the previous paper to arbitrary group actions, is that all the data you have upstairs descends downstairs, and the reduction again has the same properties as M. First, the complex structure descends to a complex structure making the reduction a complex manifold — the hypothesis H2 is exactly what makes this possible. Then the bundles induced on the reduction are holomorphic for this complex structure, and you still have the prequantization-type formula linking ω_G and the curvature of the bundle L_G; this is not very difficult, but it is not immediate either — you have to do a little computation. And, last point, the projection induces an isomorphism between the kernel of the form ω_G downstairs and the kernel of the form ω upstairs. This means that the degeneracy of ω and of ω_G is the same: all the degeneracy goes downstairs. So now I can state the main result. I add here the assumption that G acts effectively on M. This assumption is not necessary for the result, but the statement is more complicated without it, so I make it. And you get exactly what I announced before: on the left-hand side, the Morse sum of the dimensions of the G-invariant part of the cohomology, and on the right-hand side a geometric term as in Demailly's inequalities, but now defined downstairs on the reduction. There are two differences with Demailly's inequalities. First, the power of p is not the same: before you had p^n, and now you have p^{n−d}; this is not very surprising, because n − d is the complex dimension of the reduction, so it makes sense. And then you also have this r — an integer which shifts the index q: here you sum up to q, and there you have a shift — I will explain in a moment what this number r is. You also get a weak form, which controls every cohomology group separately. Here, just a comment: in the Vergne–Paradan work there is a statement only for q = n, but it is an equality, so in that sense it is more powerful; here you get a control for every q, but the control is looser because it is just an inequality.
You do this for the line bundle, but not for E? — Yes, that's right; in fact E is not really an issue in this proof. Okay, so what is r? You have this bilinear form; it is not a metric anymore, because ω is not positive. But what you can show is that its restriction to the orbits is non-degenerate — in fact, assuming H1, this is equivalent to the hypothesis H2; it essentially is H2. Since it is non-degenerate, its signature is constant, and r is by definition the negative part of the signature of this bilinear form. Some remarks. First of all, if you take G to be trivial, then you recover Demailly's inequalities: μ is identically zero, μ^{-1}(0) is all of M, d = 0 and r = 0, so everything matches. Now a comment on the shift by r. It is not really surprising — this is not a proof, just a philosophy — because there is a result stating that if the curvature is non-degenerate on the manifold M and has everywhere j negative eigenvalues, then you have something which looks like a Kodaira vanishing theorem, saying that the cohomology concentrates in degree j. So, more or less, the j negative eigenvalues explain the degree j of the cohomology; and here, because of the definition of r, when you take the quotient you lose r negative eigenvalues in the fiber direction, along the orbits. So it is understandable that you get this shift by r when you go downstairs. As I said above, the effectiveness hypothesis is not necessary, but it simplifies things, and in the same spirit we will assume that G acts freely — not just locally freely — on μ^{-1}(0), so that the reduction is a genuine manifold and not just an orbifold; but this is not necessary to have a statement. Now, if you go back to the case where L is positive, then once again our theorem is just a simple consequence of the Kodaira and Riemann–Roch theorems, using the theorem which tells us that the G-invariant part of the cohomology upstairs is the cohomology downstairs on the reduction. Okay, I have about ten minutes left, so I will go to the last part and sketch the main steps of the proof. We use the heat kernel approach, so I have to fix a metric; the metric enables us to define the Kodaira Laplacian as I did at the beginning, and then we define P^G to be the orthogonal projection onto the G-invariant forms. Then you have a sort of Hodge theorem for invariant forms, which states that the kernel of the projected Laplacian is isomorphic to the G-invariant part of the cohomology — the invariant version of the Hodge theorem. So the starting point of the proof is the analogue of the Bismut observation: the Morse sum you want to control is in fact bounded by an alternating sum of traces of the projected heat operator, and again with equality for the full alternating sum. Thanks to this observation, we can simply study the asymptotics of the heat kernel and deduce the result from it.
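In symbols — this is my own rendering of the observation just described, with P^G the projection onto invariant forms, □_p the Kodaira Laplacian on (0,q)-forms with values in L^p ⊗ E, and the u/p time rescaling an assumption on my part (it is the standard Bismut–Vasserot normalization) — the starting inequality is, for every u > 0:

```latex
\sum_{q=0}^{j} (-1)^{j-q}\, \dim H^{q}\big(M, L^{p}\otimes E\big)^{G}
 \;\le\;
\sum_{q=0}^{j} (-1)^{j-q}\,
 \operatorname{Tr}\Big[\, P^{G}\, e^{-\frac{u}{p}\,\square_{p}}
 \big|_{\Omega^{0,q}(M,\,L^{p}\otimes E)} \Big],
```

with equality when j = n.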
So let me denote the Schwartz kernel with this notation — maybe everyone knows what a Schwartz kernel is, but I will still write it down: when you have an operator K acting on a section s and you evaluate at x, you get the integral of K(x, x′) s(x′) dx′. The interesting feature is that when you take the trace of such an operator, you integrate the trace of the kernel along the diagonal; this is useful because on one side you have a trace on an infinite-dimensional space, while on the other side you have a pointwise trace on a finite-dimensional space, which is simpler. There are two main steps in the proof. First, you show that this heat kernel is very small outside μ^{-1}(0); then you study the precise asymptotics near μ^{-1}(0). So when we integrate, we break the integral into two pieces: the part away from μ^{-1}(0), which will disappear, and the part near μ^{-1}(0), which will give the term on the reduction. Just to note: there are exactly analogous results for the invariant Bergman kernel, obtained by Ma and Zhang, so there really is a parallel between the two situations. First step: the kernel is small outside μ^{-1}(0). You can express it like this: take a G-invariant open neighborhood of this set; then, for two points outside this neighborhood, the projected heat kernel decays faster than any power of p. That is the first step, and I will skip its proof. The second step is to study the behavior of the heat kernel near μ^{-1}(0). For this I introduce normal coordinates near this set, and we have the following statement. It looks rather ugly, I think, but in fact it is not as complicated as it may look, so let me explain. Here I do not write an equality sign because there is a constant missing, and some uninteresting metric correction terms are missing too — that is not important, but the formula is almost complete. So you have this asymptotics. First, the blue term: this is the usual term coming out of the known asymptotics of the heat kernel, the one used in Bismut's proof — but of course it is defined not on the reduction but on the whole manifold. Then the power of p: in the usual theorem you have p^n, and here you have p^{n − d/2}, so it is not the same as before. Then there is a third term, which is really new in this situation, because it is a heat kernel in the directions normal to μ^{-1}(0): if no group is acting there is no normal direction, so this term does not exist in Bismut's proof, but here it appears, and it is a harmonic oscillator — it is explicit and can be computed explicitly, but the formula would be three or four lines long, so I leave it written like this. And finally you have the purple term, which tells you that away from μ^{-1}(0) you have fast decay as p tends to infinity, and also, for p fixed, fast decay away from μ^{-1}(0) — a two-fold fast decay, as p tends to infinity and as z^⊥ tends to infinity. Okay, I will skip the proof, but let me just say a quick word. — How do you prove it? — The first step is to trivialize all the geometric data: you replace your manifold by G times a vector space, the quotient is also replaced by a vector space, and so on. Everything is trivialized from the geometric point of view, and then you prove that your heat kernel is really close to the heat kernel of the trivialized situation.
Okay, so to prove our theorem we can study only this trivialized situation, which is much simpler. Then the operator upstairs descends to a Laplacian downstairs; you rescale it and prove the convergence of the rescaled operator; after that you prove that this convergence implies the convergence of the associated heat kernels; and then you conclude by going back to the original kernel that you care about. So now we can conclude the proof of the inequalities. What we have proven so far is the following: for every positive u, the Morse sum of the dimensions of the G-invariant parts of the cohomology groups is controlled, in the limit as p tends to infinity, by L_u, which is defined to be the limit — a genuine limit, in fact — of the alternating sum of traces of the heat kernel. To see that this quantity has a limit as p tends to infinity, we compute it for each q. First, this trace equals the integral over M of the pointwise trace of the Schwartz kernel on the diagonal. Then we can cut off what sits away from μ^{-1}(0), by the first theorem, and near μ^{-1}(0), in this neighborhood, we can interchange the integral and the limit and obtain this integral. Now, since you have fast decay, you can replace the condition on z^⊥ by integrating over all z^⊥, and the point of this last step is that you can then compute the integral over z^⊥ explicitly. You compute this integral explicitly, and you show that this term tends to that one. So indeed the limit exists for every u; then, letting u go to infinity, you get that the limit of p^{-(n−d)} times the Morse sum is at most the limit, as u goes to infinity, of L_u. And now you see that this limit is an integral over x in M of this term times an indicator function, so it is exactly the alternating sum of the integrals over M_G(q) of the curvature form to the power n − d, divided by (n − d) factorial — and that is exactly the statement, so this is the end of my talk. Thank you very much. Questions? — Yes: to choose your ω you made some arbitrary choice; is there a best choice? — That is what we talked about a minute ago. I don't know, but I think it is a good question. For the classical Morse inequalities, the dimension of the j-th de Rham cohomology group is at most the number of critical points of index j, for any Morse function, so you can optimize over the function — and it may still be strictly smaller. In the complex setting you can ask the same question: is there a better metric, so that you have something more to say? I don't know. — Even for very special manifolds like toric manifolds? — I think it is interesting, but I don't think the situation is the same here; I don't think you can optimize over the metric. — Another question: you have a complex structure on the manifold, and you have an assumption that says that this structure goes down to the reduction. — Yes; the H2 assumption tells you that this is zero, but I like to see it as saying that this bilinear form is non-degenerate along the orbits: the restriction cannot vanish, it has to be non-degenerate. — It seems to be a strong assumption, because if you move μ a bit it might change; I mean, the metric on your line bundle…
…it's not an open condition, I mean: if you change the metric on the line bundle, it changes μ. — Yes, I agree, but it is open as a condition. — Do you have an application like Demailly's, meaning that if some integral is bigger than zero, then the field of G-invariant meromorphic functions is maximal? — Yes, this is how holomorphic Morse inequalities are most often used: to obtain something like that, namely that the dimension of H^0 is bigger than something large, so that it is big even though the bundle is not positive. So the application should be of this kind, but I do not have a specific application yet. — Maybe one should look for G-invariant analogues: if the integral of the invariant part is positive, as in the main estimate — say some integral is positive… — I don't have an example where it is positive, but yes, if the integral is positive, then you do get such an estimate. Thank you. Thank you.
Consider an action of a connected compact Lie group on a compact complex manifold M, and two equivariant vector bundles L and E on M, with L of rank 1. The purpose of this talk is to establish holomorphic Morse inequalities, analogous to Demailly's, for the invariant part of the Dolbeault cohomology of tensor powers of L twisted by E. To do so, we define a moment map μ by the Kostant formula and then the reduction of M under a natural hypothesis on μ^{-1}(0). Our inequalities are given in terms of the curvature of the bundle induced by L on this reduction, in the spirit of "quantization commutes with reduction".
10.5446/59257 (DOI)
Thanks also for arranging that I could actually speak this morning instead of yesterday afternoon, and thanks to the speaker who swapped with me. I am going to talk about K-types of tempered representations, which is joint work with Yanli Song and Shilin Yu, about relations between representation theory and the index theory of Dirac operators, or geometric quantization. It is about these three preprints. It is quite a bit of material to cover in 40 or 45 minutes, but it is a coherent story which would be nice to try to present as a whole. The talk is in three parts, corresponding to the three preprints: first a geometric realization of certain representations, then a multiplicity formula for decompositions of representations, and then the Blattner formula. So, first, the geometric realization. I will be talking about a group G which is connected, linear, real reductive — for example SL(n,R), or the matrices preserving an inner product of signature (p,q), such as SO(2,1), so the groups SO(p,q) — and K will denote a maximal compact subgroup. I will start with a definition that is technically accurate but not so intuitive. Suppose we have a unitary irreducible representation π of the group G on a Hilbert space H. First of all, a vector in H is called K-finite if, when you apply the group K to the vector and take the span of what you get, that span is finite-dimensional. Such vectors are dense in the space. A K-finite matrix coefficient is a function of this form: x and y are K-finite vectors, and the function g ↦ ⟨x, π(g) y⟩ is the matrix coefficient for those vectors. The representation π is called tempered if this function on G is not quite L², but L^{2+ε}, for every positive ε. It is a bit of a technical definition, but what really motivates it is this: if you decompose the space of L² functions on G into representations of G, then these are the representations you need. That is the main reason why they are relevant. Another reason is that in the Langlands classification of all admissible representations — a very large class of representations, including the unitary irreducible ones — you also need tempered representations, so they play a role in several parts of representation theory. Just an example, for SL(2,R) — I think Yanli showed exactly the same picture yesterday. There is a discrete part, the discrete series, labeled by positive integers and a sign plus or minus; two half-lines, which are the principal series; and some funny limits of discrete series over here. These are all tempered. The trivial representation is not tempered, because its matrix coefficient is constant and therefore not integrable if the group is non-compact; and there is an extra set of unitary representations, the complementary series, which are not tempered either. The discrete series form the discrete part of this: their matrix coefficients are actually in L²(G), not just in L^{2+ε}. Equivalently, in the decomposition of L²(G) — a direct integral which can have a discrete part and a continuous part — the discrete series are the direct summands: the representations with positive Plancherel measure occur as direct summands, and those are the discrete series.
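For orientation, the decomposition being referred to is the Plancherel decomposition, stated here schematically and in my own notation:

```latex
L^{2}(G) \;\cong\; \int_{\hat G_{\mathrm{temp}}}^{\oplus} \pi \,\hat\otimes\, \pi^{*}\; d\mu(\pi),
```

where μ is the Plancherel measure; the discrete series are exactly the π with μ({π}) > 0, and these occur as genuine direct summands.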
Not every group has discrete series; for example, among the groups SL(n,R) only SL(2,R) has them. Right. Okay, so let's fix a tempered representation π of the group G. The idea is that if we restrict π to a maximal compact subgroup, that contains a lot of relevant information about π, and it's simpler than looking at the whole representation π. A somewhat vague analogy: if you have an irreducible representation of a compact group K and you restrict it to a maximal torus, that determines the representation. That's not quite true in our case — you can have different representations of G with the same restriction to K. A precise statement is Vogan's theorem, which says that a tempered representation with real infinitesimal character is determined by its lowest K-type. Real infinitesimal character in this picture means lying on a horizontal line here. But in any case, the idea is that if you restrict π to K you lose information, but you still retain quite a lot of useful information. The goal of this talk is first of all to realize this restriction of π to K geometrically, using quantization or index theory, and then to use this to find the decomposition of the restriction into irreducible representations of K. The word K-type in this talk refers to the δ in K-hat for which m_δ is non-zero. I'll use index theory of Dirac operators, i.e. geometric quantization. So just a bit of setup about Dirac operators. I'll look at a complete Riemannian manifold M, acted on by a compact group K. E will be a Hermitian K-equivariant vector bundle over M, Z2-graded, and we'll assume there is a Clifford action, denoted by c, from the tangent bundle to the odd endomorphisms of E, satisfying the Clifford relation: applying c(v) twice you get minus the norm of v squared. We call E a Clifford module if these data are given. This is actually the only example I'll use: if we have a K-invariant almost complex structure on the manifold and a line bundle, we can take the exterior powers of the (0,1)-part of the cotangent bundle, tensored with this line bundle, and the Clifford action is given by exterior multiplication and contraction with the (0,1)-part of a vector. It's actually the spinor bundle of a spin-c structure. To define a Dirac operator we need a connection nabla on the bundle E, Hermitian for the metric, with a compatibility condition with the Clifford multiplication: for two vector fields v and w, the commutator of the covariant derivative along v and Clifford multiplication by w, on sections of E, is Clifford multiplication by the Levi-Civita derivative of w in the direction v. With all of this one defines the Dirac operator as follows: take a section of E, apply nabla to get a one-form with values in E, identify T*M with TM using the Riemannian metric, then apply the Clifford action, and you get another section of E. If M is compact this is an elliptic operator, so it's Fredholm, and we get the usual equivariant index — sometimes denoted simply as the index of E, with data like the Clifford action understood — which is just the difference of the classes of the kernels of D on the positive and negative parts, in the representation ring of K. By the way, if these kernels are finite-dimensional this is all very nice, but I want to realize infinite-dimensional representations, so I actually have to go beyond this, because our representations are not finite-dimensional.
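The relations just described can be written, in a standard sketch (sign and normalization conventions may differ from the speaker's slides), as
\[
c(v)^{2} = -\,|v|^{2}\,\mathrm{Id}_{E},\qquad
\big[\nabla^{E}_{v},\,c(w)\big] = c\big(\nabla^{TM}_{v}w\big),\qquad
D \;=\; \sum_{j} c(e_{j})\,\nabla^{E}_{e_{j}},
\]
for a local orthonormal frame (e_j) of TM; in the almost complex example one takes E = Λ^{0,•}T*M ⊗ L, with c(v) given, up to a factor of √2, by wedging with the (0,1)-part of the dual covector minus contraction with the (0,1)-part of v.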
So I want to look at non-compact M, and I'll use an index theory developed by Maxim Braverman. In Michèle's talk it was already mentioned that there are lots of different ways of defining quantization of non-compact spaces, like deforming symbols and applying the transversally elliptic index, as in the work of Paul-Émile Paradan and Michèle Vergne, or Xiaonan Ma and Weiping Zhang's method; this is another one. Suppose there is a map ψ from M to k, the Lie algebra of K — the motivating example originally was ψ a moment map for a Hamiltonian action, where we identify k with k* using a K-invariant inner product. Given a map ψ like this, it induces a vector field: at every point of the manifold, ψ gives a Lie algebra element, and that Lie algebra element defines a tangent vector by the infinitesimal action (up to a sign convention). Braverman calls ψ taming if the set of zeros of this vector field is compact; we need the growth behaviour of v^ψ, in a sense, to tame the Dirac operator and get a well-defined index — v^ψ has to go to infinity in a suitable sense. Given such a ψ, you form the deformed Dirac operator; this is also called a Witten deformation. This kind of deformation was used by Tian and Weiping Zhang to prove quantization commutes with reduction in the compact setting. It turns out — Braverman showed — that if you look at the kernel of this operator, where you rescale ψ by some appropriate function f, in L2 sections, the kernel will still be infinite-dimensional in general, but restricted to every isotypical component it is finite-dimensional. So for every irreducible δ in K-hat, an irreducible representation of K, it has finite multiplicity in this space. We have these multiplicities m_δ plus and m_δ minus, and their differences behave like an index: the difference is independent of, for example, the choice of nabla and f, and this follows from a cobordism invariance property that Braverman proved for this index. So we can define the index of the pair (E, ψ) like this: we define the δ-component to be this multiplicity difference — in other words, you take the δ-part of the kernel of the deformed Dirac operator, with this rescaling function, on even L2 sections of E, minus the odd part. This doesn't land in the representation ring of K anymore, because infinitely many of these numbers can be non-zero; it lands in the completed representation ring, where infinitely many multiplicities can be non-zero as long as each one is finite. The motivating example is a result by Paradan from 2003. Let T in K be a maximal torus, and consider the discrete series. The discrete series are parameterized by Harish-Chandra parameters, which form a discrete subset of i times the dual of the Lie algebra of the torus. We use the manifold G/T, which is equivariantly diffeomorphic to the coadjoint orbit through λ, and we take ψ to be the projection of that orbit onto k*. For a natural complex structure on G/T defined by a choice of positive roots, and a line bundle defined by the weight λ, Paradan showed that the restriction to K of the discrete series with this parameter equals this index. As I said, Paradan used deformed symbols and the transversally elliptic index, which gives the same result. You can view this as a spin-c quantization of this coadjoint orbit.
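A sketch of the deformation and of the index it defines, in notation close to Braverman's (the rescaling function f, the sign conventions, and the symbols D_{fψ}, m_δ^± are my shorthand for what the slides show):
\[
D_{f\psi} \;=\; D \;+\; \sqrt{-1}\,c\big(f\,v^{\psi}\big),\qquad
v^{\psi}_{m} \;=\; \tfrac{d}{dt}\Big|_{t=0}\exp\!\big(t\,\psi(m)\big)\cdot m,
\]
\[
\operatorname{index}(E,\psi)\;=\;\sum_{\delta\in\widehat K}\big(m_\delta^{+}-m_\delta^{-}\big)\,\delta\ \in\ \widehat{R}(K),
\qquad
m_\delta^{\pm}\;=\;\big[\ker_{L^{2}}\! \big(D_{f\psi}\big|_{E^{\pm}}\big) : \delta\big],
\]
where each multiplicity m_δ^± is finite by Braverman's theorem, even though the full L² kernel may be infinite-dimensional.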
Why spin-c? Of course the coadjoint orbit has a symplectic structure, but this complex structure J is not compatible with the symplectic structure, and the line bundle L is not a prequantum line bundle, so this falls outside the symplectic setting. There was another realization of the representation π, by Schmid, in L2 Dolbeault cohomology on G/T twisted by the same line bundle, with the same complex structure. The good thing about this realization of π restricted to K is that you can use it to prove a multiplicity formula for the K-types of π, which was Paradan's main motivation: to use quantization commutes with reduction to express multiplicities as quantizations of reduced spaces. Another advantage, as I'll explain now, is that it generalizes to more general representations. This result from last year says that not just for the discrete series but for any tempered representation we can do a similar thing. Here we replace the torus T by another Cartan subgroup H of G. Again there is some sign here, which is explicit. Again we use an almost complex structure on this quotient — the one we use now is only K-invariant, not G-invariant — and this is again a line bundle associated to π by Paradan's construction. If π satisfies a regularity assumption — basically its infinitesimal character is regular — then this map ψ is again the natural map identifying G/H with a coadjoint orbit and projecting it onto k*, and this again has a natural interpretation as a quantization procedure for a coadjoint orbit. If π doesn't satisfy the regularity assumption, this ψ is defined slightly differently. Actually, our collaborator Yu just pointed out that this applies to slightly more general representations than just tempered ones. All right, so this is the realization of π restricted to K that I wanted to mention. Now, what can we do with this? A multiplicity formula. Again, given a tempered representation of G we want to determine the multiplicities m_δ in this decomposition; if we know those integers we know π restricted to K, and I claimed that says a lot about π. For the discrete series there is a formula, Blattner's formula, that I will mention towards the end of the talk; it was proved by Hecht and Schmid in '75. For general tempered representations there is not so much a formula as an algorithm, in the Atlas software developed by many people including David Vogan. So these are combinatorial ways to compute these things, but they don't really say very much about the general behaviour of these multiplicities; if you want to prove something general about them it might be tricky. For example, you might want to know when these multiplicities are zero, when they are one, and that can be tricky to see from this formula or this algorithm. So now I'm going to restrict the setting to the spin-c case: I assume that the Clifford module E that we used before is the spinor bundle of some spin-c structure, and I'll let L^det be its determinant line bundle. In the example I gave before — an almost complex structure and a line bundle — we have the spinor bundle of a spin-c structure, and that is actually the only example I'll need. The determinant line bundle is the top complex exterior power of TM tensored with the square of the line bundle L. I think Michèle already mentioned this in her talk on Monday.
Given a line bundle and a connection we can define a general version of a moment map, basically by imposing Kostant's formula. The moment map associated with a connection on the determinant line bundle is a map from M into the dual of k, determined by this formula. If the curvature form of the connection is a symplectic form, this is just the usual symplectic moment map, but I'll use this much more general thing. The difference between the Lie derivative and the covariant derivative is a vector bundle endomorphism of the line bundle, and therefore a scalar, which is this pairing. Just like in the symplectic case we can form reduced spaces: take the moment map, take the inverse image of some point, and divide by its stabilizer. Now the question is: can you quantize this? This was worked out by Paradan and Vergne. If ξ is a regular value of μ, this will be an orbifold — through some construction one can actually reduce to the case of a torus action — and in any case M_ξ is a compact orbifold and inherits a spin-c structure from the big manifold. You can take the index of the spin-c Dirac operator on that space, in the orbifold sense, and get an integer. If ξ is a singular value this can still be done — a very non-trivial construction, but worked out by Paradan and Vergne. So in any case, for any ξ we can define the index of M_ξ, the quantization of M_ξ. They then showed, in the compact case — I'm stating a simplified version where K acts with abelian stabilizers — that the equivariant index of the spin-c Dirac operator on E, which is the left-hand side, decomposes into irreducibles like this: the multiplicities are the quantizations, if you will, of the reduced spaces corresponding to these values. Here μ_δ is the highest weight of δ, and this is the irreducible character of δ. This really pushes the quantization-commutes-with-reduction principle past the symplectic case into the spin-c case, which is more general. The actual statement doesn't assume abelian stabilizers, but then it becomes a bit more complicated: you actually get quantizations here for different ρ-shifts. As most people will be aware, this has a long history: the Guillemin–Sternberg result in '82 for Kähler manifolds, the symplectic case by Meinrenken, the singular case by Meinrenken and Sjamaar, and also proofs by Tian–Zhang and by Paradan in the symplectic case. But we're working with non-compact manifolds, so we'd like a version of this for non-compact M. Assume M is possibly non-compact but the spin-c moment map μ is proper, and also taming — you won't actually need taming, but this is how I'll state it. Then assume K acts with abelian stabilizers, and you have a similar decomposition: the index, which is now possibly infinite-dimensional, decomposes into irreducibles according to this formula, where now infinitely many of the multiplicities can be non-zero. Again, the actual result doesn't assume abelian stabilizers and looks a bit more involved, and we don't need to assume μ is taming either. For symplectic manifolds, the symplectic analogue of this was conjectured by Michèle Vergne in her ICM talk in 2006, proved by Ma and Zhang, and Paradan also gave a proof. I should also say that in the proof of this we basically reduce to the compact case, where we actually use quite a lot of Paradan and Vergne's proof.
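The Kostant-type formula for the spin-c moment map described here can be written schematically as (the sign and the factor 2πi depend on conventions, so this is only indicative):
\[
2\pi i\,\langle \mu, X\rangle \;=\; \mathcal{L}_{X} \;-\; \nabla^{L^{\det}}_{X^{M}}\qquad\text{on sections of } L^{\det},\quad X\in\mathfrak{k},
\]
where X^M is the vector field induced by X; when the curvature of the connection is (a multiple of) a symplectic form ω, this reduces to the usual moment map equation for (M, ω).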
Because, as I mentioned earlier, in Paradan's result on the discrete series the quantization was a spin-c quantization — the line bundle wasn't a prequantum line bundle for the symplectic form, and the almost complex structure wasn't compatible with the symplectic structure — we really were in the spin-c setting, so we wanted a spin-c version of this theorem. Putting these things together: we realized π restricted to K as the index of something on the space G/H, and this index decomposes as in the theorem on the previous slide. Combining these, we get that π restricted to K decomposes into irreducibles, with multiplicities given by quantizations of reduced spaces. Basically, combining those two theorems you just have to show that the ψ here and the μ here can be taken to be the same thing. This gives an expression for the m_δ in this decomposition. Special cases were obtained by others before us: this relation was proved by Paradan for the discrete series in 2003 — that was his main motivation for realizing the discrete series restricted to K in this way — and for tempered representations with the regularity assumption, Michèle Vergne and Michel Duflo used a very different approach to prove this, based on Kirillov's character formula. On Paradan's work we really build: we take it and see how it generalizes to tempered representations. Duflo and Vergne's is a very different approach. All right, so what does this tell us? We have that π restricted to K decomposes like this, and the m_δ's are given by this; in general this can be tricky to compute. First of all, if you're like us, it's nice in itself to link quantization to representation theory. But a representation theorist will ask: what can you actually compute once you have this relation? In the simplest cases, if this set is empty — if this parameter is not in the image of μ — then the multiplicity will be zero. So this gives some control over the support of the multiplicity function. Also, if the reduced space is a point, then the multiplicity will be either zero or one. Excuse me? Exactly, that's the next sentence on the slide. Yes, the index can be zero because it's an orbifold index: it's the invariant part of a one-dimensional representation of a finite group, so it can be either zero or one. You'll see this in the next example. An example: take the group SU(p,1) or SO_0(p,1), the connected component. You can compute, under a regularity assumption on π, that the reduced spaces are actually points: you compute dimensions and find dimension zero, and because the map μ is actually a symplectic moment map its fibers are connected, so the reduced spaces are points in this case. So for these groups, tempered representations with regular infinitesimal character are multiplicity free when restricted to the maximal compact subgroup. This was actually known from work in the '80s for SU(p,1) and SO_0(p,1). Okay, I'm doing okay for time. The realization of π restricted to K that I mentioned earlier actually uses Blattner's formula, and to make this package nice and self-contained it would be nice to give a proof of Blattner's formula itself in terms of quantization and index theory. This is work with Yanli Song and Nigel Higson.
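Schematically, and suppressing the ρ-shifts and orbifold subtleties discussed above (the label ξ_δ is my placeholder for the shifted parameter attached to δ on the slides), the combined statement reads:
\[
\pi\big|_K \;\cong\; \widehat{\bigoplus_{\delta\in\widehat K}}\, m_\delta\,\delta,
\qquad
m_\delta \;=\; \operatorname{index}\big(M_{\xi_\delta}\big),\qquad
M_{\xi_\delta} \;=\; \mu^{-1}(\xi_\delta)\big/K_{\xi_\delta},
\]
where the index is the spin-c orbifold index of the reduced space; in particular m_δ = 0 whenever ξ_δ is not in the image of μ, and m_δ ∈ {0,1} whenever the reduced space is a point.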
So now suppose G has discrete series representations; equivalently, the rank of G equals the rank of K. I'll just state the formula first. Suppose T is a maximal torus in K. Again we choose a positive root system R^+; ρ is half the sum of the positive roots, and we split the positive roots into compact roots and non-compact roots. The key ingredient of Blattner's formula is a partition function: a function P on the weight lattice in it*, with values in the non-negative integers. For every element σ in this lattice, P gives the number of ways that σ can be expressed as a linear combination of positive roots α with non-negative integer coefficients. So this is a combinatorial object: if you know the positive root system you can compute it; you can put it into a computer and the computer can compute the values. The formula then says this: if you have a discrete series representation π with Harish-Chandra parameter λ and a representation δ of K with highest weight ν, then the multiplicity of δ in π restricted to K is given by this expression. There is a sum over this Weyl group, there is a sign — the sign of the Weyl group element — and then the partition function applied to some element. — This does not quite determine it: the partition function depends on a choice. For a compact group the formula is fine as stated, but in general you have a sub-chamber of regular elements, and the partition function depends on the sub-chamber you choose, so you need to choose the positive root system accordingly. — Okay, thanks for the correction. All right, thanks for that. So this is computable by a computer, or with a lot of time and effort. One thing: there is a sign here, and the sign means there can be cancellations in this formula. If you want to know when this is zero, for example, terms of opposite sign could cancel, so it can be hard to say when it vanishes. I mentioned before Paul-Émile Paradan's realization of the discrete series like this: it's the index of a Dirac operator on G/T, the quantization of a coadjoint orbit. But the precise statement of the result is that this index satisfies the right-hand side of Blattner's formula, and then by Blattner's formula it equals the left-hand side, which is the restriction of π to K. So the precise statement is: the right-hand side of Blattner's formula is the multiplicity of δ in this index, and because these are equal, it equals the multiplicity of δ in the restriction of π to K. So Blattner's formula is used here. I also mentioned the result by Schmid: the L2 kernel of the Dolbeault–Dirac operator on this bundle equals the representation π, and it's concentrated in one specific cohomological degree, the dimension of G/K over 2. So here we have an index, and here we have a kernel; this one is related to π, and that one is related to the right-hand side of Blattner's formula. Since one thing is related to the left-hand side and the other to the right-hand side, we'd like to relate them to each other.
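For orientation, the standard statement of Blattner's formula (Hecht–Schmid) can be written schematically as follows — in that statement the partition function runs over the positive non-compact roots, and the positive system is chosen compatibly with λ, as the audience comment above emphasizes:
\[
\big[\pi_\lambda\big|_K : \delta_\nu\big] \;=\; \sum_{w\in W_K}\det(w)\;P\big(w(\nu+\rho_c)-\lambda-\rho_n\big),
\]
where ρ_c and ρ_n are the half-sums of the compact and non-compact positive roots, δ_ν is the K-type with highest weight ν, and P(σ) counts the expressions of σ as a non-negative integer combination of positive non-compact roots.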
But because the realization by Schmid is just a kernel, not an index, it's hard to simply use a homotopy of Fredholm operators to prove that the two numbers are equal. So we actually want to show that this operator is Fredholm in a certain sense. Because we're working with infinite-dimensional spaces, we can't just talk about Fredholm operators on the whole space. So we'll say: given a representation of the compact group K, we call a bounded operator on that space K-Fredholm if it is Fredholm on every isotypical component. We break up the space into the parts that transform according to a given irreducible representation of K, and if on each such part the operator is Fredholm — and this is true for every δ — we call the whole operator K-Fredholm. It turns out that the global Dolbeault operator used by Schmid is actually K-Fredholm in this sense, so we can talk about the realization of π restricted to K as an index, not just as a kernel. This is true whenever the Harish-Chandra parameter λ is large enough. It's important that there is a difference between just having finite-dimensional kernel and being Fredholm: we want the operator to be Fredholm so we can actually talk about homotopies of Fredholm operators. We knew already from Schmid's result that this operator has finite-dimensional kernel on every δ-component, because the discrete series representation has that property; we actually want it to be Fredholm. So we have a global operator which turns out to be K-Fredholm for large λ, and then we show it is homotopic to the deformed operator used to define this index. This is a bit less trivial than we thought it would be. For example — this is Schmid's operator, and the operator used to define the index has this deformation term — if you just put a t in front of that term, it is not obvious that the path is continuous, because this is an unbounded operator. So there are some issues involved in proving this homotopy. If the norm of λ is large enough, then the multiplicity of δ in the restriction of π to K is the index of this Fredholm operator — the operator used by Schmid — on the δ-isotypical subspace; by the homotopy, that equals the multiplicity of δ in the index of the deformed Dirac operator, which by Paradan's result equals the right-hand side of Blattner's formula. So we have the equality for large enough λ, and then for general λ there is a notion of coherent continuation representation that allows you to pass from regular enough λ to every λ. This might be related to what Michèle talked about, if you know the behaviour along the line. The last thing I want to show is a quick impression of how to prove that this Dolbeault–Dirac operator is K-Fredholm. It's a trick that was already used by Nigel Higson and a collaborator in an earlier paper, which I refer to quite frequently for other things as well. Suppose you want to prove something about an undeformed operator; you actually use the deformed operator to prove it.
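A schematic form of the K-Fredholm notion as described in the talk (the notation index_K is mine): for a K-equivariant bounded operator T on a unitary K-representation H,
\[
H \;=\; \widehat{\bigoplus_{\delta\in\widehat K}}\,H_{\delta},\qquad
T\ \text{is } K\text{-Fredholm} \ :\Longleftrightarrow\ T\big|_{H_{\delta}}\ \text{is Fredholm for every }\delta\in\widehat K,
\]
\[
\operatorname{index}_{K}(T) \;:=\; \sum_{\delta\in\widehat K}\operatorname{index}\big(T\big|_{H_{\delta}}\big)\,\delta \ \in\ \widehat R(K),
\]
which is exactly the kind of index that can be compared, one isotypical component at a time, through a homotopy of K-Fredholm operators.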
So suppose you have any map φ from G/T into k. We compute the square of the Dirac operator with this deformation term and conclude that the undeformed operator squared is greater than or equal to this: the deformed operator squared, plus some vector bundle endomorphism, minus a constant, plus 2π times the inner product of φ and μ, where μ is the standard moment map on G/T, the coadjoint orbit through λ. And this is true on every δ-isotypical component, outside a compact set depending on δ. This operator here is non-negative, so we can leave it out and get the inequality that the undeformed operator squared is at least this expression outside a compact set depending on δ. Then, by choosing φ in the right way, you can use this to show that this expression is bounded below by some positive ε outside the compact set. It turns out that the φ that makes this work is the following — a large part of the work we did was finding this φ. Writing the Cartan decomposition, with X in p and k in K, φ at that point is Ad(k) composed with the inverse of a hyperbolic cosine in X, applied to λ. And it turns out that if the norm of λ is larger than this constant C, and we take this φ, then the Dolbeault operator squared is bounded below by some positive ε outside a compact set on every δ-isotypical component, and that implies K-Fredholmness. Right, that is what I wanted to say. Thank you very much. I believe you — but I forgot to mention this: there is also the proof of Blattner's formula by Duflo, Heckman and Vergne, using Kirillov's formula. I think that's right, I really believe this. So we use the fact that Paradan showed that the right-hand side of Blattner's formula equals this index. Right, but — Blattner's formula amounts to expanding some function in a Fourier series, and it's quite delicate, because a rational function doesn't really do that: you have to describe in which direction you want to expand. So I am sorry, I was wrong — I think you're right, I'm convinced you're right. So what we proved is that the restriction of π to K, the multiplicity of δ in it, equals this thing. Yes. And then using Paradan's result, we deduce Blattner's formula. So I think Paradan did a better job with Blattner's formula than we did. So how do you prove the homotopy — you just deform? It's a bit unpleasant: if you take these vector fields ψ, they go to infinity and the deformation term is unbounded. So we first have to show that you can localize to a compact set; on that compact set the term is bounded, and then we show that on that compact set it is homotopic to the deformation by a different vector field. So basically two steps: localize to a compact set where this becomes a bounded operator, and on that compact set you have the homotopy, because then you know it is a continuous family. The same idea as yours? I think it could be. It could be.
Let G be a real semisimple Lie group, and K < G a maximal compact subgroup. A tempered representation π of G is an irreducible representation that occurs in the Plancherel decomposition of L2(G). The restriction π|K of π to K contains a substantial amount of information about π. (This is roughly analogous to the fact that an irreducible representation of K is determined by its restriction to a maximal torus.) By realising this restriction as the geometric quantisation of a suitable space, which is a coadjoint orbit under a regularity assumption on π, we can apply a suitable version of the quantisation commutes with reduction principle to obtain geometric expressions for the multiplicities of the irreducible representations of K in π|K (the K-types of π). This was done for the discrete series by Paradan in 2003. In recent joint work with Song and Yu, we extended this to arbitrary tempered representations. The resulting multiplicity formula was obtained in a different way for tempered representations with regular parameters by Duflo and Vergne in 2011. In independent work in progress with Higson and Song, we give a new proof of Blattner's formula for multiplicities of K-types of discrete series representations using geometric quantisation. This formula was first proved by Hecht and Schmid in 1975, and later by Duflo, Heckman and Vergne in 1984.
10.5446/59258 (DOI)
Thank you for the chance to speak here; it's a great conference and I'm quite happy to give this talk. This is the title of my talk: it's mostly going to be about spectral asymptotics — semi-classical spectral asymptotics for Dirac operators. Let me first explain the problem I want to address. We start with an odd-dimensional Riemannian manifold, which is oriented and further equipped with a spin structure. This already gives us a spin Dirac operator, but I want to talk about coupled Dirac operators, so we twist by a line bundle which further carries a Hermitian metric and a unitary connection compatible with the metric. If you further give yourself a real one-form a on your manifold, you have not just this one connection but a family of connections: the base connection a_0 plus i times r a, where r is a real parameter. So you have a family of connections on your line bundle, and each one of those gives you a coupled Dirac operator acting on sections of the spinor bundle tensored with the line bundle. What I'm interested in is the spectral asymptotics for this Dirac operator in the semi-classical limit, as r goes to infinity. By spectral asymptotics I'll restrict myself, for the sake of this talk, to two spectral invariants. I'm mostly interested in this one, the eta invariant. Maybe I don't have to tell you very much about what it is, because it already appeared yesterday and everybody seemed happy with it — nobody complained in that talk, so I'll assume this. Formally speaking, it is the signature of the Dirac operator — the number of positive eigenvalues minus the number of negative eigenvalues — defined using regularization, because there are infinitely many of them. There is always the ambiguity of how you count 0 in the signature, so the related spectral invariant is the dimension of the kernel, the nullity of the Dirac operator. This is what I'll look at: the semi-classical limit of the eta invariant. If you're a semi-classical analyst you don't like r going to infinity; you'd rather state it in terms of 1/r going to 0, because 1/r has a physical significance as Planck's constant, the semi-classical parameter. So that is the proper way of stating the same problem, and this is the problem I will address. It's related to another problem which is very well studied in quantization: there is an analogue in geometric quantization, which is the following. Instead of the real manifold from the last slide, you start with a complex manifold with a Hermitian metric, and you give yourself a Hermitian holomorphic line bundle on this complex manifold. Then you have tensor powers of this line bundle, and for each tensor power a Dolbeault–Dirac operator acting on sections — it's just the Dolbeault differential plus its adjoint. In geometric quantization, what you consider are the spectral invariants of this Dolbeault–Dirac operator as the power p of these tensor powers goes to infinity. The analogue of the eta invariant of this Dirac operator is not so interesting — it's actually 0, the spectrum is symmetric.
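For reference, the eta invariant mentioned here is the standard one (a sketch; the treatment of the kernel and the regularization follow the usual conventions):
\[
\eta_{D}(s) \;=\; \sum_{\lambda\in\operatorname{spec}(D)\setminus\{0\}}\operatorname{sign}(\lambda)\,|\lambda|^{-s},\qquad
\eta(D)\;=\;\eta_{D}(0)\ \ \text{(by meromorphic continuation)},
\]
and the companion invariant is the nullity dim ker D.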
But you can still consider the dimension of the kernel, which by Hodge theory is just the cohomology of this tensor power of the line bundle. And instead of the eta invariant there is still a nice spectral invariant: the analytic torsion, or holomorphic torsion if you like. What this is: there is a determinant of the Laplacian on (0,q)-forms for each q, and the torsion is the alternating product of these determinants — the determinants on odd forms divided by the determinants on even forms. You can consider these spectral invariants as p goes to infinity, and there are a lot of results that are very well known by now. The first one is just for the cohomology: there is an asymptotic result, which was explained in yesterday's talk, so I don't have to tell you much about it. Demailly's result says that the q-th cohomology of the p-th tensor power grows like p to the n, where n is the complex dimension, and the constant is also explicitly given: it is an integral of the curvature. There is a sign (−1)^q, because the set X(q) over which you integrate is the subset of the manifold where the curvature has exactly q negative eigenvalues; the determinant then has the same sign as (−1)^q, so the integrand is positive. Those are Demailly's holomorphic Morse inequalities; they give the asymptotics of the dimensions of the kernels. Now, to give the asymptotics of the torsion you need to make an assumption: that the line bundle is positive — otherwise it's not so easy. Assuming the line bundle is positive, the asymptotics of the analytic torsion was given by Bismut and Vasserot. The result says that the logarithm of the torsion has this asymptotic; it actually has two terms, one of order p^n and one of order p^n log p, but the latter is the leading term, and then there is a remainder. And this is actually an asymptotic equality, not just an inequality. Relatedly, in this case, when the line bundle is positive, you have Kodaira vanishing and Riemann–Roch: you know exactly what the cohomology is — it is concentrated in degree 0 and eventually given by the holomorphic Euler characteristic. There is a further interesting refinement, the Bergman kernel, which also appeared yesterday: the kernel of the projection of smooth sections onto the holomorphic ones. The Bergman kernel also has an expansion, first given by Catlin and Zelditch, starting at p^n, the same order. Another thing related to the Bergman kernel is the Toeplitz quantization operator: f is a function on your manifold, and you compose multiplication by f with the projection onto the holomorphic sections. This gives a quantization procedure — it quantizes your function f — and it is a correct way of quantizing functions, at least on a Kähler manifold. That this is a correct quantization procedure was first shown by Bordemann, Meinrenken and Schlichenmaier, when they showed that Toeplitz quantization has the right semi-classical limit; one has to verify the two properties here.
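Demailly's weak holomorphic Morse inequality referred to above reads, schematically (untwisted case, for a line bundle L with curvature R^L; the talk's normalization of the curvature may differ):
\[
\dim H^{q}\!\big(X, L^{\otimes p}\big) \;\le\; \frac{p^{\,n}}{n!}\int_{X(q)}(-1)^{q}\left(\frac{i}{2\pi}\,R^{L}\right)^{\!n} \;+\; o\!\left(p^{\,n}\right),
\]
where n is the complex dimension and X(q) ⊂ X is the open set on which iR^L has exactly q negative and n−q positive eigenvalues; for a positive line bundle and q = 0 this recovers the expected growth of the space of holomorphic sections.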
In particular, one has to show that the commutator of two Toeplitz quantizations is the Toeplitz quantization of the Poisson bracket, to leading order. So those are some of the results in the holomorphic setting. Any questions about these? OK, I think this is well known. In our real case we don't have a complex structure, so perhaps closer to our context is what's known as the almost Kähler, or symplectic, case. In this case what we have is not a complex manifold but just an almost complex structure on a symplectic manifold, and it's almost Kähler in the sense that if you contract the almost complex structure with the symplectic form you get a metric — in other words, the almost complex structure is compatible with the metric. If you have an almost Kähler manifold like this, and a prequantum line bundle — the curvature of the connection on the line bundle is exactly the symplectic form — then you can again define a renormalized Laplacian: the first term is the Bochner Laplacian on tensor powers, and you renormalize it by subtracting a multiple of the power p determined by the dimension. For this renormalized Laplacian there is a spectral result, first given by Guillemin and Uribe. It says that, asymptotically, the spectrum of this renormalized Laplacian splits into two parts: one part which stays bounded near 0, and a second part which grows linearly. That's the first result. Second, you might ask how many eigenvalues are close to 0, and there is an asymptotic result for this: the number of eigenvalues that stay bounded near 0 also grows like p^n. This was also Guillemin and Uribe. Furthermore, you can ask for the distribution of these eigenvalues, not just their number, and there is a well-known answer: you apply a test function f to the eigenvalues in this bounded part of the spectrum — the left-hand side is the spectral distribution in this interval applied to the function f — and you get an asymptotic formula, in which a spectral density function appears. (You have the first case, right? Sorry? In your formula you have the second case. Yes, these are eigenvalues counted with multiplicity.) The formula for this spectral density function was given later: Guillemin and Uribe showed that it should exist, and Borthwick and Uribe actually computed it, in terms of the covariant derivative of the almost complex structure. In particular, on a Kähler manifold the complex structure is parallel — I'm not sure, I know you did it as well; are we using a different method, is the result different? I'll let you sort it out. I know there is this 1 over 24; the constant was corrected by you, I know this.
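The two semi-classical properties of Toeplitz quantization mentioned at the start of this passage are, schematically (as in Bordemann–Meinrenken–Schlichenmaier; the sign and normalization of the Poisson bracket depend on conventions):
\[
T^{(p)}_f := \Pi_p\,M_f\,\Pi_p,\qquad
\lim_{p\to\infty}\big\|T^{(p)}_f\big\| = \|f\|_\infty,\qquad
\Big\|\big[T^{(p)}_f,\,T^{(p)}_g\big] - \tfrac{1}{ip}\,T^{(p)}_{\{f,g\}}\Big\| = O\big(p^{-2}\big),
\]
where Π_p is the projection onto the holomorphic sections of the p-th tensor power and M_f is multiplication by f.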
But the ∇J term was correct, I know this. OK, so anyway, the point I was making is that on a Kähler manifold the complex structure is parallel, and in that case the spectrum is all concentrated at the origin, since ∇J = 0. All right. The reason I state all these results together is that, while there are different proofs of them, there is at least one technique which works for all of them, including the almost Kähler case, and that is the rescaling / local index theory argument. This was well explained in the book of Ma and Marinescu — Ma just spoke — and the technique was also well explained in yesterday's talk. That's what I wanted to say about the quantization case. Any questions? OK. So those were the quantization results; now let's come back to the real case, the problem I stated at the beginning: the coupled Dirac operator considered before. You can apply the same rescaling / local index theory technique to this problem, and you get a result: the eta invariant of this Dirac operator in the semi-classical limit is little-o of r to half the dimension — 2m+1 is the dimension of this odd-dimensional manifold, so the exponent is m + 1/2. But as you see, unfortunately, in the previous results you had a leading term; in this case we don't get a leading term, just a bound, using the same technique. So you're going to have to work a little more if you want to see a leading term here. This eta invariant problem was first considered by Taubes, and his result had an exponent a little bigger than half the dimension — the epsilon was between 0 and 1/2 or something. The problem was also considered later with the same kind of result, but only in three dimensions and under some additional hypotheses. The reason Taubes considered this problem is that it came up in his proof of the Weinstein conjecture on contact manifolds: he proved the Weinstein conjecture in three dimensions in complete generality using the Seiberg–Witten equations, and along the way he had to consider this asymptotic behaviour. Which Weinstein conjecture is this? The conjecture on the existence of Reeb orbits on a contact manifold — maybe I'll come back to it a little later. OK, so that's the first result we have for our eta invariant. Now you can ask: is it sharp? Is this bound actually attained in some example? Let's look at a very simple example on S^1. On S^1 the Dirac operator is very easy: it's just i d/dθ + r, and I hope everybody knows how to compute its spectrum; you can even compute the eta invariant — I might be off by a half or something here, but it's essentially the fractional part of r, the fractional part of this coupling parameter. Since it's a fractional part, it is O(1), while our general result only says that it should be little-o of r to the one half. So it's much smaller than the bound.
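A hedged version of the S^1 computation just described (the half-integer shift the speaker mentions depends on the choice of spin structure; with the other spin structure the eigenvalues are shifted by 1/2):
\[
D_{r} \;=\; i\,\tfrac{d}{d\theta} + r\ \ \text{on } L^{2}(S^{1}),\qquad
\operatorname{spec}(D_{r}) \;=\; \{\,k + r \;:\; k\in\mathbb Z\,\},
\]
\[
\eta(D_{r}) \;=\; \Big(\zeta_{H}\big(s,\{r\}\big) - \zeta_{H}\big(s,1-\{r\}\big)\Big)\Big|_{s=0} \;=\; 1 - 2\{r\}\qquad (r\notin\mathbb Z),
\]
using ζ_H(0,a) = 1/2 − a for the Hurwitz zeta function; in particular η(D_r) stays bounded in r, consistent with (and far below) the general o(r^{1/2}) bound.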
So our asymptotic result is not sharp in this case. OK, so that's S^1; already you see that it's not sharp. But you might say this is too simple an example, so let's look at another, very general class of examples: S^1 bundles. (If you want, you can look at S^3, for example — the computation is not so hard there either — but I want to present the more general class.) You start with the complex manifold that appears in quantization, with a positive line bundle on it, and the real manifold you take is the unit circle bundle of this positive line bundle. Then there is a connection, the Chern connection, which gives a splitting of the tangent space of the unit circle bundle up here: the vertical tangent space of the fibers, and a horizontal subspace identified with the pullback of the tangent space of the base. We need to choose a metric on this and a spin structure. A spin structure can be pulled back from a choice made on the base. For the metric, we choose the adiabatic family of metrics, well studied in the adiabatic-limit business for Dirac operators: a family of metrics which respects the splitting, is pulled back from the base on the horizontal part, equals 1 on the generator of the circle action on the vertical part, and combines the two using an adiabatic parameter. So that's the family of metrics put on the circle bundle. We also need to choose an auxiliary line bundle, since we want to twist the Dirac operator up here; we just choose the trivial one, and we choose the one-form to be dual to the generator of the S^1 action. With all these choices we now have a well-defined coupled Dirac operator on this circle bundle, and we can try to compute its spectrum to get the eta invariant. The spectrum — well, this is not really a computation, but you can relate it to the spectrum of the Dolbeault–Dirac operator on the base. The spectrum of our coupled Dirac operator on the circle bundle comes in two types. The first type comes from the zero eigenvalue of the Dolbeault–Dirac operator on the base; it's cohomological, because the zero eigenvalue of the Laplacian downstairs is cohomological, and this eigenvalue is very explicit — it's actually just linear in r. That's the type-one eigenvalue; you know exactly how it looks. Then there are eigenvalues of type two, which are not linear in r but have square-root behaviour in r; they come from the positive eigenvalues of the Laplacian on the base. So the spectrum comes in two types, but this is not really a computation: in the second type you see this μ, which runs over the spectrum of the Laplacian on the base, so it's just a relation between the spectra upstairs and downstairs. But it is enough to get a handle on the eta invariant, just the fact that the eigenvalues come in these two types. In fact, if you choose the adiabatic parameter ε reasonably small, there is a whole theory of adiabatic limits of the eta invariant, which was started by Witten, I guess, at some point.
A general result was then given by Bismut and Cheeger, and afterwards by Dai, who generalized the previous ones. Using these adiabatic-limit results we can compute the eta invariant for ε small enough, and we get an asymptotic result for the eta invariant in this case. This asymptotic actually goes down pretty far — down to order 1, not just to the leading order. But you see that the leading expression here is discontinuous in the coupling parameter r. — I think you should cite also Weiping Zhang; the circle-bundle eta form was computed by Weiping Zhang. — That's right, yes. In the computation of the eta invariant there are three pieces: a spectral flow part, an eta form, and a transgression form. The eta form was computed by Weiping Zhang, and what I did here was compute the spectral flow part. The spectral flow is not so easy to compute for general circle bundles, but for unit circle bundles of complex line bundles you can compute it in this example. So Weiping Zhang computed part of this computation. OK, so what you see here is that the eta invariant is discontinuous in this parameter: there is a greatest-integer function appearing in the formula, and whenever r changes in a way that makes this cross an integer, the eta invariant — if you look at the expression for a while — jumps by order r^n. So the eta invariant actually jumps. You see that in this example the best you can hope to prove is O(r^n), where 2n+1 is again the dimension, but what the local index theory result gives you is little-o of r^(n+1/2). So you're still off by a power of one half between what you can prove and what you see in an example. This is what I was trying to do for a while: close this gap of one half. To prove a sharp result, we make further assumptions. The first assumption I will make is that the one-form is contact, which means that a wedged with the powers of da is nowhere zero. That's the contact assumption. There is also a contact endomorphism that you can define by contracting da with the metric; it's anti-symmetric with respect to the metric. The sharp result holds under these two assumptions: you assume that the one-form is contact, and that the spectrum of this contact endomorphism — which a priori depends on the point — is independent of the point on your manifold. Under these two assumptions I can prove the sharp result, and these are reasonable assumptions: contact forms are a very general class, and if you assume that your manifold is what's called a metric contact manifold, so the metric comes from an almost complex structure on the contact hyperplane, then the assumption is satisfied — in that case the endomorphism script-J is just J, the almost complex structure, and its spectrum is just ±i, independent of the point on the manifold. So in that case you again have the sharp result, for this large set of examples. That's the first result, and again you see there is still no leading term here. I wanted to get a leading term, and this leading term you can get if you make a further assumption.
If you further assume that the Reeb flow — the Reeb vector field is defined from the contact form by these two relations, and it generates a flow — is non-resonant... What does non-resonant mean? You look at a closed orbit; you have a Poincaré return map acting on the contact hyperplane, and you look at the spectrum of the Poincaré return map. It comes in different types: there are elliptic eigenvalues, which lie on the unit circle; hyperbolic ones, which are real; and loxodromic ones. The elliptic and hyperbolic ones come in pairs, the loxodromic ones in multiples of four. The non-resonance assumption concerns the two sets of Floquet exponents that appear here, the α_j's: they are rationally independent, so there is no non-trivial rational linear combination of them which is 0. That's the non-resonance assumption. Under this assumption you can get a leading term for the eta invariant. It is of order r^n, and what appears in the leading term again involves these exponents α_j; it is similar to the Borthwick–Uribe or Ma–Marinescu density function. This non-resonance assumption, by the way, excludes the circle-bundle example given earlier. OK, so that's the result. In the remaining time I want to tell you what's involved in the proof of these two results, the sharp results. First of all, we reduce the asymptotics of the eta invariant to trace asymptotics. There is a well-known formula for the eta invariant in terms of the heat trace, essentially due to Bismut and Freed: it is an integral of the heat trace of D e^{-tD²} against t^{-1/2}, from 0 to infinity — an integral over all time. This time integral you can separate into two parts, one for small time and one for large time. The large-time part you can write as a single trace, without an integral, of a function of the operator: the sign-error-function. This function looks like the following: the error function is rapidly decaying, and the sign-error-function is also rapidly decaying and smooth, except that it has a discontinuity at the origin. The trace of this discontinuous function doesn't have an expansion from local considerations — it is non-local — and we have to work harder to get the asymptotics of a discontinuous trace like this. But it's not so different from Weyl counting functions, if you're familiar with those: the counting function of an interval is the trace of the characteristic function of that interval, which is also discontinuous. So you can use a standard Tauberian argument to get at this discontinuous trace using trace asymptotics. The input to this Tauberian argument is a trace formula of Gutzwiller type, and what's important in the trace on the right-hand side is the insertion of this extra cutoff factor here; without this factor, this would just be a function of D over root h.
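A sketch of the heat-trace representation just described (the splitting point T between small and large time is a choice, and the kernel of D_r is assumed trivial; otherwise it contributes separately):
\[
\eta(D_{r}) \;=\; \frac{1}{\sqrt{\pi}}\int_{0}^{\infty} t^{-1/2}\,\operatorname{Tr}\!\big(D_{r}\,e^{-t D_{r}^{2}}\big)\,dt
\;=\; \frac{1}{\sqrt{\pi}}\int_{0}^{T} t^{-1/2}\,\operatorname{Tr}\!\big(D_{r}\,e^{-t D_{r}^{2}}\big)\,dt \;+\; \operatorname{Tr}\, f_{T}(D_{r}),
\]
\[
f_{T}(x) \;=\; \operatorname{sign}(x)\,\operatorname{erfc}\!\big(\sqrt{T}\,|x|\big),
\]
where f_T is rapidly decaying and smooth away from its jump at x = 0 — the "sign-error-function" of the talk — and the small-time integral is understood with the usual regularization as t → 0.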
The trace asymptotics without the cutoff you can get using local index theory — it is local. But to get the sharp result we need to consider a more refined trace, with this theta-hat — theta-check, rather; it is an inverse Fourier transform — inserted in this combination, and what's important in the insertion is that there is a full power of h in the denominator. With that full power of h in the denominator this trace is not local, but it still has an asymptotic expansion in powers of h. There is a local part in the asymptotics, but there is also a non-local part, coming from contributions of Reeb orbits: the sum over γ is a sum over Reeb orbits of the contact form, and the constants appearing in this formula are respectively the period of the Reeb orbit, the Riemannian length of the Reeb orbit, and the Maslov index. So this is an analogue of the usual Gutzwiller trace formula. But what's new here is that the dynamical contributions in the usual Gutzwiller trace formula come from the Hamiltonian flow on the cotangent space, whereas here they come from a dynamical system on the base manifold — that is somehow new. The eta invariant asymptotics follows from this trace formula. I want to tell you a little about what's involved in proving the trace formula. As I said, the usual way to prove trace formulas is using FIOs, and it involves the Hamiltonian flow of the symbol. If your operator is non-scalar you have a non-scalar symbol, but you still have the scalar eigenvalues of the symbol, and you can look at the Hamiltonian flows of these eigenvalues. If the symbol is smoothly diagonalizable, these eigenvalues are smooth functions and you have smooth flows for them; the trace formula then involves closed orbits of these Hamiltonian flows. But in our case, the symbol of the Dirac operator is Clifford multiplication, and its eigenvalues are plus or minus the absolute value of ξ plus a(x), where ξ is the covector on the cotangent space and a is the one-form. Since this is an absolute value, there is a square-root singularity: it is not a smooth function on the cotangent space — not smooth along the characteristic variety — and the symbol is not smoothly diagonalizable there, so the usual trace formula will not apply. To prove this trace formula, you begin by looking at the local model: the standard contact structure on R^3 with a nice adapted metric, where the Dirac operator looks like this — these are the three Pauli matrices, and this is the coupling, which comes from the contact form. There is a free part, which corresponds to the Reeb vector field, and a part which corresponds to a complex harmonic oscillator. This complex harmonic oscillator has a well-understood spectrum: it is conjugate to the real harmonic oscillator in one dimension, but its eigenspaces are infinite-dimensional; they are called Landau levels. So at least in the model case you completely understand the spectrum. What you try to do in general is make your Dirac operator, under the assumptions, look like the model one. In general it looks like this — this is the local expression for your Dirac operator, and these are, again, the Clifford matrices.
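A schematic form of the symbol computation behind the difficulty described here (with h = 1/r; the sign in front of a and the treatment of the base connection, which only contributes at lower order, depend on conventions):
\[
\sigma\big(h\,D_{r}\big)(x,\xi)\;=\;c\big(\xi+a(x)\big),\qquad
\text{eigenvalues } \pm\,\big|\xi + a(x)\big|_{g},
\]
so the eigenvalue branches have a conical (square-root) singularity along the characteristic variety Σ = {(x,ξ) ∈ T*X : ξ = −a(x)}, a copy of the manifold inside the cotangent bundle, which is exactly where smooth diagonalization of the symbol fails.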
This is the coupling one-form, and this is the tensor which diagonalizes the metric in the local expression. The idea is to conjugate — conjugation does not change the spectrum — to make it look as much as possible like the model Dirac operator. So this is the model part, and then there is an error; you make the error as good as you can, in particular you try to make it commute with the leading part. This is a standard procedure for normal forms of operators. Once you have the normal form, you break up the trace along the eigenspaces of the model operator, the Landau levels, and then study the trace on each level. That's the idea. Now, the normal form procedure has an interesting feature in this case. The conjugation of the Dirac operator that appears in the normal form is conjugation by an FIO, a Fourier integral operator, and the FIO that appears has an interesting form: it is a product of two operators. The first factor is an exponential of i times the Weyl quantization of a scalar symbol, divided by h — that is the FIO part. In the second factor, what's important is that there is no h in the denominator, and it is given by what's called Clifford–Weyl quantization. Clifford–Weyl quantization is the quantization not just of a scalar symbol, but of a symbol which takes values in the exterior algebra: the scalar part is quantized by the Weyl procedure, and the exterior algebra part is quantized by the Clifford procedure — hence the Clifford–Weyl quantization of a symbol. Once you quantize in this way, you can compute conjugations by such FIOs, and the reason for quantizing this way is that what appears in the conjugation formulas, when trying to produce the normal form, are Koszul differentials of the symbols that you quantized. The Koszul differentials are defined on the exterior algebra: on the exterior algebra part they act by wedge or contraction, and on the symbol part by multiplication or differentiation. These Koszul differentials appear in the conjugation formula, and what gives you the normal form is then a Hodge theorem — a symplectic Hodge theorem for the Koszul complex. That's the rough idea of how the proof goes. This normal form procedure generalizes a normal form from a recent book on semiclassical analysis, where it was done in three dimensions; here we generalize it. This gives the trace formula for small time. Then there is a large-time trace formula, which requires some more work: it needs a propagation-of-singularities lemma, or a weak version of one, which is not mine but which I found in a book — a very good one. So that's roughly how the proof goes; I don't have so much time to give the details. Let me end, as always, with some further questions, if anybody here wants to work on this, with me or without me.
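A minimal sketch of the Clifford–Weyl quantization just described (the notation Op^{cW} is mine, only meant to indicate the structure the speaker explains verbally):
\[
\mathrm{Op}^{cW}_{h}\Big(\sum_{I}a_{I}(x,\xi)\,e_{I}\Big)\;=\;\sum_{I}\mathrm{Op}^{W}_{h}\big(a_{I}\big)\;c\big(e_{I}\big),
\]
i.e. the scalar components a_I of an exterior-algebra-valued symbol are quantized by the semiclassical Weyl calculus, while the exterior-algebra generators e_I are quantized by Clifford multiplication; it is on symbols of this kind that the Koszul differentials (wedge or contraction on the exterior part, multiplication or differentiation on the scalar part) act in the conjugation formulas.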
So let me end, as always, with some further questions; if anybody here wants to work on this, join me or do it without me. There are a lot of questions to be answered here. First of all, you have this trace formula and the eta invariant asymptotics; you could try to do the same under other assumptions. Instead of a contact form you could choose some other one-form: there are many models for one-forms, and for singularities of one-forms, on a manifold. The contact assumption is not redundant, by the way, it is fairly necessary: if you do not assume your manifold is contact, it is not clear what the dynamical system is that should appear in the trace formula, so if you change the assumption, the trace formula will almost surely change. You could try to work that out. The other thing is that we considered the eta invariant; we could consider the determinant instead, and ask about the asymptotics of the log determinant in this real setting. That is not so easy: in the known asymptotics of such determinants, in the complex setting, you need positivity, which does not exist here in the real case, and you would have to understand the low-lying eigenvalues of the Dirac operator. There are people who work on expansions of low-lying eigenvalues for magnetic Laplacians and so on. Then, try to understand the propagator: you could try to understand the wave equation of the Dirac operator. Our proof does not really go through the wave equation, it uses normal forms and such, so you could try to understand the propagator and propagation of singularities for the wave equation of this Dirac operator. Another thing you could try: our technique is more refined than the local index theory technique, so you could try to apply it to the quantization problem proper, that is, to high tensor powers, the Dirac operator coupled to tensor powers and so on, and see what this local technique gives in that case. And finally, we considered the magnetically coupled Dirac operator; you could also consider the magnetic Laplacian, which is slightly different, and try to prove a trace formula for the magnetic Laplacian instead of the Dirac operator. This is a semiclassical problem; you could also consider the microlocal analog of this problem, which is related to what is called the sub-Riemannian Laplacian. That is a much bigger subject, sub-Riemannian geometry, and I do not have time to explain it. So that is all. Thank you. Any questions? So you say one might try to apply these techniques in the even-dimensional case? Yes, that is a general question; you would have to figure out the right assumptions. What is the analog of the contact assumption in the even-dimensional case? I am not so sure, but we could try to think about it. Could we get something about existence of closed orbits, at least in some particular cases? I do not think this is a possibility. Even the usual Gutzwiller trace formula involves contributions from periodic Hamiltonian trajectories, and that does not really give you existence of closed Hamiltonian trajectories; there are examples where you do not have closed trajectories, but you still have the trace formula. So, one more question.
This cannot be properly mixtured.
For manifolds including metric-contact manifolds with non-resonant Reeb flow, we prove a Gutzwiller type trace formula for the associated magnetic Dirac operator involving contributions from Reeb orbits on the base. As an application, we prove a semiclassical limit formula for the eta invariant.
10.5446/59259 (DOI)
It's such a technical title; my talk today is basically about proper actions of Lie groups on manifolds. While I was preparing the title I was thinking of maybe an alternative title, but I was a bit afraid of that one, so maybe I should write it here. I guess there are many experts here, and different experts probably have their own preferred definition of differential forms for a proper action. So, to be more precise, let me try to give this related, more precise title. Maybe let me start with differential forms as I learned them in a graduate differential geometry class. If we have a smooth manifold, we are all familiar with differential forms. But now, if we have a group, say a finite group, acting, then the answer is actually not unique anymore. We can consider at least two possible answers. We can consider the Gamma-invariant forms, viewed as the differential forms for the action. But on the other side, for people interested in equivariant index theory, there is another very natural choice: you take a direct sum of differential forms over the possible fixed-point submanifolds, one summand for each group element, and then you take some kind of Gamma-invariant part. Now, this kind of list gets much longer when we consider general proper actions, so I will not even try to write down the possible list; as I said, there are many possible answers for different purposes. Is there something missing here, a subscript? Oh, yes, thank you. Should it be this? There is an action of Gamma on the union of the fixed-point sets, and then you consider the invariant forms. Are the fixed-point sets preserved? OK, why? Well, let me be precise about what it means: I have this Gamma-action, and what I do is, an element gamma sends the fixed-point set of gamma zero to the fixed-point set of gamma gamma zero gamma inverse; I act on the group part by conjugation and I also move the M part, and then you can ask about the Gamma-invariant forms. But now let me explain one approach, one possible proposal for these differential forms, which probably fits better if I put this word in the title. OK. Our motivation comes from looking at the basic case of differential forms on a manifold. If we start with zero-forms, we know they are just the functions. For one-forms, we know they all look like f0 df1; that is a pair of functions, so you can see this is something related to the tensor product of the algebra of functions with itself. Of course, one wants to put the right topology on this tensor product to make it a nice object. And this continues for k-forms: we can take a (k+1)-fold tensor product. Now we have a collection of spaces, and the natural thing to think about is, well, maybe we can fit this into some kind of chain complex. That is the subject we are going to introduce and discuss here: Hochschild homology. For our discussion today, for general proper actions, you can see that we will need some kind of replacement of this: we need an algebra, or maybe some more general category, and then we need the homological-algebra part to define the right homology. So let me move on to it. Any questions so far? As I said, I will be more than happy to discuss with you your favorite choice; I am sure everyone has their own, and probably they are all different.
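To make the motivating picture concrete (this is the standard heuristic, not anything specific to the talk): a k-form is encoded by k+1 functions, suggesting the assignment

\[
f_0\otimes f_1\otimes\cdots\otimes f_k \;\longmapsto\; f_0\, df_1\wedge\cdots\wedge df_k ,
\qquad
C^\infty(M)^{\otimes(k+1)} \longrightarrow \Omega^k(M),
\]

and the chain complex introduced below is exactly the one that turns this assignment into a map of complexes.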
So now let us start with the algebra part: the convolution algebra. Yesterday we saw that, given a Lie group, we can consider its convolution algebra. For my interest today I will consider a kind of smooth version: the compactly supported smooth functions on the Lie group, with the convolution product. But today I have not only a Lie group, I also have a manifold, so we consider a kind of generalized convolution algebra for such an action. The first observation is that we look at the smooth functions on the manifold; we have a group action, and G acts on these functions just by translation: g acting on phi gives the function sending x to phi of g inverse x. Using this action we can build a kind of convolution with coefficients. The space we are going to consider, and I will use this notation for it, the algebra of the group action, is the compactly supported smooth functions on the group with coefficients in our algebra of smooth functions on M; let me also include the compact support assumption. On such a space we can consider the following generalized convolution product: if we have two functions on G with values in the functions on M, their product is given by an integration over G with respect to a Haar measure. One of the g's should be an h? Sorry, yes, an h. So this is the type of convolution product that will be interesting. Now, with this algebra, I will try to explain what is meant by Hochschild homology. So let us go to the general definition of Hochschild homology. This works for a general algebra over a field; let us say for us over the reals or the complexes, characteristic zero, but you can work with more general coefficients. As motivated just now, we consider tensor powers of the algebra with itself, and we consider the following chain complex: C_k is just the (k+1)-fold tensor power of the algebra. And I forgot to mention: this algebra is not just a bare algebra, it carries a very nice topology, the Frechet topology, so when we talk about this tensor product we take a topological tensor product, the projective tensor product. Then you can define the differential. The differential decreases the degree: given a (k+1)-fold tensor a0 tensor ... tensor ak, we need to construct a k-fold tensor, and what we do is contract any two neighboring elements by multiplying them, and take the alternating sum; for the last term, ak is moved around to the front and multiplied with a0. Then you can check that b squared is zero. The homology associated to this complex is what we call the Hochschild homology of our algebra: it is defined to be the homology of this complex. So that is the definition. Let me give you a few examples to explain why this kind of homology group is interesting and why it is related to quantization and also to equivariant index theory. Examples. The first example is the one we just started with: the algebra is the algebra of smooth functions on a manifold. There is a very nice map: you can map a tensor of functions to a differential form. This gives a map from the Hochschild complex to the differential forms; it is a map of complexes. It turns out that in the case of a smooth manifold this map is actually a quasi-isomorphism.
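Writing out the formulas just described (the Hochschild boundary is standard; the convolution formula is the usual one for a crossed product, so the exact placement of inverses is my convention rather than the speaker's):

\[
(f_1 * f_2)(g, x) \;=\; \int_G f_1(h, x)\, f_2\big(h^{-1}g,\; h^{-1}\!\cdot x\big)\, dh ,
\qquad f_i \in C_c^\infty\big(G, C^\infty(M)\big),
\]

\[
b(a_0\otimes\cdots\otimes a_k)
= \sum_{i=0}^{k-1} (-1)^i\, a_0\otimes\cdots\otimes a_i a_{i+1}\otimes\cdots\otimes a_k
\;+\; (-1)^k\, a_k a_0\otimes a_1\otimes\cdots\otimes a_{k-1},
\]

with \(b^2 = 0\) and \(HH_\bullet(A) = H_\bullet(A^{\hat\otimes(\bullet+1)}, b)\). The quasi-isomorphism just mentioned is the normalized version of the earlier heuristic map, \(a_0\otimes\cdots\otimes a_k \mapsto \tfrac{1}{k!}\, a_0\, da_1\wedge\cdots\wedge da_k\).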
So this is the classical result of Hochschild, Kostant, and Rosenberg, with the zero differential on the right; here on the right there is no differential, I did not write one. Now let us get closer to our interest, with a group included. The next case is the orbifold case. We have a finite group Gamma acting, a discrete group, and we can consider this algebra; it is usually called the crossed product algebra. We can ask the same question about its Hochschild homology, and the answer is the second one I told you about at the beginning: the Gamma-invariant part of the direct sum, over group elements, of the differential forms on the fixed-point sets. That is for a finite group. Then you can ask, how about changing the group a little? We can consider the case where G is compact, and if I take M to be trivial, a point, then it becomes the convolution algebra of the group. This is just the smooth version of the group algebra; since G is compact, the smooth functions are automatically compactly supported. Now we can ask about the answer. It turns out, by the Peter-Weyl theorem, that this algebra is basically a direct sum of matrix algebras, and matrix algebras have trivial higher homology because they are Morita equivalent to the complex numbers. With that in mind, this basically says that when k is 0 we get the class functions, the smooth conjugation-invariant functions, the space of all of them, and when k is positive we get zero. Let us also consider the general picture, to relate to what you are interested in from the earlier talks. For a general algebra, what is the interpretation of HH zero? Remember, for HH zero the differential out of C zero is trivial, it is the bottom of the complex, so HH zero is just the algebra modulo the image of the differential from C one, and that image is spanned by the commutators. You can see that the linear functionals on this are just the traces, so HH zero gives the information about traces, and in general we can think of Hochschild cycles, or Hochschild homology classes, as higher traces of our algebra, and we can use that to study the algebra. That is the nice connection to what we saw yesterday about traces on the algebra, like the orbital integrals. So that is useful. The next topic I want to discuss is the proper action. To explain the conjecture about it, I recall the definition of a proposal for these differential forms, called the basic relative forms, after Brylinski. To do that, let me start by explaining what this object is, and then what is meant by basic. First of all, I introduce the following space: a closed subset of the product of G and M, as a generalization of the loop space. I call it Lambda zero of M with G; I use this notation to hint that there is actually a whole family of such objects. It is the closed subset defined as follows: we take pairs of a group element g zero and a point of M, and we require that g zero fixes the point. Now you can see the difference between the orbifold case and the proper action case. In the orbifold case the group part is discrete, so every piece of this set is actually a smooth manifold. But for a proper action the G part has a nontrivial topology, so this is a closed subset of the product but not a manifold anymore; it is a stratified space, and it is actually semialgebraic, locally.
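In symbols, as I understand the construction (the notation is mine):

\[
\Lambda_0(M, G) \;=\; \{(g_0, x)\in G\times M \;:\; g_0\cdot x = x\},
\qquad
h\cdot(g_0, x) \;=\; \big(h\, g_0\, h^{-1},\; h\cdot x\big),
\]

a closed, in general singular, subset of \(G\times M\), carrying the conjugation-translation action of \(G\) that generalizes the finite-group picture from the beginning of the talk.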
So that is what I mean by this set. On this set there is actually a nice G-action, and it is not just an action on the M part: G acts on the group part by conjugation and on M by translation, the same formula we saw in the orbifold case. Is this on the inertia space? Well, it is still on the inertia space, thank you for bringing that up, but these are not the differential forms on the inertia space. Now I want to tell you what we mean by relative forms, and for that, let me first tell you what we mean by the relative cotangent bundle. It is a generalization of the orbifold case. Our new object is a sheaf on the quotient of Lambda zero by G. And is this to be thought of as a stack? You could really do this as a stack; I also cannot avoid that, I think. Thank you. Now, what do we mean by this? Intuitively, it is just what we wrote in the orbifold case: we consider a sheaf such that at every point the stalk is basically the cotangent space, at the point of M, of the fixed-point set. This is a very natural object in the orbifold discussion, but now you have to put these spaces together, and the slightly tricky part is that you have to introduce the right topology to make it a sheaf. Algebraically, the right formulation is through Omega one, the cotangent complex, following algebraic geometry a little bit. But the intuitive definition is this: if we have a section of this sheaf, what do we mean by smooth? We use the restriction of the following bundle over G cross M. That bundle has a nice smooth manifold structure, so we can talk about its smooth sections, and when we say a section of our sheaf is smooth, it means that, locally, it comes from a smooth section of that bundle. We use this to topologize the sheaf and make it a smooth object. That is what we call the relative cotangent bundle; you can see it is relative because we are not considering the cotangent bundle of the whole thing, only along the M direction. Now we can consider the relative forms by taking the exterior powers of this sheaf: smooth sections of the k-th exterior power are called relative k-forms. So they are relative to the M part. As we observed just now, our group acts on this space, which one usually calls the inertia space; here we have Lambda zero. So to get the right differential forms we also need to take the G-action into account, and that is the next step: how does G get involved? Here we have the following observation, which I think we already saw in one of the earlier talks. For any g zero we have the centralizer: the group elements g such that g g zero equals g zero g. This is a subgroup, and the observation is that, since these elements commute with g zero, this subgroup acts on the fixed-point manifold of g zero, by compatibility. With this, you can consider the associated Lie algebra action; that is why we use the fraktur notation. With this Lie algebra action we can consider a special type of relative k-forms: those forms which vanish under contraction with the fundamental vector fields of this action. This is actually a smooth family of conditions depending on g zero.
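Spelling the definition out, in the notation I have been using (this is my paraphrase of the construction, with the topological subtleties suppressed): the stalk of the relative k-forms at a point \((g_0, x)\in\Lambda_0\) is meant to be \(\Lambda^k T^*_x M^{g_0}\), with smoothness defined by local extendability to sections over \(G\times M\); the centralizer

\[
Z(g_0) = \{\,g\in G : g\,g_0 = g_0\,g\,\}
\]

acts on the fixed-point set \(M^{g_0}\), and a relative form \(\omega\) is called horizontal when \(\iota_{X_\xi}\,\omega = 0\) for every \(\xi\) in the Lie algebra \(\mathfrak z(g_0)\) of \(Z(g_0)\). The basic relative forms are then the \(G\)-invariant horizontal ones.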
Now, with this, we can define: a relative form is horizontal if its contraction with the fundamental vector field of xi is equal to zero for all xi in the Lie algebra of the centralizer, and this for all group elements. Those are the horizontal forms. Another observation is that the G-action on the inertia space, this kind of loop space, also descends to these horizontal forms, so the basic relative forms are just the G-invariant horizontal relative forms. That is a long definition with a lot of words, so maybe I should show you an example; then you will probably get some idea. The simplest possible case: a circle action on R2 by rotation. In this case, what is our Lambda zero? It is a subset of the product of S1 and R2, and it is already an interesting set. What do we have? For the identity group element the fixed-point set is the whole R2, but for a non-identity group element the only fixed point is the origin. To draw it: we have a circle, we have the origin over every point of it, and over the identity we have a whole slice, and this whole thing should be viewed as a subset of R2 cross S1. As I said, we need some kind of algebra, or algebraic geometry, to describe this, because it is not a smooth manifold. So now, what are the basic relative forms? Here is a symbol for them. The basic relative zero-forms over this Lambda zero of R2 with S1: you can see that they are just the S1-invariant smooth functions on this set, but I have to say what smooth means, in the algebraic sense: a smooth function on this kind of singular set is, at least locally, the restriction of a smooth function on S1 cross R2. That is the zero-forms. Then I can ask what happens for one-forms. For one-forms, you can see that for group elements other than the identity the fixed-point set is zero dimensional, so there is no nontrivial form supported there; it is all located at the identity component. So it looks like the one-forms on R2 invariant under the S1 action, but I also have the horizontal condition, so it is actually, let me just say it like this, generated by r dr, where r is the radial function: these are really the basic, that is invariant horizontal, one-forms on R2 with respect to the S1 action. You can ask about higher forms; the higher forms actually all vanish. If you take the volume form on R2, wouldn't that be invariant under S1? Why does it not show up? Is it basic? It is invariant, but it is not basic: it is not horizontal, so unfortunately it does not show up in this situation. OK, so that is the example. Now let me state the general conjecture, the structural theorem. This is from Brylinski, from an unpublished paper around 1987; at least, from his paper we cannot completely recover the theorem, maybe he had a different version. His theorem, or conjecture, is that the Hochschild homology of the convolution algebra we considered is exactly this kind of basic relative forms. Next I want to tell you what we are able to show at the moment. Sure, no question; if I get to it, I will. Well, I will explain what we have done, where we are, and maybe what we hope to do; please be patient for a second. So this is a theorem, with a caveat to it.
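Stated as a formula, the conjecture being discussed (my paraphrase of the Brylinski statement as presented in the talk, with the sheaf-theoretic and topological qualifiers suppressed):

\[
HH_\bullet\Big(C_c^\infty\big(G, C^\infty(M)\big)\Big)\;\cong\;
\Omega^\bullet_{\mathrm{basic,\,rel}}\big(\Lambda_0(M,G)\big),
\]

the basic relative forms on the inertia space. In the rotation example above the right-hand side is, as just computed, the invariant functions on \(\Lambda_0\) in degree zero, the multiples of \(r\,dr\) over the identity component in degree one, and zero in higher degrees.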
First, I want to remind you that there is a natural map from the Hochschild chain complex of this algebra, viewed as a complex of sheaves, or maybe presheaves, over the quotient stack, to the complex of relative forms: a natural map of sheaf complexes from the complex for the algebra of G and M to the relative forms complex, as sheaves over M modulo G as a stack. This map is a generalized Hochschild-Kostant-Rosenberg map. In which cases can we prove something? In the current state we are not able to prove that this map is a quasi-isomorphism for general G, but we can prove the following special cases. For G equal to S1 we know this is a quasi-isomorphism, so the simplest possible case. And then there is another case where we know it: roughly, when the isotropy subgroups are simple enough, namely when every isotropy group is either trivial or the whole group. What we use there is the stratification: we need the algebra for the isotropy being the whole group and for the isotropy being trivial, and then we know this is also a quasi-isomorphism. Another way to put it: if we use the isotropy type as the criterion to stratify our quotient, this condition says that the stratification is very simple, with only two strata; if there are not too many strata, then you can really prove this. In general, you can see that the difficulty is really about the data of the stratification of the quotient. So that is where we are. Does that answer your question? Yes, you can take a reduced version and it looks right, as long as you work with the other algebra, the one in the smooth direction. This observation, in this generality, was not known before us; the compact group case was known earlier, from work in the 80s. How much time do I have? Five, thank you. So let me end my talk with a few remarks. The first remark is that we have this Hochschild homology here, and we can actually improve it by introducing another complex; I will not get into detail because of time. You can consider the cyclic complex, which is viewed as the noncommutative de Rham theory, and you can ask the same kind of question there. For the orbifold case this is also well studied, and one can continue to the periodic version; there are different versions. For this one, it looks complicated, but actually the answer is basically the equivariant forms. Let me very briefly introduce them. What we consider here is, instead of the basic relative forms, which are forms on the fixed-point sets with the centralizer group acting, a kind of equivariant theory of this: you consider something like germs of functions supported near the origin of the Lie algebra of the centralizer of g zero, near the identity, with values in forms. And then you consider the Cartan model for equivariant cohomology, where you have both the contraction and the de Rham differential; the sum of the two is what is used in the definition in this case, the equivariant forms.
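A minimal reminder of the Cartan-model differential being invoked (this is the standard equivariant de Rham construction, not anything specific to the talk): on invariant maps from the Lie algebra, or on germs at the origin as in the talk, with values in forms, one uses

\[
(d_G\,\alpha)(\xi) \;=\; d\big(\alpha(\xi)\big)\;-\;\iota_{X_\xi}\,\alpha(\xi),
\qquad \xi\in\mathfrak g,
\]

where \(X_\xi\) is the fundamental vector field of \(\xi\); the two pieces are exactly the de Rham and contraction parts just mentioned, and \(d_G^2 = 0\) on invariant elements.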
You can see that the relation between this one and that one is basically a spectral sequence associated to this complex: you take the E1 page, the first page of the spectral sequence; that is how you get from one to the other. Yes, the statement is that the basic relative one is the E1 page of a spectral sequence associated to the other one. Another remark, to connect to quantization: there is also a very interesting dual complex that people often study, the Hochschild cochain complex of the algebra. Here I am looking at the case of the algebra itself, and you can ask about HH in degree two. This is related to the infinitesimal deformations of the algebra, which in the classical case correspond to Poisson brackets. Now you can see that, in the case of an orbifold, we have invariant symplectic, or Poisson, structures which contribute to this, but you also have other contributions, coming from fixed-point data of codimension two. What are those objects and how do we quantize them? That is a somewhat mysterious question, probably related to deformation of the singularities. But let me stop here. Thank you. Any questions? No, I have not done that yet, but certainly this could fit into that framework. Yes, for that part I agree; but here I do not think the tensor product is really the difficulty, it is more the geometric part: the geometry of Lambda zero, which is a kind of singular subset, a semialgebraic set; we are using some real algebraic geometry. The semialgebraic geometry here behaves much better than general real algebraic geometry, that is true. There is one more question. So I have a question: in which way can this kind of differential forms enter index theory? That is a very good question. I guess, and this is just a wild guess, probably I should not even say it: if you ask about differential K-theory, then maybe this shows up, because for that you need differential forms, not just cohomology or K-theory classes; there have to be differential form representatives, something more refined. Thanks for listening. Have a great evening. Thanks again. Thank you.
For a compact Lie group action on a smooth manifold, we will introduce a complex of basic relative forms on the inertia space, which was originally constructed by Brylinski. We will explain how basic relative forms can be used to study the Hochschild homology of the convolution algebra. This is work in progress with Markus Pflaum and Hessel Posthuma.
10.5446/59260 (DOI)
It's going to be difficult; maybe I'll redo it. OK, so on this left-hand side you should think that there is an integral over the loop space, and on the right-hand side just a sum over selected loops, which are precisely the closed geodesics. It is in this sense that, eventually, the interpretation of the proof and of the formula I will give will be as a localization formula in equivariant cohomology, where on the left-hand side we have something evaluated on the full loop space, an integral over the full loop space, and on the right-hand side we simply evaluate sums over closed geodesics. Maybe a stupid question: when you say equal, are these things exactly equal? Exactly equal. And is it some kind of asymptotic expansion? No, it is not an asymptotic expansion; in this case it is exactly equal, as equal as it can be. It is an equality that we will eventually describe as an identity of the same sort as an index formula, in some way. So the purpose of the talk will be to extend this very explicit formula to general locally symmetric spaces, and to explain how to obtain the formula by an interpolation process: we will actually interpolate smoothly between the left-hand side and the right-hand side. This connects with infinite-dimensional versions of the Duistermaat-Heckman and Berline-Vergne formulas. So, first of all, Lefschetz formulas. I will start with some very general arguments. Euler characteristics: I take X to be a compact manifold and I consider its Euler characteristic. As you know, there are many ways of computing the Euler characteristic, either through a triangulation, or in another way, through the heat kernel. You introduce the Hodge Laplacian of X associated with a given Riemannian metric on the manifold, and we have this basic formula, which says that the Euler characteristic of X can be evaluated in terms of traces of certain heat kernels. Actually, there is a sign, Trs, which means supertrace: it is the graded trace, the alternating sum of traces, and this is for the heat kernel of the Hodge Laplacian acting on the forms of the various degrees. You take the alternating sum of these traces and you get the Euler characteristic. This formula is quite easy to prove. One use of such a formula is that, by making s tend to 0, you obtain Chern-Gauss-Bonnet; more or less, this is the scheme of the heat equation proof of the index formula. Now, Lefschetz formulas. I take a slightly more refined object: let g be a diffeomorphism of X. I know that g acts on the cohomology of the manifold, so you can define the Lefschetz number: the alternating sum of the traces of the action of g on the cohomology. This is a global invariant. Again, you have a formula of McKean-Singer type, which says that the Lefschetz number, which only involves the cohomology, can be evaluated with a much bigger object, the full de Rham complex, just as before. The formula is almost the same, except that you introduce the action of the diffeomorphism in the trace. Again, such formulas are quite easy to prove, and by making s tend to 0 we obtain the Lefschetz number of g as a sum of local contributions of the fixed points of g; that is the Lefschetz formula. So I wrote here an interpolation formula, in which there is a g missing; the interpolation is meant to do the following thing.
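For reference, the two heat-kernel identities being invoked are the McKean-Singer formulas; they hold for every s greater than 0, which is exactly the interpolation property used here:

\[
\chi(X) \;=\; \mathrm{Tr_s}\big[e^{-s\,\square^X}\big]
\;=\;\sum_i (-1)^i\,\mathrm{Tr}\big[e^{-s\,\square^X}\big|_{\Omega^i(X)}\big],
\qquad
L(g) \;=\; \sum_i (-1)^i\,\mathrm{Tr}\big[g^*\big|_{H^i(X,\mathbb R)}\big]
\;=\; \mathrm{Tr_s}\big[g^*\,e^{-s\,\square^X}\big].
\]

As \(s\to\infty\) one projects onto harmonic forms and recovers the cohomological side; as \(s\to 0\) one obtains the local side, Chern-Gauss-Bonnet and the fixed-point formula respectively.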
If you make s tend to infinity here, you obviously get the Lefschetz number, because you project on the harmonic forms, which represent the cohomology; and on the right-hand side we take the limit as s tends to 0 and obtain whatever formula there is for the Lefschetz number of g. At the same time this object remains constant, so it interpolates naturally between the left-hand side and the right-hand side. This is exactly the mechanism that will apply in the case of the trace formula. So let me now take X a compact Riemannian manifold, let Delta X be the Laplace-Beltrami operator on X, and for fixed t let me just call g the heat operator; I deliberately use the notation that I used before for the diffeomorphism. I am going to ask four questions. The first question is: is the trace of the heat kernel an Euler characteristic? What I mean is: can I think of the trace of the heat kernel as an Euler characteristic of exactly the same type as before, just with the proper changes? The second question is: can I express this Euler characteristic, this generalization of the heat trace, using a much bigger object? Before, in the case of the cohomology, we resolved the cohomology by the de Rham complex; I am asking, can I resolve, in a suitable sense, the space of smooth functions with values in R by a certain complex, and re-express the trace in terms of alternating sums of traces on a much larger object? The analogy, again, is the passage from the cohomology to the de Rham complex; we want to pass here from the smooth functions to some complex which resolves the smooth functions on X. The third question will be: by making b tend to infinity, do we obtain Selberg's trace formula? In other words, can we think of this type of trace formula as a generalized index formula? Again, I give here the interpolation scheme: here is a trace, on smooth functions, of the heat kernel, which is a global invariant; by making b tend to infinity, whatever this parameter is for the moment, we will obtain Selberg's trace formula, which we will think of as a local formula. What this means, by analogy, is that we will have to think of the smooth functions as the cohomology of some complex, and even more, as the harmonic objects in some complex. So I will try progressively to give the instruments which allow us to do this. The answer is yes. First of all, the question of resolving the space of smooth functions. In the case of a manifold we took the cohomology and resolved it, in the proper sense, by the de Rham complex. Here we ask: can we think of the smooth functions on X as the cohomology of something? Let me construct an infinite number of ways of doing this. I give myself a real vector bundle E on X, and I write X-hat for the total space of E. Then I introduce the space R, the relative de Rham complex: the smooth forms on the total space which are just forms along the fiber, so this is the family of de Rham complexes along the fibers, and d E is just the fiberwise de Rham differential. Now I claim that, by the Poincare lemma, the cohomology of this complex is precisely the smooth functions on the base. This is because the cohomology of the de Rham complex along each fiber is concentrated in degree 0: it consists of the functions which are constant fiberwise, meaning that they come from the base.
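In formulas, the resolution being proposed, as I understand it (a fiberwise Poincare lemma; smoothness along the base is part of the statement):

\[
\mathcal R^\bullet \;=\; C^\infty\big(\hat X,\ \pi^*\Lambda^\bullet E^*\big),
\qquad d^E = \text{fiberwise de Rham differential},
\]
\[
H^k\big(\mathcal R^\bullet, d^E\big) \;\cong\;
\begin{cases}
C^\infty(X), & k = 0,\\
0, & k > 0,
\end{cases}
\]

since each fiber of \(\hat X = \mathrm{tot}(E)\to X\) is a contractible vector space.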
So this is a complex whose cohomology is the smooth functions. Now, does this complex have a Hodge theory? Does this new complex R have a Hodge theory? I claim that it does, because if you introduce a Euclidean metric on E, there is a Hodge theory for all these vector-space fibers, and this Hodge theory is, in some sense, given by the Witten Laplacian. Just a short reminder about the Witten Laplacian. In the very simple case, I give myself a Euclidean vector space E. The classical, obvious Hodge Laplacian has continuous spectrum, but in some sense the Witten Laplacian has discrete spectrum. One way to view it is to twist the de Rham operator by the Gaussian function; this does not change the cohomology. You construct the corresponding Hodge Laplacian by taking the formal adjoint, and what you get is essentially a Laplacian with a first piece H, which is a harmonic oscillator, and a piece N, which counts the degree in the exterior algebra. So you recover, in some sense, the proper computation of the cohomology of a vector space, which is concentrated in degree 0, because the kernel of H is one dimensional and sits in the 0-forms. Just a remark: via the so-called Bargmann isomorphism, you can instead replace the de Rham complex of E by the algebraic de Rham complex on E; that is, this complex is quasi-isomorphic to the algebraic de Rham complex, and then what appears are the polynomials, the symmetric algebra, which as we know is a natural counterpart to the exterior algebra. This construction will play an important role in the sequel. Now we have our base manifold X and the fiber bundle E, and the question will be how to couple the base and the fiber so that we can do something. We need to introduce, first of all, the heat operator on the base X, coupled with the Witten Laplacian on E. This is the way we will approach the construction of this, ultimately, hypoelliptic Laplacian: how to couple the base, which carries the heat kernel, with the fiber, which carries the Witten Laplacian. Let me explain the construction explicitly in the case of a symmetric space. Let G be a real reductive group, K a maximal compact subgroup, and X equal to G over K the symmetric space. I then introduce the corresponding Cartan splitting, equipped with the non-degenerate bilinear form B. This Cartan splitting descends to a vector bundle E which is made of two pieces, TX plus N: it is a general fact that on a symmetric or locally symmetric space the tangent bundle has a natural companion, which is this bundle that I call N; TX is modeled on p and N is modeled on k. Our interpolation will ultimately be built on the total space of this vector bundle. We will explain how to couple X and TX plus N by lifting everything first to G times the Lie algebra. Just an example: in the case where G is SL(2,R), K is S1, X is the hyperbolic plane, and TX plus N is of rank 3, so that the total space of the fiber bundle is of dimension 5. So how are we going to construct this coupling? Casimir and Kostant. First of all, on the reductive group we have the Casimir operator, a second-order differential operator acting on the group, which is neither elliptic nor hyperbolic: it is, in some sense, positive in the p-direction and negative in the k-direction.
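Recalling the computation invoked a moment ago for the Witten Laplacian (standard Witten deformation with the quadratic function; the constants are in my normalization): with \(f(Y) = |Y|^2/2\) on the Euclidean space \(E\), \(\dim E = n\), set \(d_f = e^{-f}\, d\, e^{f}\); then

\[
\square_f \;=\; (d_f + d_f^*)^2 \;=\; -\Delta \;+\; |Y|^2 \;-\; n \;+\; 2N_\Lambda ,
\]

where \(N_\Lambda\) counts the exterior degree. The kernel is one dimensional, spanned by \(e^{-|Y|^2/2}\) in degree 0, which is the Hodge-theoretic statement that the cohomology of a vector space is \(\mathbb R\) concentrated in degree 0.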
It is a second-order differential operator, and it lies in the center, in the center of the enveloping algebra. Now I introduce the Clifford algebra c-hat of g, which is the Clifford algebra of the Lie algebra with the form minus B, and as we know this Clifford algebra acts on the exterior algebra of g-star. I also introduce U(g), the enveloping algebra, if you like the algebra of invariant differential operators on the group G, and the Kostant cubic Dirac operator is an element of the Clifford algebra tensored with the enveloping algebra. It is a slight but significant modification of the construction of the classical Dirac operator; it is constructed via the fundamental closed three-form of the group G, which I wrote here, and here is a short formula for it. You do not need to know it: this part is, in some sense, the classical Dirac operator, where the e i are invariant differential operators acting on G, and here there is a cubic correction in terms of the Clifford image of the three-form. Now, the basic result of Kostant is that the square of the Kostant operator, with my conventions, is equal to minus the Casimir; with another convention you would get plus the Casimir, plus some irrelevant constant. What this means is that the Casimir operator has a natural square root which is a differential operator, and this we will exploit in the context of analysis. Let me observe what the Kostant operator acts upon: it acts on smooth functions on G with values in the exterior algebra of g-star, while the Casimir operator itself acts on the smooth functions on G with values in R. So in some sense we get a square root of the Laplacian, but it acts on the wrong space; it is like the square root of minus one, which lives just outside the real line. Why Kostant? Because I had K already, for the maximal compact subgroup, and I did not want to have the question of what this K is; so this was made for you. Now, the operator D b. Let me construct a sort of generalized Dirac operator. This Dirac operator will act on G times the Lie algebra: on smooth functions on G times g, with values in Lambda g-star. Sorry, this is going too fast. Here is the formula. I have a first term, which is just the Kostant Dirac operator; this one acts on C-infinity of G tensored with Lambda g-star. That is the first piece. This one, you should not look at; absolutely not, do as if it were not there. And this piece here is just a version of the Hodge-de Rham-Witten operator: a sort of d plus d-star, if you like, but in the proper Hodge theory, that is, in the proper Hodge theory of p and in the proper Hodge theory of k, which involves a Witten twist. You see that the two pieces, p and k, are treated differently; this different treatment reflects the fact that we are on a reductive group. But essentially, the effect when you square this is to produce a very good Hodge Laplacian. In some sense, this d plus d-star acts on C-infinity of p with coefficients in Lambda p-star, and the other one acts on C-infinity of k with coefficients in Lambda k-star. If you combine all of this, you find that the new operator D b, which I will call, for lack of a better name, a Dirac operator, acts on the smooth functions on G cross g with values in Lambda g-star.
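To record Kostant's identity quoted above, schematically (the coefficient and normalization of the cubic Clifford term depend on conventions, so I leave them implicit; the sign of the square is as the speaker states it):

\[
\widehat D \;=\; \sum_i c(e_i)\, e_i \;+\; c(\kappa)
\ \in\ \hat c(\mathfrak g)\otimes U(\mathfrak g),
\qquad
\kappa(a,b,c) = B([a,b],c),
\qquad
\widehat D^{\,2} \;=\; -\,\mathrm{Cas}^{\mathfrak g},
\]

so the Casimir acquires a differential-operator square root, at the price of acting on functions with values in \(\Lambda\mathfrak g^*\) rather than on scalar functions.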
So what is the hypoelliptic operator in this context? For the moment I am working on the group G times the Lie algebra g. I take the square of the previous operator, D b squared, and then I subtract from this square the square of the Kostant operator; if you like, I compute A plus B squared minus A squared. Here is a formula for this operator, which I will show you. But before doing this, I descend: I quotient the whole construction by K. In other words, for the moment we have done analysis on the group times the Lie algebra, and now we have to descend everything to the symmetric space. As I explained before, the Lie algebra g, with its Cartan splitting, descends to TX plus N, and G cross g descends to the total space X-hat of TX plus N. Our new operator L b X, obtained by descent, acts on the smooth functions on X-hat with values in the exterior algebra of T-star X plus N-star. Again, using the fiberwise Bargmann isomorphism, instead of viewing the construction as acting on a bigger space, the total space of a vector bundle, you can view this operator as acting on sections of an infinite-dimensional vector bundle, which is just the symmetric algebra of T-star X plus N-star tensored with the exterior algebra of T-star X plus N-star. Here is the formula for L b X. It contains, along the fiber, the harmonic oscillator of TX plus N; in some sense, along the fiber it is just the Witten Laplacian. What appears here as the operator which differentiates in the base direction is the generator of the geodesic flow. And we have a bunch of other matrix terms that I will not explain. There is also this extra term here, which is a term of order four, which means that we are leaving the sheltered world of harmonic oscillators and going to something slightly more complicated, operators with higher-degree polynomial coefficients; but this piece is absolutely necessary to make the theory work. This operator is hypoelliptic. What does hypoelliptic mean here? It means that its analysis is good in spite of the fact that it differentiates twice in the fiber direction but only once in the base direction; analytically it is good, it belongs to the class of operators considered by Hormander. What I am saying, and I will not show this, is that in the proper sense this operator deforms the original Casimir operator acting on X, or if you like deforms the original Laplacian: there is a collapsing phenomenon, relatively subtle, which means that, analytically speaking, this operator collapses to the original Laplacian. But now the wonderful thing is that when b tends to infinity it is the geodesic flow which dominates, so when b tends to infinity this forces the localization on closed geodesics when you consider traces of heat kernels. Now, the case of locally symmetric spaces. Let Gamma be a cocompact, torsion-free discrete subgroup of G, and consider the quotient of X by Gamma, which is a compact locally symmetric space. All the constructions I have done, first over G and then descended to X, can of course be descended to the quotient, because all the objects I introduced are properly equivariant.
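Before passing to the locally symmetric quotient, here is only a caricature of the operator just described, keeping the terms named in the talk and dropping all Clifford and matrix terms as well as the exact constants (so this is not the precise operator):

\[
L_b^X \;\approx\;
\underbrace{\frac{1}{2b^2}\Big(-\Delta^{TX\oplus N} + |Y|^2 - \mathrm{rk}(TX\oplus N)\Big)}_{\text{fiberwise harmonic oscillator / Witten Laplacian}}
\;-\;\underbrace{\frac{1}{b}\,\nabla_{Y^{TX}}}_{\text{generator of the geodesic flow}}
\;+\;\underbrace{\tfrac12\big|[Y^N, Y^{TX}]\big|^2}_{\text{order-four term}}
\;+\;\text{matrix terms}.
\]

As b tends to 0 the operator collapses, in the appropriate sense, onto the Casimir, or Laplacian, on X; as b tends to infinity the geodesic-flow term dominates, which is the source of the localization on closed geodesics.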
And so the fundamental identity says that the trace of the ordinary heat kernel, the one for the Casimir acting on the quotient Z, which up to a constant shift you can think of as a heat kernel, is equal to the supertrace of the heat kernel of the corresponding hypoelliptic operator. Exactly as in index theory, where you say that the original index, or Lefschetz number, is equal to the supertrace of a heat operator acting on a bigger space, we have exactly the same sort of phenomenon here, and the proof in this context is rather elementary: it relies on the collapsing of this operator to that one as b tends to zero, plus the invariance of the supertrace under the deformation in b, which is essentially an index-theoretic argument. However, this is not good enough for our purpose, because in some sense we are not just after the global version of Selberg's trace formula. When you have the trace of a heat kernel on a locally symmetric space, you know that you can re-express it as a sum of objects which are evaluated on the symmetric space, so you can go back to the symmetric space; in other words, in this equality we can evaluate the left-hand side and the right-hand side as infinite sums over the conjugacy classes of the discrete group Gamma. And now the true miracle is that it is not only true that the trace on the compact locally symmetric space is equal to the supertrace over the compact locally symmetric space, but that, orbital integral by orbital integral, there is a preservation. So now we move to a different sort of geometry: the identity splits as an identity of orbital integrals, and this is why we will eventually be able to give a geometric formula for it. Semisimple orbital integrals. Let gamma be semisimple in G, and consider its conjugacy class in G. Then one can introduce, in the proper sense (this has already been introduced here), the orbital integral, in this case of the heat kernel, which is an integral over the quotient of G by the centralizer of gamma in the group. In the proper sense this integral makes sense, in spite of the fact that p t of x is just the heat kernel of X, of the symmetric space. What I am going to explain is, first of all, how to geometrize the orbital integral, how to describe it as a geometric object; and as soon as we have done this, we will see that the closed geodesics, or the minimizing geodesics, naturally come into the game even before doing any analysis. Given a semisimple element gamma in G, I introduce the displacement function d of x and gamma x, d being the Riemannian distance on the symmetric space. It is a fundamental fact, a consequence of negative, or non-positive, curvature, that the distance function is convex on X cross X; convex here means that restricted to geodesics it is convex. In particular the function sending x to the distance from x to gamma x is convex, and the fact that gamma is semisimple just says that the minimizing set of this displacement function is a non-empty convex subset of X; it is exactly the semisimplicity of gamma which tells you this. Actually, in our case X of gamma, the minimizing set of the displacement function, is the symmetric space of the centralizer Z of gamma.
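The bookkeeping behind this passage, in the standard form (the volume normalization is my convention):

\[
\mathrm{Tr}\big[e^{-t\Delta_{\Gamma\backslash X}}\big]
\;=\; \sum_{[\gamma]\subset\Gamma}
\mathrm{Vol}\big(\Gamma\cap Z(\gamma)\backslash Z(\gamma)\big)\,
\mathrm{Tr}^{[\gamma]}\big[e^{-t\Delta_X}\big],
\qquad
\mathrm{Tr}^{[\gamma]}[q] \;=\; \int_{Z(\gamma)\backslash G} q\big(g^{-1}\gamma g\big)\, dg,
\]

where q is the convolution kernel on G, here the heat kernel of X viewed as a K-bi-invariant function, the sum runs over conjugacy classes of Gamma, and the orbital integral converges because the Gaussian decay of the heat kernel beats the growth of the orbit.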
In other words, if you take a semisimple element gamma and you look at the symmetric space associated with its centralizer, that symmetric space is a totally geodesic submanifold of X. So how can we understand geometrically what the orbital integral is? Here I wrote X of gamma. I take a geodesic at a fixed point x0, normal to X of gamma, and I take its image by gamma; X of gamma is preserved by gamma, so this image is another geodesic, starting at gamma x0. Because of negative curvature, these two geodesics go far away from each other: the mutual distance of y and gamma y grows at least like the norm of y, like the distance to X of gamma. So what is the orbital integral? It is essentially the integral of the heat kernel of X evaluated at y and gamma y, integrated over all the possible normal directions, with a coefficient r of y which is a Jacobian. That is the geometric interpretation of the orbital integral. The fact that the orbital integral exists just uses the fact that the heat kernel decays like a Gaussian at large distance, which compensates the possible growth of the Jacobian. The proof we will give for the evaluation of this orbital integral consists in pushing the integral, properly, to X of gamma by a deformation process. Let me now, well, maybe I skip this, and let me give the second fundamental identity, which is the fact that the original orbital integral for the heat kernel, the elliptic, classical one, is given by the corresponding orbital integral for the hypoelliptic heat kernel; you can think of the latter as an orbital integral for an infinite-dimensional vector bundle on the base X. So these two orbital integrals are the same, and now we make b tend to infinity in order to localize; it will localize to X of gamma. Let me explain the limit as b tends to infinity: when you look at the hypoelliptic Laplacian, you find that the dominating term as b tends to infinity is given by the geodesic flow, after rescaling, which forces the dynamics of the heat kernel to concentrate on closed geodesics. Now let me explain the final formula. To explain the final formula you can entirely forget everything I said before. Let gamma be a semisimple element, which after conjugation I write in the canonical form e to the a times k inverse, where a is in p, k is in the group K, and a and k commute. Let Z of gamma be the centralizer of gamma, whose Lie algebra splits along the Cartan splitting as p of gamma plus k of gamma. So what is the formula for the semisimple orbital integrals, at least for the heat kernel? The formula says the following: there is an explicit function J gamma, which is evaluated on the k-part of the Lie algebra of the centralizer of gamma. You take the element gamma, you take this Lie algebra, you split it into its p-part and its k-part, and the orbital integral is evaluated as an integral over the k-part of the Lie algebra. So this is the orbital integral for the heat kernel, an integral which comes from the geometry of X, and the right-hand side is the following. This exponential of minus a squared over 2t corresponds, in some sense, to the square of the geodesic length associated with gamma, but above all what we have is an integral over k of gamma. And this integral over k of gamma, well, there is a heat kernel factor here, it is not too bad. Here we have the following.
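As far as I can reconstruct it, the formula being described has the following rough shape; the powers of \(2\pi t\) and the exact constants are my guesses, and only the structure should be trusted: a Gaussian in \(|a|^2\), times an integral over \(\mathfrak k(\gamma)\) of \(J_\gamma\) against a Gaussian and a character-type factor (E is the twisting representation of K introduced just below):

\[
\mathrm{Tr}^{[\gamma]}\big[e^{-t\,\mathrm{Cas}/2}\big]
\;\sim\;
\frac{e^{-|a|^2/2t}}{(2\pi t)^{\dim\mathfrak p(\gamma)/2}}
\int_{\mathfrak k(\gamma)}
J_\gamma(Y_0)\;
\mathrm{Tr}^E\!\big[\rho^E(k^{-1})\,e^{-i\,\rho^E(Y_0)}\big]\;
e^{-|Y_0|^2/2t}\,
\frac{dY_0}{(2\pi t)^{\dim\mathfrak k(\gamma)/2}},
\qquad \gamma = e^a k^{-1}.
\]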
Okay, I slightly changed gears, because I introduced a representation of K and the associated homogeneous vector bundle, and now I view my Casimir as acting on the sections of this extra vector bundle. This I did not tell you before: I was working just with scalar functions, and now I also twist everything by a representation of K. So we have a certain factor which looks like a Chern character, an equivariant Chern character; and while that factor is like a Chern character, this function J gamma will be like a Todd form, or like an A-hat form. Note again the integral over k of gamma, which is something that is, in some sense, invisible to classical analysis; it is somewhat ironic that ultimately the final answer is expressed in terms of an integral over k of gamma. This formula accounts for the first term in Selberg's trace formula, which I gave in the case of Riemann surfaces. So what is the function J gamma? It is a function, again, on the k-part of the Lie algebra of the centralizer of gamma. It looks very impressive; it is not. I told you that we have this vector bundle TX plus N; I would like to say that what we really have is TX minus N. This term J gamma is a function, of course, but it should be thought of as a ratio: the A-hat of TX divided by the A-hat of N. So it is really not like the classical Lefschetz formula: this TX plus N splitting propagates until the very end, and so we have the quotient here. To make things more precise, in the case where gamma is trivial, gamma equal to 1, the function J 1 of Y0 is equal to the A-hat of i ad of Y0 acting on p, divided by the A-hat of i ad of Y0 acting on k. In other words, instead of integrating differential forms, as in classical Lefschetz formulas, what we now integrate are functions on the Lie algebra, and we integrate them on the whole of k of gamma. Again, there is an obvious analogy with Atiyah-Bott: the fixed point formula of Atiyah-Bott is a pairing of the A-hat of TX with the Chern character of E; what we have here is a sort of fake Atiyah-Bott formula, where of course the content is entirely different, and where in particular TX is replaced by TX minus N. Now, the last part is the connection with the loop space of X. Let X be a Riemannian manifold and LX its loop space; at this stage the loop space is just the smooth maps from S1 to X. As we know, LX is a Riemannian manifold: we can equip it naturally with the L2 metric, S1 acts isometrically on LX, and the generator of the action is the vector field K, which at a loop x is x-dot. In particular, the zero set of this generator is the trivial loops, the constant loops, that is X itself; so X sits inside LX as the zero set of a vector field, of a Killing vector field, and the normal bundle of X in LX is made of functions: you take a point x, you take the tangent space T x X, and you look at all functions from S1 into this fiber whose integral is equal to zero. That is the normal bundle of X in LX. Now what I shall do is evaluate the inverse of the equivariant Euler class of the normal bundle of X in LX. We are in a context where you could apply equivariant cohomology: all these vector bundles are acted upon by S1. The only difficulty is that, since the normal bundle is infinite dimensional, we would have an infinite product; but if the infinite product is suitably normalized, and this was a remark of Witten and Atiyah, you get exactly the A-hat genus of TX.
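Two identities worth recording here: the special case of \(J_\gamma\) stated in the talk, and the infinite-product computation behind the Atiyah-Witten remark (the zeta-regularized constant is exactly what "suitably normalized" hides). For trivial gamma,

\[
J_1(Y_0) \;=\; \frac{\widehat A\big(i\,\mathrm{ad}(Y_0)\big|_{\mathfrak p}\big)}{\widehat A\big(i\,\mathrm{ad}(Y_0)\big|_{\mathfrak k}\big)},
\qquad Y_0\in\mathfrak k,
\qquad\text{where}\quad
\widehat A(x) = \frac{x/2}{\sinh(x/2)} ,
\]

and on a single curvature eigenvalue x of TX the rotation of loops contributes, formally,

\[
\prod_{n\ge 1}\Big(1 + \frac{x^2}{4\pi^2 n^2}\Big)^{-1}
\;=\; \frac{x/2}{\sinh(x/2)},
\qquad\text{using}\qquad
\sinh z \;=\; z\prod_{n\ge 1}\Big(1 + \frac{z^2}{\pi^2 n^2}\Big),
\]

which is the \(\widehat A\)-factor; multiplying over the curvature eigenvalues gives the normalized inverse equivariant Euler class of the normal bundle as \(\widehat A(TX)\).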
So in other words, the inverse of the equivariant Euler class of the normal bundle is given by a known characteristic class. Index theory and localization formulas. I am going back here to Atiyah and Witten, and to later work I did on this, which is to say that you can reinterpret the heat equation formula: when you look at the heat equation formula for the index, it just looks like a magic analytic trick. Now, if you view this from a physicist's point of view, you can rewrite this formula, at a first stage in a rigorous way, as the integral of some measure; but if you do it the physicist's way, you rewrite it as the integral of an explicit differential form on the loop space. This differential form does not exist, formally speaking, but still it has the proper symmetry: it is essentially closed, it is equivariantly closed under the action of S1. This was the observation of Atiyah: if you apply localization formulas in equivariant cohomology, what you get is the true answer, namely that the index of the Dirac operator is equal to the integral over X of A-hat times the Chern character. So it is as if the mechanism of equivariant cohomology were already at work, in implicit form, in index theory. The form here, alpha t, maybe I should explain briefly what it is: it is an explicit form which exists universally on any space with an action of S1. It contains as a leading term, essentially, the exponential of minus the square of K over t, and there is an extra term of degree 2. Beta is also explicit: it is a lift, as an equivariant form, of the Chern character form of E. So if you apply, formally, without taking any limit, the localization formulas of equivariant cohomology, you get the index formula. At the time I was upset by this sort of argument, but later on I convinced myself that what the heat equation is doing, to prove the index theorem, is just doing on the loop space what you would do on any space. In other words, it is not that the analysis has invented some new procedure; it is the opposite which is true: it is just exploiting, to the best of its capabilities, what exists universally on any space equipped with an action of a Lie group. Now let me go to the orbit theory of the loop group. Let G be a compact connected Lie group, not loop group, and let LG be its loop group, the smooth maps from S1 into G. I will not introduce a central extension here; I prefer to introduce, for the moment, L-tilde G, which is the semidirect product of LG with S1, taking into account that S1 acts naturally on LG. The Lie algebra of this extended Lie group is then an algebra of differential operators on S1: it is spanned, linearly, by a first piece which is d over dt, and a second piece which is a t, where a t is a periodic section of the Lie algebra. So essentially, the coadjoint orbits of the extended loop group correspond to connections; I will explain this: to connections whose holonomy lies in a conjugacy class in the group. Indeed, if you look at d over dt plus a t, this is exactly a connection: you can view the elements of the Lie algebra as connections on the trivial G-bundle, and a coadjoint orbit corresponds to the connections on the trivial G-bundle whose holonomy lies in a given conjugacy class in the group G. If instead of looking at the connection you look at its holonomy, you just integrate, by parallel transport, the connection.
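Schematically, the formal loop-space computation being recalled (the loop-space integral is formal, and the constant in the exponent is only up to normalization):

\[
\mathrm{ind}(D_E) \;=\; \mathrm{Tr_s}\big[e^{-tD_E^2}\big]
\;\overset{\text{formally}}{=}\;
\int_{LX}\alpha_t\wedge\beta
\;\xrightarrow[\ \text{localization at }LX^{S^1} = X\ ]{}\;
\int_X \widehat A(TX)\,\mathrm{ch}(E),
\]

with \(\alpha_t\) the universal equivariantly closed form whose leading term is \(e^{-|K|^2/2t}\) plus a degree-two correction, and \(\beta\) an equivariant lift of the Chern character form of E.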
So in other words, you integrate the equation (d/dt + a_t) g_t = 0 with g_0 = 1. What you get is a path in the group connecting the identity to a conjugacy class. So in some sense the coadjoint orbits of the loop group are just the spaces of paths connecting the identity to a conjugacy class in the group, and these are naturally symplectic manifolds.

Now, heat kernel and localization. I equip my compact group with a biinvariant metric, dg is the Haar measure, and I consider the heat kernel on the group. If you write p_t(g) the way the physicists do — and I am sure not the way you would do it — you get that p_t(g) is an integral over this path space, the orbit, of the exponential of minus the energy over t, times Dg. I claim that this integral can be transformed into an integral to which the localization formulas of equivariant cohomology apply, because the energy of a path is exactly the Hamiltonian for the natural action of S^1 on the coadjoint orbit. Of course we do not know what Dg is, but for lack of a better measure we pretend that Dg is just the symplectic measure on the orbit. If you then apply, again purely formally, the localization formula of equivariant cohomology, what you get is a formula expressing p_t(g) as a sum over the fixed point set of this S^1-action, the action by rotation, and the fixed points are just the one-parameter semigroups. So you obtain again a known formula, a formula which is correct, which seems to say that equivariant cohomology again applies in this context.

Let me give a remark of Igor Frenkel and Atiyah. Frenkel had observed that the heat kernel appears in the numerator of the character formula for the central extension of LG: when you compute the character formulas, what you essentially have in the numerator is a heat kernel, and he suggested that this reflects a Kirillov-type Lefschetz fixed point principle. Indeed, p_t(g) is, formally, an integral over the orbit space, while the character formula is a sum over the affine Weyl group; the fact that these two things are equal reflects, at least formally, a Kirillov-Lefschetz fixed point principle. Atiyah suggested that the localization formula should apply to this integral. So what would a rigorous proof by localization consist of? How do we deal rigorously with the question of whether localization techniques apply to these integrals over the loop group? If you look at what K is in this case — the points of the coadjoint orbit are just connections d/dt + a_t, and the generating vector field, the shift of the connection, is just a-dot — then |K|^2 is the integral of |a-dot|^2, and ultimately, expressing a in terms of g-dot, what you see appearing is the integral of |g-double-dot|^2. So if you just formally try to make rigorous the recipe of the localization formulas of equivariant cohomology, you are unavoidably led to look at such a path integral, a regularized path integral, which corresponds in the proper way to a hypoelliptic operator. This is what one should expect.

So, the formal consequences: introducing the Kostant Dirac operator is a way to prove the formula for p_t(g) using localization formulas. In the context of symmetric spaces of noncompact type, the same mechanism can still be used, but it is subtler.
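Here is the formal path-integral expression and the computation of |K|^2 just described, written out; the quotation marks stress that the left-hand equality is only formal, and the precise constants and sign conventions (in particular in expressing a through g) are mine:
\[
p_{t}(g)\;\text{``}=\text{''}\;\int_{\{g_{\cdot}\,:\,g_{0}=1,\ g_{1}\in C_{g}\}}
e^{-E(g_{\cdot})/t}\,\mathcal D g,
\qquad
E(g_{\cdot})=\tfrac12\int_{0}^{1}\bigl|g_{s}^{-1}\dot g_{s}\bigr|^{2}\,ds,
\]
\[
\bigl|K\bigr|^{2}_{\,d/dt+a}=\int_{0}^{1}|\dot a_{s}|^{2}\,ds,
\qquad
a_{s}=-\dot g_{s}\,g_{s}^{-1}
\;\;\Longrightarrow\;\;
|K|^{2}\ \text{involves}\ \int_{0}^{1}|\ddot g_{s}|^{2}\,ds ,
\]
where C_g is the conjugacy class of g. The appearance of the second derivative of the path is exactly what forces the regularized, hypoelliptic picture mentioned above.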
In other words, the whole question of the descent from a reductive group to its quotient by K is not incorporated in the traditional mechanism of localization formulas. Okay, so here are just a few references, and that's it. Thank you very much.

[Question about non-semisimple elements.] Well, the question of taking non-semisimple elements is of course fundamental, but in some sense that is a different question. The proper way to approach it is, in some sense, to approximate a non-semisimple element by semisimple elements, truncate the integral, and then see what happens. This method will not remove the fundamental analytic and geometric difficulties that are there when you go to the cusp, and in particular to iterated cusps, where the structure of the boundary becomes very complicated. This you will still have to work out: you will rediscover the difficulties in a different language, and they will still be there. Thank you very much.
The hypoelliptic Laplacian gives a natural interpolation between the Laplacian and the geodesic flow. This interpolation preserves important spectral quantities. I will explain its construction in the context of compact Lie groups: in this case, the hypoelliptic Laplacian is the analytic counterpart to localization in equivariant cohomology on the coadjoint orbits of loop groups. The construction for noncompact reductive groups ultimately produces a geometric formula for the semisimple orbital integrals, which are the key ingredient in the Selberg trace formula. In both cases, the construction of the hypoelliptic Laplacian involves the Dirac operator of Kostant.
10.5446/59262 (DOI)
We are interested in general hyperbolic manifolds; for simplicity, for this talk I will work with three-dimensional hyperbolic space. This way I can give the idea, and you should keep in mind that everything generalizes to higher dimension. So H^3 I think of as SL(2,C)/SU(2). For the moment, consider a compact oriented hyperbolic 3-manifold X. It is of the form Gamma\H^3, where the fundamental group Gamma is a discrete subgroup of SL(2,C). Let rho_m : SL(2,C) -> GL(V) be the m-th symmetric power of the standard representation of SL(2,C) on C^2; in particular you can take the standard representation itself, which is the first situation to keep in mind. Restricting this representation of SL(2,C) to Gamma, I can use it to construct a flat vector bundle on my hyperbolic 3-manifold: E = (H^3 x V)/Gamma, a flat vector bundle over X. One important fact in this setting is that the cohomology with coefficients in this flat vector bundle is trivial: the complex of forms with values in E is acyclic.

In another setting: if L inside V is a lattice preserved by the action of Gamma through the representation, then I can consider a bundle of finite rank free Z-modules — call it script-L, the same construction with L in place of V — and I can look at the cohomology with local coefficients in script-L. The fact that the complex over C is acyclic implies that this cohomology of X with values in the bundle of finite rank free Z-modules is not necessarily trivial, but has only torsion elements. Various people are interested in understanding this torsion.

There is the following result of Bergeron and Venkatesh, published in 2013. Take a sequence of congruence subgroups Gamma_n — a decreasing sequence of subgroups such that the intersection over n is just the identity element. Then the limit, as n goes to infinity, of the logarithm of the size of the degree-2 cohomology group, divided by the volume of X_n, is a number that depends only on the representation, and this number is strictly positive. [What is a congruence subgroup? I don't want to enter into that in detail — I'll give an example at the end, and we can discuss it later. What is more important for the talk is the condition that the intersection is trivial.] Here X_n = Gamma_n\H^3, so the quantity does depend on n: it is the log of the size of H^2(X_n; script-L) over the volume of X_n, and c(rho_m) is a constant depending on the representation which is positive. I'll give a brief idea of the proof, or of part of the statement.
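Restating the Bergeron-Venkatesh result just described in display form (I am simply transcribing what was said in degree 2; whether their paper phrases it as a limit or a lim inf, and the exact value of the constant, should be checked there):
\[
\lim_{n\to\infty}\ \frac{\log\bigl|H^{2}(X_{n};\mathcal L)\bigr|}{\operatorname{vol}(X_{n})}
\;=\;c_{\rho_{m}}\;>\;0,
\qquad X_{n}=\Gamma_{n}\backslash\mathbb H^{3},
\]
where \(|\cdot|\) is the order of the finite (purely torsion) group and \(c_{\rho_{m}}\) depends only on the representation \(\rho_{m}\).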
The first ingredient is an identity: the Reidemeister torsion tau is equal to the analytic torsion of X_n with coefficients in E. The Reidemeister torsion is defined using a combinatorial Laplacian — it is a weighted determinant of the combinatorial Hodge Laplacian — while the analytic torsion uses the usual Hodge Laplacian and zeta-regularized determinants. That is the part of my talk that you could perhaps think of as geometric quantization going the other way around: we start with something finite dimensional, finite in fact, and we pass to an infinite dimensional setting, the Laplacian acting on forms. It turns out to be helpful, because when Gamma_n goes to infinity, even though in principle the analytic torsion seems much harder to compute from the definition, it is the quantity whose asymptotics one can control.

[Is the first line the observation of Cheeger? — Cheeger and Müller, yes. — And from that line, the generalization is Bismut-Zhang? — Well, yes, it's true: there is a result of Bismut-Zhang that generalizes this when you do not assume the representation to be unimodular, and then there is a defect term. — And the third step? — OK, but I did not know the first statement was due to Cheeger; I know it is in his paper, but just as a lemma — it seems to have been well known. — No, it's Cheeger. — OK, I'm surprised.]

So the third point, the third step: compute the limit, as n goes to infinity, of the logarithm of the analytic torsion of X_n with coefficients in E, divided by the volume of X_n. It gives a quantity which turns out to be negative, namely t^(2)_(rho_m), the local L^2 analytic torsion, which is defined on the universal cover H^3. Bergeron and Venkatesh show that this limit equals the local L^2 analytic torsion. What is more important is that, in the end, this quantity depends only on the representation, and using tools of representation theory they are able to compute it explicitly. In particular it has a sign: it is strictly negative. Combining 1 plus 2 plus 3 essentially gives the result — not quite, but it gives the lower bound: the lim inf of log |H^2(X_n; script-L)| over vol(X_n) is greater than or equal to the constant.
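In symbols, the two analytic ingredients just described are, with tau the Reidemeister torsion and T the analytic torsion (sign and normalization conventions as in the talk, which I am not re-deriving here):
\[
\tau(X_{n};E)\;=\;T(X_{n};E)
\quad\text{(Cheeger--M\"uller)},
\qquad
\lim_{n\to\infty}\frac{\log T(X_{n};E)}{\operatorname{vol}(X_{n})}\;=\;t^{(2)}_{\rho_{m}}\;<\;0,
\]
where \(t^{(2)}_{\rho_{m}}\) is the local \(L^{2}\)-analytic torsion of \(\mathbb H^{3}\) with coefficients in \(\rho_{m}\), a quantity depending only on the representation.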
Because when you use this torsion formula, there are terms which all have one sign and which you can discard, and then you are left with H^2; H^0 never has torsion, so you are left with just H^2, and that gives you the lower bound. To get the full result, Bergeron and Venkatesh look more closely at H^1 and H^3 to check that the torsion of these groups does not grow too fast, so that in the limit all the contribution comes from H^2.

Given the title of my talk, a natural question, as you may expect, is: can this be generalized to finite volume? Let me give a concrete motivating example; it will also let me give you a better idea of what a congruence subgroup is. The motivating example is how to construct the lattice L and a group Gamma preserving it. In the cocompact case there are results — in fact I should mention that the result of Bergeron-Venkatesh has an analogue where you fix the group Gamma but let the power m go to infinity; in this case also there is a result, this time by Marshall and Müller, that the torsion grows exponentially, and in their case they focus on cocompact groups of arithmetic nature. One has to work a little, because the basic example one would like to consider is not cocompact.

So let Gamma be a torsion-free congruence subgroup of SL(2, Z[i]), where Z[i] is the Gaussian integers — complex numbers with integer real and imaginary parts. Then X = Gamma\H^3 is a lattice quotient of finite volume, but it is not compact. A congruence subgroup in this case is, for instance, Gamma(p) = { A in SL(2, Z[i]) : A - Id = 0 mod p }; that gives you a group, and you let p go to infinity, or something like this. [Is a group like this always torsion-free? — Well, you have to be careful; I am not an expert on that. If you just take this it may not be torsion-free. But there is a notion of the norm of the ideal defining the group, and if the norm is sufficiently large, then it is torsion-free.]

In this finite-volume setting there is an important new feature. One: you still have the acyclicity condition for the complex, but only at the L^2 level — it is only the L^2 cohomology that vanishes. And for the Borel-Serre compactification — here is the picture: my manifold has cusps, and the Borel-Serre compactification just adds, at the tip of each cusp, a boundary, which in this case is a torus at each end — let me denote the Borel-Serre compactification by X-bar. This now has cohomology, and essentially it all comes from the boundary. More precisely, the dimension of H^q(X-bar; E) is: zero if q equals 0 or 3; one half the dimension of the cohomology of the boundary if q equals 1; and the dimension of the cohomology of the boundary if q equals 2. So there is some cohomology now.
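The dimension count just stated for the Borel-Serre compactification, in display form (I am reproducing it exactly as given in the talk, without re-deriving the q = 1, 2 cases):
\[
\dim H^{q}\bigl(\overline X;E\bigr)\;=\;
\begin{cases}
0, & q=0,\ 3,\\[2pt]
\tfrac12\,\dim H^{q}\bigl(\partial\overline X;E\bigr), & q=1,\\[2pt]
\dim H^{q}\bigl(\partial\overline X;E\bigr), & q=2,
\end{cases}
\]
so all of the cohomology of \(\overline X\) with coefficients in \(E\) comes from the boundary tori.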
So let us try to generalize the three steps presented for the proof of the theorem of Bergeron-Venkatesh. For the first step — relating Reidemeister torsion to the torsion of the cohomology groups — it is slightly different; maybe that could also be attributed to Cheeger.

[You just erased your question — where you wrote hyperbolic manifold, do you stick to dimension 3 throughout, or do you also want to do higher dimensions? — I want to do higher dimension; the whole project is in higher dimension, and we do it also in higher dimension. You have to replace SL(2,C): you take odd dimensions, so you replace it by SO(d,1), or you can also take the spin group, and then many of the things generalize. — Do you get information about the individual torsion groups? — No. Bergeron and Venkatesh in higher dimension also only say that there is exponential growth of torsion, but they cannot identify in which degree; there is an a priori range of degrees, but they don't know which one. They do conjecture that it is supposed to be always in degree n+1 (for dimension 2n+1), but using analytic torsion or Reidemeister torsion alone would not be enough to go further and prove this conjecture.]

Right. So the first step: now, since there is cohomology, we in fact have to look at the Reidemeister torsion of the Borel-Serre compactification, and there is a similar formula. It involves the sizes of H^q(X-bar; script-L), but now you also have to divide by covolumes, to the same alternating powers. [Do you mean script-L here, the lattice coefficients? Do you write the size of H^q? — Yes — sorry, yes, the size of the torsion part of H^q(X-bar; script-L); and you look at the free part of H^q(X-bar; script-L), which is a lattice inside the real cohomology, and you need to compute its covolume — with respect to a choice of basis of that real cohomology, so you have to make a good choice of basis. — So the upper term is the size of the torsion part? — Yes, sorry, the size of the torsion part; thank you.] I won't say more, but we need to control these covolume terms to get information about the torsion. That is an extra difficulty; some work has to be done for this, but I was not planning to say more about it in the talk.

For step two prime, there was already work by Pfaff, a former student of Werner Müller, published in 2017, where he has a formula involving defect terms: in this case the analytic torsion is not quite equal to the Reidemeister torsion, there is a defect, and one of the terms in the defect involves the analytic torsion of the cusps. For those who don't know, he obtains this result using a gluing technique for analytic torsion. And it seems that the cusp term is the most problematic one to control if one wants to extract information about exponential growth of torsion. [This formula is for the correction term? — Yes; well, he has a formula relating the Reidemeister torsion, for a specific choice of basis, to the analytic torsion, and one of the terms that appears is the analytic torsion of the cusps, and so far this term has been an obstacle to obtaining further information about the torsion.] So what I am proposing instead is the following theorem of Werner Müller and myself, where we use a different technique.
But I must say that the paper of Jonathan Pfaff was very helpful, especially for me — I did not know much about all the representation theory aspects, which are very important, and his paper was very useful to us there. But we use a quite different technique, and we prove the following formula — for the same choice of bases as Jonathan's, which I won't specify. Let me write it the other way around: the analytic torsion is equal to the Reidemeister torsion of the Borel-Serre compactification, plus a correction term. I'll put a log, with a minus sign — the sign is not very important because I won't say more — where the coefficient involves the number of cusps at which the boundary cohomology is not trivial. Here I am lying a little bit: there are situations where there is no cohomology at all at some cusps — if you look at the complement of the figure-eight knot, there is no cohomology at the cusp when m is odd, and so there is no defect in that case. And c(rho_m) is an explicit constant depending on rho_m. [Is it the same constant you called c(rho_m) before, or a different one? — It's a different constant.]

As a check of our formula: Jonathan Pfaff has another paper, only in dimension 3, where he looked at ratios of torsions for two different symmetric powers, and he gets a formula using results of Menal-Ferrer and Porti about the growth of analytic torsion for finite-volume hyperbolic manifolds. He has a formula with a constant which looks different from the one we get — I don't want to write it, because it would eat all the time I have left — but I had to use Maple, and, checking, it gives the same thing. We are not able to prove by elementary methods that the two constants are equal, except by comparing our approach with Jonathan's; but it does give the same thing. So that is part two, and it also works in higher dimensions: it is exactly the same sort of formula, with the same quantities and a constant that depends on the representation.

Part three was already done by Müller and Pfaff, published in 2014. They show that the limit, as n goes to infinity, of the log of the analytic torsion of X_n with coefficients in E over the volume tends to the corresponding L^2 quantity, provided the sequence Gamma_n is cusp-uniform and satisfies other natural conditions. To get the result, you need to control the growth of the number of cusps as n goes to infinity — the number of cusps may increase, but it must not increase too much — and the geometry of the cusps should not degenerate; it should stay under control, stay uniform. Under these assumptions you get the same result relating the analytic torsion to the L^2 torsion. So, combining one prime, two prime and three prime, what we get is the following theorem. We have many applications in higher dimension, but I'll focus just on this special case.

Let a_n be a sequence of nonzero ideals — you could work over other imaginary quadratic fields and look at Bianchi groups, but for simplicity I'll stick with this sort of setting — with the norm of the ideal going to infinity; that is to ensure, in particular, that the sequence somehow tends to the universal cover. To each such ideal you associate the congruence subgroup inside SL(2, Z[i]), defined in the same way as before:
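Purely schematically, the shape of the Müller-Rochon formula just described is (I am not reproducing the exact correction term — in particular its sign and whatever logarithmic factor it carries — only its structure; see the paper for the precise statement):
\[
\log T(X;E)\;-\;\log\tau\bigl(\overline X;\mathcal L\bigr)
\;=\;\pm\,c_{\rho_{m}}\cdot\kappa(X)\cdot(\text{explicit factor}),
\]
where \(\kappa(X)\) is the number of cusps of \(X\) at which the boundary cohomology with coefficients in \(E\) is nontrivial, and \(c_{\rho_{m}}\) is an explicit constant depending only on the representation.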
these are the elements which, when you subtract the identity, lie in the ideal. [Is the sequence nested, one inside the other? — No, it doesn't have to be. In the statement of Bergeron-Venkatesh you can ask for that if you want the statement they prove; I think they need it when they want to control that the other cohomology groups don't grow too fast, and I think they use it for the analytic torsion. You don't need the subgroups to be nested here.]

Then, letting X_n = Gamma(a_n)\H^3, the result is: the lim inf, as n goes to infinity, of log |H^2(X_n; script-L_n)| over vol(X_n) is bounded below by a positive constant. So you still have exponential growth of torsion. But here you really just have an inequality, and, due to the fact that this cohomology comes from the boundary, the constant is not as good as in the cocompact case: a factor of one half enters into it, so the growth we can certify is not as good. [That should be X subscript n. — Oh yes, thank you.]

In the five minutes remaining, let me focus on two prime and give an idea of how we proceed to prove this Cheeger-Müller type result. The strategy is to use the approach I used before with Pierre Albin and David Sher. Mainly, you look at M, the double of the Borel-Serre compactification — two copies of X-bar glued along the boundary — and you introduce a family of metrics depending on a parameter epsilon, and you let epsilon go to zero. You want that, in the limit epsilon to zero, away from the gluing hypersurface the metric tends to the hyperbolic metric, with finite volume on both sides, and near the hypersurface it degenerates. For positive epsilon you can apply — since the representation is unimodular — the Cheeger-Müller theorem for unimodular representations; then you take the limit as epsilon goes to zero and you look at what happens to the analytic torsion. We develop a strategy to construct uniformly the resolvent of the Hodge Laplacian and, uniformly, the heat kernel, using the single, double and triple space techniques we heard about earlier this morning; it is a setting where they are very useful. We construct everything uniformly, we have a model for how the operator degenerates, and then we are able to relate the analytic torsion for positive epsilon, in the limit, to the analytic torsion of the finite-volume manifold plus a correction coming from the cusps.

In my work with Pierre and David, one key thing we were using is that the flat vector bundle and its metric are defined on the compactification — defined on the manifold with boundary and smooth up to the boundary. That is not the case for this construction coming from the hyperbolic manifold: the natural bundle metric degenerates as you go toward the cusp, in the same way that the hyperbolic metric degenerates. The way we make the Riemannian metric degenerate near the boundary is prescribed explicitly, so that as epsilon goes to zero it converges to the hyperbolic metric. The point is that the bundle metric is also degenerating; but thankfully the uniform construction was done for general Dirac-type operators, so we can immediately incorporate the way the bundle metric degenerates into the construction and use the uniform construction of the resolvent and of the heat kernel directly from that paper.
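In display form, the finite-volume theorem just stated reads (with the caveat that the exact constant, and its relation to the cocompact constant, are as described in the talk — roughly half as good — and should be taken from the paper):
\[
\liminf_{n\to\infty}\ \frac{\log\bigl|H^{2}(X_{n};\mathcal L_{n})\bigr|}{\operatorname{vol}(X_{n})}\;\ge\;c\;>\;0,
\qquad
X_{n}=\Gamma(\mathfrak a_{n})\backslash\mathbb H^{3},\quad
\mathrm N(\mathfrak a_{n})\to\infty,
\]
so the torsion still grows exponentially in the volume, but only a lower bound is obtained, with a constant carrying the factor of one half coming from the boundary cohomology.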
We don't have to redo the hard work that was done there. Thank you.

[Questions.] Did you compare Pfaff's result with yours? — Yes, we compare with the result of Pfaff, and we get a formula for his constant in terms of our explicit constant. — Is your analytic torsion the analytic torsion of the complete manifold? — It is for the complete manifold: the analytic torsion is for the complete manifold and the Reidemeister torsion is for the Borel-Serre compactification. And the funny thing is that on the analytic side you don't have to choose a basis of cohomology, because in L^2 cohomology there is nothing, it's acyclic; but on the Borel-Serre side you do have to choose a basis, and you have to choose it carefully — it is expressed in terms of Eisenstein series. — Didn't Pfaff also work with the analytic torsion of the complete manifold? — Yes, his was also for the complete manifold. His strategy instead, in that case, was to cut near the cusps and then use the gluing method to compute the analytic torsion; but in the end he is left with this term which is the analytic torsion of the cusp.

Where is the congruence hypothesis used? — Well, it is useful in the following sense: if you go to the paper of Müller and Pfaff from 2014, they have this general result, and then they look at various cases where their conditions are satisfied, because their conditions still have to be checked somewhere to have examples where the result applies. One of their examples is principal congruence subgroups of arithmetic lattices in SO(2m+1,1); for these everything holds, and, combining with our formula, you get the result — an exponential growth of torsion. There are more details: there is an issue with duality, you have to take the bundle and its dual to get the result in the case with boundary.

You also mentioned that in step one you have to control this covolume term; can you do that in general, or do you also need congruence subgroup conditions or special lattices? — Well, in the end we have a result as soon as the general theorem of Jonathan Pfaff and Werner Müller applies, and then we get a result about exponential growth; and, the way it is stated, you don't need congruence subgroups. It is more that, when you want to find examples, that is the natural condition — and probably for people in number theory that is the natural example to look at. It doesn't play an important role here.

Well, it's time for the next speaker. Thank you.
Given a finite dimensional irreducible complex representation of G=SOo(d,1), one can associate a canonical flat vector bundle E together with a canonical bundle metric h to any finite volume hyperbolic manifold X of dimension d. For d odd and provided X satisfies some mild hypotheses, we will explain how, by looking at a family of compact manifolds degenerating to X in a suitable sense, one can obtain a formula relating the analytic torsion of (X,E,h) with the Reidemeister torsion of an associated manifold with boundary. As an application, we will indicate how, in the arithmetic setting, this formula can be used to derive exponential growth of torsion in cohomology for various sequences of congruence subgroups. This is joint work with Werner Mueller.
10.5446/59263 (DOI)
I want to thank the organizers. It was a wonderful conference — many ideas in this conference — so I'm going to try to bring some contribution here, some other ideas that haven't been seen in the conference yet. Roughly half of the talk will be a survey, a somewhat historical part, motivating a bit what we are doing; and the second half will be my own contribution, which concerns a new problem.

Motivation. Let us start from the index theorem. We start with a fibration X -> B over some base B — let's say everything is closed: compact fibres, compact base, and so on — and we have the vertical tangent bundle, the kernel of the differential of the projection. From here, I think you already know this, but for the sake of motivating what we will do, let me recall the families index theorem. Suppose you have D = (D_b), b in B, a family of fibrewise elliptic operators — elliptic along the fibres. The symbol of D defines an element in the K-theory of the dual of the vertical tangent bundle, and the index takes values not in the integers, as for a single operator, but in the K-theory of the base: once you have a family, the index is a K-theory element of B. Roughly speaking, ind(D) is just "kernel of D minus cokernel of D"; this is not quite true, because there are jumps, so these things are not quite vector bundles, but you can modify things a little by a compact perturbation so that everything becomes an actual vector bundle, and then this makes sense.

From here — so far this is just the index as a K-theory class — you can move over to cohomology to get a cohomological formula, just by taking the Chern character. Once you do that, you land in the cohomology of the base, and you have the following formula: the Chern character of ind(D) is obtained by taking the Chern character of the symbol, partially integrating over the fibres — which already gives a differential form, a cohomology class, on B — and pairing it with the Todd class of the complexified vertical tangent bundle.

These are things you already know. Now I want to make a small observation. Before that, let me say a little about K-homology. Roughly speaking, K-theory is a generalized cohomology theory, so it has an associated homology theory. [Question: where does the family live — on the fibres? — On the fibres, yes: the operator acts along the fibres.]
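In cleaned-up notation, the Atiyah-Singer families index formula just described reads as follows; the sign and orientation conventions for the fibre integration are the standard ones and I am not keeping track of them:
\[
\operatorname{ind}(D)\in K^{0}(B),
\qquad
\operatorname{ch}\bigl(\operatorname{ind}(D)\bigr)
=\int_{T_{V}^{*}X/B}\operatorname{ch}\bigl(\sigma(D)\bigr)\,
\operatorname{Td}\bigl(T_{V}X\otimes\mathbb C\bigr)
\;\in\;H^{\mathrm{even}}(B;\mathbb Q),
\]
where \(\sigma(D)\in K^{0}(T_{V}^{*}X)\) is the symbol class and the integral denotes integration along the fibres of \(T_{V}^{*}X\to B\).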
The operator depends on the point of the base. Okay, so let me come back to K-homology. It is simply a dual theory to K-theory, in the sense that, at the level of a point — of a space B — you have a pairing between the K-theory of B and the K-homology of B with values in the integers. And it is a very simple pairing: a K-theory class is represented by some vector bundle E over B; a K-homology class is represented by an elliptic operator Q over B; and the pairing is obtained by twisting Q by the vector bundle E and taking the index of the twisted operator. That is the integer.

So here is what we can do from here, a very simple operation: since ind(D) lives in the K-theory of B, we can pair it with a K-homology class and get a number. So pair ind(D), which is in K-theory of B, with some Q in the K-homology of B: from here we get an integer, ind(D) paired with Q. And it is actually — well, not completely — simple to prove, but what one can show is that this integer represents exactly the index of an elliptic operator of Dirac type on the total space of the fibration. So what you do is very simple, diagrammatically: you have something elliptic along the fibres, you combine it with something elliptic over the base, and you get something elliptic on the total space.

This observation is the starting point of KK-theory. Just a little note here: KK-theory actually has a close relative which may be more familiar to people here, namely superconnections — and Bismut has used this technology for many, many years. They are like two faces of the same coin: one is topological, one is more differential-geometric, and the topological side is really the starting point of Kasparov's technology. And one remark I wanted to make: I think Quillen had KK-theory in mind when he invented his superconnection formalism. It is fairly clear if you go over his notebooks: at some point you can see he had already tried to understand KK-theory, and a few years later the superconnections show up. I really think these two are very tightly tied together; I just wanted to mention that.

Now let us see a very easy consequence of this principle — an observation which goes back to Lichnerowicz; there is a theorem that says the following.
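A sketch of the pairing and of the composition principle just described; the notation \(\sharp\) for the combined operator is mine and stands for the Kasparov-product type construction, which I am not spelling out:
\[
K^{0}(B)\times K_{0}(B)\longrightarrow\mathbb Z,
\qquad
\bigl\langle[E],[Q]\bigr\rangle=\operatorname{ind}\bigl(Q_{E}\bigr)
\quad(\text{the index of }Q\text{ twisted by }E),
\]
\[
\bigl\langle\operatorname{ind}(D),[Q]\bigr\rangle
=\operatorname{ind}\bigl(D\,\sharp\,Q\bigr),
\]
where \(D\,\sharp\,Q\) is an elliptic operator of Dirac type on the total space of the fibration, built from the fibrewise family \(D\) and the base operator \(Q\).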
Put spin structures wherever appropriate. The theorem says the following: if the A-hat genus of M is nonzero — and I am not making a mistake here, even though there are specialists of positive scalar curvature in the room — then, in this simple case, one consequence is that M carries no metric of positive scalar curvature. That is something well known.

Now let us see a very simple instance of the principle above. Pass to cohomology: you have one index associated with the A-hat genus along the fibres, and one associated with the A-hat genus along the base, and what you see is, in a sense, just a sort of multiplicativity property of characteristic classes. But actually you can say even more: if you combine, by the principle above, one elliptic operator on the fibres and one elliptic operator on the base, you get the index of an operator on the total space, so you also get something which is not entirely trivial about the fibres themselves.

[I'm sorry, I don't understand: if you have a fibre bundle, the manifold M with fibre F over the base B, and B has a positive scalar curvature metric, does that guarantee that M has a positive scalar curvature metric? The fibre is not required to have one. — The point is the following: if you take the index from here, you are pairing the A-hat genus along the fibres with classes coming from the base, and the pairing of the two is zero. It is not simply "A-hat of one times A-hat of the other is zero" — that is why I said there is something integrated — there is something with characteristic classes that you have to work out, but it is not difficult.]

Okay, so, noncommutative geometry. Our goal now is to carry out this principle in singular situations. Of the possible singular situations, I will talk about just one today: foliations. What we are going to do is replace the base by the leaf space of a foliation, which is a completely singular space compared with the base of a fibration. Actually, one can even carry this out for singular foliations; let me just say a few words about that. There are, let's say, two main works to consider. One is by Androulidakis and Skandalis, from around 2015 or so, and it is a leafwise index theory for very general singular foliations. The other is recent work of Kasparov — and I shouldn't butcher his name, because he was my mentor —
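A minimal sketch of the Lichnerowicz obstruction and of the fibration refinement just discussed, assuming all the spin structures needed (on M, on the vertical tangent bundle, on the base) are in place so that every operator is defined:
\[
\widehat A(M):=\int_{M}\widehat A(TM)\;\neq\;0
\quad\Longrightarrow\quad
M\ \text{(closed spin) carries no metric of positive scalar curvature},
\]
and, for a fibre bundle \(M\to B\), using \(TM\cong T_{V}M\oplus\pi^{*}TB\), the multiplicativity of \(\widehat A\), and the families index theorem,
\[
\widehat A(M)\;=\;\Bigl\langle\,\widehat A(TB)\,\operatorname{ch}\bigl(\operatorname{ind}D_{M/B}\bigr),\,[B]\Bigr\rangle ,
\]
which is the precise sense in which "A-hat along the fibres" and "A-hat along the base" combine: not as a naive product, but through the Chern character of the fibrewise index.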
— in case he watches the video at some point. Okay, so Kasparov: very recent work too — I think the outcome of some fifteen years of work — finally published, also around 2015 or so. Where the first work is leafwise, Kasparov's is transverse. It is not for arbitrary singular foliations: it is for foliations that are defined by group actions. But this case is already very interesting, because you can take arbitrary group actions — actions of even non-compact groups on non-compact manifolds. And one consequence of this work is something quite interesting. You already know that the index of any elliptic operator can be reduced to the study of Dirac operators, or Spin-c Dirac operators — we will come back to that later. Well, one consequence of this work, together with a little more work I am currently doing, is that the model for transverse elliptic operators on foliations defined by group actions is given by operators we have seen all week: deformed Dirac operators of the kind that have appeared throughout the conference. What is at stake is that, in the end, this theorem gives a very simple model of what one should study in transverse elliptic theory — that is something very strong. But from now on we just focus on regular foliations, because it is already very interesting; in particular we will see a problem that does not reduce to the case of Dirac operators, and we will see how one can try to cope with it.

Okay, so, foliations. More or less, you can carry out everything I said before: you replace the fibration by a foliation, and you replace the base B by the leaf space M/F, where (M, F) is a foliated manifold. This space is highly singular, but we will see how to desingularize it using noncommutative geometry; that is a very general principle that has guided the field for a long time.

So let us look at a bit of transverse geometry. Naively, you want a base for your foliation, so you could just take a transversal: take W a complete transversal — complete meaning it meets every leaf at least once; it can be disconnected, by the way. Second point: if you just naively take the transversal, you see that it is not good enough to reflect exactly the transverse geometry of the foliation. Compared with a fibration, where things are easy because each fibre meets the base only once, for a foliation a leaf can come back: if you go around the leaf, it can return to the transversal again. So there is extra structure encoding this, the holonomy — a pseudogroup of local diffeomorphisms of the transversal. Now we begin to see exactly why this talk has "holonomy" and "diffeomorphism-equivariant index theory" in its title. The idea would then be to look at the orbit space of this pseudogroup action, but it is singular.
So, instead of the base — you would like to view the leaf space as a quotient, but it is singular — you consider the action groupoid W ⋊ Γ, where Γ is the holonomy pseudogroup; this is the holonomy groupoid restricted to the transversal. What you get at the end is a groupoid which is Morita equivalent to the holonomy groupoid of the foliation: either you take the whole foliation and get the full holonomy groupoid, or you restrict to one complete transversal and take, let's say, first return maps — it is equivalent. And in noncommutative geometry, instead of studying a singular space like this, you study functions on it: you replace it by the convolution algebra of the groupoid.

From here you can do the same thing as before. There is the theorem of Connes and Skandalis, from around '82: you have an index morphism which goes from the K-theory of the (dual of the) leafwise tangent bundle to the K-theory of the convolution algebra. So instead of taking the base, as in the case of fibrations, you take this algebra, but you have to consider the transverse structure instead: K-theory upstairs and K-theory downstairs. [Should it be K-homology on the left? — No, it's K-theory on the left. You can also set things up with K-homology — people sometimes call that the K-cohomology versus K-homology picture — it is not an issue here; just for the purpose of simplifying the talk, and if you prefer, you can take the geometric K-homology cycles, that's perfectly fine.]

From here one can now do the same thing as before, but it will be a bit more complicated to carry out a cohomological formula. Remember that before we paired an elliptic operator on the fibres with an elliptic operator on the base; here we will be able to carry out the same kind of principle, and we will see that in a moment. But first, for the cohomological formula, there is a theorem of Connes — the first thing was done by Connes, in the early '80s — and he worked entirely in cyclic cohomology. The theorem is the following. Take any x in the K-theory upstairs. From it you can form a Chern character — this is something coming from the Baum-Connes picture — with values in the equivariant cohomology, that is, in the cohomology of the homotopy quotient: the space obtained by replacing the singular quotient with the quotient of W times the classifying space. This homotopy quotient stands in for the total space of the fibration, so you can make a parallel with the families index theorem.
And given this — I just hope I'll have enough room — you take any class ω up there, in the cohomology of the homotopy quotient, lying in the ring generated by, say, the Pontryagin classes and Chern classes (and, as we will see, certain secondary classes). For any choice of such an ω one can construct an additive map τ_ω, going from the K-theory of the convolution algebra to C, such that the following holds: τ_ω(μ(x)) equals the Chern character of x paired with the class ω. You see it is exactly the same kind of statement as before: you pair the index of a leafwise operator with some transverse Pontryagin class, or Chern class, or any other class of this kind you want. Actually it is a bit more than Pontryagin and Chern classes: it also includes the secondary classes living in Gelfand-Fuchs cohomology — and that was something crucial for the applications.

How is the map constructed? It is not too difficult in outline. You have the Chern character going from the K-theory to the cyclic homology of the smooth convolution algebra C^∞_c of the groupoid. Then there is a map which will be important for us, called Φ, constructed by Connes, from the relevant (Gelfand-Fuchs / equivariant) cohomology to the cyclic cohomology of the convolution algebra, and cyclic homology pairs with cyclic cohomology. The recipe for τ_ω is simply the following: you make a choice of ω up there, you push it with Φ into cyclic cohomology, and you pair it against the Chern character of your K-theory class. Is that clear? Okay. This will be important for us at some point.

One consequence of this is the following corollary — again, put spin structures wherever needed: if the relevant A-hat quantity (the leafwise A-hat genus paired with a transverse class) is nonzero, then the leaves carry no metric of positive scalar curvature. So we get the same kind of statement as before — a statement which itself does not mention noncommutative geometry — but fundamentally the proof follows exactly the same principle as in the fibration case: you have to find something transverse that you can couple with, and pair against, the leafwise Dirac operator. Because the basic tools of the fibration case are not available, things are technically more complicated, but it is essentially the same principle. That is one corollary.
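In symbols, the Connes theorem just described has the following shape; I am writing it for the reduced setting of a complete transversal W with holonomy pseudogroup Γ, and the precise hypotheses and conventions (which classes ω are allowed, which Chern character is meant) are as in Connes' papers rather than as reconstructed here:
\[
\tau_{\omega}\bigl(\mu(x)\bigr)\;=\;\bigl\langle\,\operatorname{ch}_{\tau}(x),\,\omega\,\bigr\rangle,
\qquad
x\in K_{*}\bigl(C_{c}^{\infty}(\mathcal G)\bigr),\quad
\omega\in H^{*}\bigl(W\times_{\Gamma}E\Gamma\bigr),
\]
where μ is the longitudinal index map, ch_τ is the Chern character with values in the cohomology of the homotopy quotient, and ω ranges over the ring generated by Pontryagin classes, Chern classes and the Gelfand-Fuchs secondary classes.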
The meaning of this, as before, is that there is no kind of global family of leafwise metrics of positive scalar curvature. [Yes, exactly — it is only the global statement. You have something non-compact, but it still contains compact data, so you can control everything.]

A second consequence, which is also nontrivial but which I will state in a slightly sloppy way. When you have a foliation you have the Godbillon-Vey classes: these are not primary characteristic classes; they are secondary classes, and they live in ordinary cohomology. Stated sloppily, the theorem says that the nonvanishing of the Godbillon-Vey classes — let's say GV classes — measures how far one is from being able to equip the foliation with a holonomy-invariant transverse structure. That is very strong. It says, for example, that for a Riemannian foliation you see no Godbillon-Vey classes at all — which is already very strong, and at the time it was even more striking, because people were then looking for a good geometric interpretation of the Godbillon-Vey class, and this theorem of Connes kind of blew everything away. I think it is still one of the most convincing interpretations of this class.

Okay. Now we move on to the main topic of the talk — and I hope to arrive at the moment when I actually use quantization, because it is the subject of the conference; so far, as you see, there has been no quantization. How much time do we have — ten minutes? [Exchange with the chairman.]

So, the purpose of the Connes-Moscovici work. Now you want to carry everything over to the K-homology side: that means you want to find some elliptic element in the transverse direction. That is the first goal: you want to construct a class in the K-homology of the convolution algebra of W ⋊ Γ. And immediately: because you then have something in K-homology, and your index from before is something in K-theory, you can pair the two, and the pairing directly gives an additive map as above. So now it becomes a sort of inverse problem: you have an additive map, given by pairing with this transverse K-homology class — can you find a cohomology class ω associated with it?

For this, recall Connes' map Φ, which goes from the relevant cohomology attached to W to the cyclic cohomology of the convolution algebra. [What exactly is the source — the cohomology of the convolution algebra, or of the classifying space? — That's a good question: it is not known exactly what the range of classes one can get in cyclic cohomology is; that is a good question of where to place things, and I hope to say something about it, probably at the end.] So we have this map, and the question is: if you can construct such a transverse element, can you then find the corresponding ω — same notation as before? Now, there are some difficulties.
[You will reach the subject of quantization soon? — Yes; but before that, let me make this point.] So what do you do? The natural thing would be to take some Γ-invariant elliptic operator on W. That is a problem, because in general the action is just by diffeomorphisms: there is no reason for it to preserve a metric or any structure whatsoever — Γ doesn't preserve anything — and there are no Γ-invariant elliptic operators.

There is a solution due to Connes, which is the following: you replace W by a fibration over W, namely the bundle of metrics on W — Riemannian metrics — and, by the Thom isomorphism, you pull the problem back from W to the total space of this bundle of metrics. That is the first step. Once you do that — and this is the work of Connes and Moscovici — one can construct a K-homology element Q on this bundle of metrics. There is something here which is a bit subtle: the operator is not elliptic, it is hypoelliptic — the hypoelliptic signature operator. What is good about this operator is that, although it is not invariant under the action of diffeomorphisms, it is almost invariant: up to its principal symbol — in fact you have to take the principal and the sub-principal symbols together — it is invariant. From this one gets a class, this Q, landing in the K-homology of the crossed product of the bundle of metrics by Γ.

So now, is this a complete answer to the inverse problem? One part of the answer is that the cyclic cohomology side has been worked out. Connes and Moscovici, in '98 — very hard work, by the way — prove the following: assume also that the lifted action on the bundle of metrics is free (we can discuss that later if you want; it is not automatic). Then the Chern character of Q is in the range of Φ; that is what they show. So it means you should be able to find a cohomology class associated with this transverse K-homology class. Now the question: which class is it? There are partial results, in the same papers, in low dimensions, where they carry out the computation by hand. From the construction you see that you could in principle catch either Pontryagin/Chern classes or secondary classes; they wanted to see secondary classes appear, but in these low-dimensional examples they do not see them. So there is a conjecture, which says that the answer is a formula purely in the Pontryagin-type classes, with none of the secondary classes of Godbillon-Vey type appearing. My result is that the conjecture is true. And since I may not have time to talk about quantization — unless the chairman allows me —
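For orientation, here is the shape of the Connes-Moscovici hypoelliptic signature operator on the total space P of the bundle of metrics over W, as I remember it (the signs, the precise splitting into vertical and horizontal parts, and the grading conventions are theirs, not re-derived here):
\[
Q\;=\;\bigl(d_{V}\,d_{V}^{*}\;-\;d_{V}^{*}\,d_{V}\bigr)\;\oplus\;\bigl(d_{H}+d_{H}^{*}\bigr),
\]
second order along the fibres of P -> W and first order in the horizontal directions; it is hypoelliptic rather than elliptic, and only its (principal and sub-principal) symbol is invariant under the lifted diffeomorphism action, which is exactly what is needed to obtain the K-homology class [Q] mentioned above.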
And since I may not have time to talk about quantization, unless you allow me to, chairman... I will defer to the organizer. One minute? We can discuss it during the break then, so thank you very much. Now the audience can ask questions. Can you give us a three- or four-minute version? Of the formula? No, of the quantization. Okay, I can talk about that.

So quantization here is the following. First you need to understand why the usual formula cannot be used directly in this equivariant situation. Start from the theorem of Atiyah and Singer. It says the following: if you take an elliptic pseudodifferential operator P, then the index of P is simply given by a pairing between the symbol of P, which is a K-theory element on the cotangent bundle, K-theory of the cotangent bundle, and the Dolbeault element of T*M. This shows exactly why you can reduce the index of any elliptic operator to something of this form. Now, what is the problem? Here you need to make the choice of an almost complex structure, or a spin-c structure, or whatever you want; there is a choice you have to make. And here is the problem: you now have a group action that preserves nothing, and in particular there is no reason it preserves your choice. Your choice only lets you work out the equivariant index theorem in the case where the group preserves this structure. So it is not canonical.

But here is the idea. This object lives on the cotangent bundle, and the cotangent bundle is symplectic. So what you would like to do is to trade the almost complex structure for the symplectic structure: find a Dirac-type operator which takes the symplectic structure into account. Actually, it is a bit more than that in my case: it is really the real polarization, the natural polarization of the cotangent bundle by its fibres. And one good choice of spin-type structure that goes along with this: you would like to trade spin-c for its real analogue. If you just think about it for two minutes, it is obvious why you should do this. You want something equivariant under all diffeomorphisms; in terms of bundles, in terms of structure groups, that means your structure group should contain the general linear group GL. That is not possible with the unitary group or anything compact: spin-c sits over the unitary group as a double cover, whereas here we take a double cover built on the general linear group, via SL, which does contain the general linear group up to a two-fold covering.

So now, the operator; the overall construction will come very soon. If you have such a polarization, here is the Clifford setup you have to use. What is the quadratic form? The quadratic form is simply the duality pairing: here you have one-forms, so basically you have a bundle of the form V plus its dual, and you can pair the two by just evaluating one-forms on vectors. Simple as that. Now you have Clifford generators, say psi and psi-bar, with the relations: the anticommutator of psi and psi-bar is equal to one, and the squares are equal to zero. Okay, let me do R, or R squared, as the local model.
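Written out, the relations stated at the end of this passage (notation \(\psi_i, \bar\psi_i\) for the generators attached to a dual pair \(V \oplus V^*\) with the duality pairing as quadratic form; this is the standard presentation, not a formula copied from the slides):

\[
  \psi_i \bar\psi_j + \bar\psi_j \psi_i = \delta_{ij}, \qquad
  \psi_i \psi_j + \psi_j \psi_i = 0, \qquad
  \bar\psi_i \bar\psi_j + \bar\psi_j \bar\psi_i = 0,
\]
so in particular \(\psi_i^2 = \bar\psi_i^2 = 0\), which is the "square equal to zero" relation above.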
So then, here is the operator; maybe so you can see one thing first. You have spinors for this structure, and the spinors are simply the exterior forms. You have spinors, and you represent the generators on them this way: psi_i acts as exterior multiplication, and psi-bar_i is the interior product, the contraction. Okay, now here is your operator. You have a bundle over M, but you lift everything to the cotangent bundle, so that you have enough dimensions to build a Dirac-type operator. You get an operator D equal to the sum of psi_i times d/dx_i plus psi-bar_i times d/dp_i, where x and p are the coordinates on the cotangent bundle. This formula, the pairing... I am cheating a bit: the pairing is on the tangent of the manifold, and the manifold here is the cotangent bundle; so here it is the tangent bundle, and then you lift to the cotangent bundle. And now you see what happens: you get a hyperbolic operator. That is the point. You can carry out this construction on an arbitrary manifold instead of just locally. What happens is that you pick up an additional term when you lift to the cotangent bundle, because the lift requires a choice of connection. And the resulting operator remains equivariant under the action of the group, because any diffeomorphism of M lifts to a symplectomorphism of the cotangent bundle, and even better, the lift also preserves the polarization.

Okay, so now here is the point. The problem is that you want to do index theory, or calculations of characteristic classes, with this operator, which is hyperbolic, and there is essentially nothing you can do with it in terms of analysis. It is a bit long here, and you should finish now; sorry for that, 30 seconds. So analysis here is replaced by deformation quantization. What happens is the following. If you formally do the same calculations as in the heat kernel method (if you square this thing you get a hyperbolic operator, but you run the computations formally as for the heat kernel), you can still see the Todd class showing up; you can still extract the Todd class from this. So, treating the operator formally, you extract your Todd class. And then there is something that lets you come back to honest analysis at some point, which is the fact that on cotangent bundles you can deform to a semiclassical calculus instead. And from there you can actually carry out the whole analysis to...
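A minimal model of the operator written at the end of the talk, on \(T^*\mathbb{R}^n\) with coordinates \((x_i, p_i)\) and the spinors realized as exterior forms as described above (\(\psi_i\) = exterior multiplication, \(\bar\psi_i\) = contraction); this is only meant to make the "hyperbolic" claim concrete, with normalizations and the connection corrections on a general manifold omitted:

\[
  D \;=\; \sum_{i=1}^{n}\Big(\psi_i\,\frac{\partial}{\partial x_i} + \bar\psi_i\,\frac{\partial}{\partial p_i}\Big),
  \qquad
  D^2 \;=\; \sum_{i=1}^{n}\frac{\partial^2}{\partial x_i\,\partial p_i},
\]
using the relations above; \(D^2\) has split (hyperbolic) signature, which is why heat-kernel analysis is unavailable and is replaced by the formal computation and the deformation argument sketched in the talk.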
In the early eighties, Connes developed his Noncommutative Geometry program, mostly to extend index theory to situations where usual tools of differential topology are not available. A typical situation is foliations whose holonomy does not necessarily preserve any transverse measure, or equivalently the orbit space of the action of the full group of diffeomorphisms of a manifold. In the end of the nineties, Connes and Moscovici worked out an equivariant index problem in these contexts, and left a conjecture about the calculation of this index in terms of characteristic classes. The aim of this talk will be to survey the history of this problem, and explain partly our recent solution to Connes-Moscovici's conjecture, focusing on the part concerning `quantization'. No prior knowledge of Noncommutative Geometry will be assumed, and part of this is joint work with Denis Perrot.
10.5446/59264 (DOI)
Okay, Laurent is still not here... So, he gave an introduction to Kähler quantization, and I will just recall the following facts. If X is Kähler and L is a prequantum line bundle, then we consider the space of holomorphic sections; in my talk, instead of the power k I will consider the power p, that is a small difference. As in Laurent's talk, to a function f we associate the Toeplitz operator with symbol f, and we are interested in the properties of these operators as p goes to infinity. In particular it is known that we have a semiclassical asymptotic expansion and a correspondence principle: if we take the commutator of two Toeplitz operators, we get, in a semiclassical way, the Toeplitz operator associated with the Poisson bracket of f and g. So we have this relation; it is a version of the correspondence principle of quantum mechanics.

The goal of my talk is to consider another situation: to replace the space of holomorphic sections by a certain spectral space of the so-called Bochner Laplacian, which also appeared in some previous talks. So let me define the Bochner Laplacian. Now we just start with a compact symplectic manifold, and I consider, as before, a prequantum line bundle: a line bundle endowed with a Hermitian metric and a Hermitian connection nabla^L such that its curvature equals the symplectic form, where the curvature is, as usual, given by the square of the connection. In the case of a holomorphic line bundle this connection is the standard one, the Chern connection; here we are just given such data. There will also be an auxiliary vector bundle E: sometimes it is useful to replace L^p by L^p tensor some fixed vector bundle, for example the canonical bundle, or a square root of the canonical bundle, or other choices; E can have rank r greater than one. Then we consider a compatible almost complex structure J, such that omega is J-invariant, and we also introduce a Riemannian metric, again J-invariant but otherwise arbitrary. One choice is the metric associated to omega and J, but that is not absolutely necessary; we can just work with this data.

Are you assuming that the almost complex structure is compatible with omega, in the sense that it is also positive? No, what I am asking is: when you require omega to be J-invariant, do you also require omega(u, Ju) to be positive definite? Yes, of course: it will be a Riemannian metric, but not necessarily the one I chose. So you are requiring it? Yes, of course; it is the usual compatibility.

Okay. Then there is a volume form dv_X associated to this metric, and with all this data we can consider the L2 scalar product on the space of smooth sections: we take the pointwise scalar product of two sections and integrate with respect to the volume form, and by completion this yields the L2 space. Having this scalar product, we can form the operator of interest, the Bochner Laplacian.
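To fix notation, the data just listed, in one common normalization (conventions for the factor of \(2\pi\) differ between papers):

\[
  \omega = \frac{i}{2\pi} R^L, \qquad R^L = (\nabla^L)^2,
\]
with \((L, h^L, \nabla^L)\) the prequantum line bundle, \(E\) an auxiliary Hermitian bundle, \(J\) a compatible almost complex structure and \(g^{TX}\) a \(J\)-invariant metric with volume form \(dv_X\); the \(L^2\) inner product on sections of \(L^p \otimes E\) is
\[
  \langle s_1, s_2\rangle \;=\; \int_X \big\langle s_1(x), s_2(x)\big\rangle_{h} \, dv_X(x).
\]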
So the Bochner Laplacian maps the space of C-infinity sections to itself. First we apply the connection induced by L^p and the connection on E; this is a differential operator mapping sections to one-forms with values in L^p tensor E. Then, using the formal adjoint of this operator with respect to the scalar product, we compose the two and obtain the Bochner Laplacian, which already appeared in Laurent's talk. As was pointed out, we want to consider its spectrum, but the spectrum actually goes to infinity as p goes to infinity, so we have to correct the operator a bit. For that we introduce tau, the trace of the curvature: it is the trace of R^L computed on T^{1,0}X. We then consider the renormalized Bochner Laplacian, which consists in subtracting p tau from the Bochner Laplacian. One can also add a potential, a Hermitian section of the endomorphisms of E, but let us leave that aside.

What happens is that if X is actually Kähler, so J is integrable, then by the Bochner-Kodaira formula the renormalized operator is exactly two times the Kodaira Laplacian; that is just the Bochner-Kodaira formula in the Kähler case. In the Kähler case of Laurent's talk the corresponding constant was n, because he took all the eigenvalues to be one, but it is the same thing. So this operator has a good chance of generalizing the usual Kähler quantization; in particular, in that case its kernel is exactly the space of holomorphic sections.

Now back to the general case. In general we have the following theorem, proved by Guillemin and Uribe, and Ma and I gave another proof: there exist constants such that the spectrum of the renormalized Bochner Laplacian is contained in some fixed neighborhood of zero, while the rest of the spectrum drifts to infinity at speed p. So the spectrum has this kind of shape. The operator is elliptic, so the spectrum is discrete, and the upper part goes to infinity: we have a spectral gap. Of course, in the case of a Kähler manifold the whole lower part of the spectrum is concentrated at zero. So we can define the following replacement for the space of holomorphic sections: just take the eigensections corresponding to eigenvalues in this small interval. With this space we will try to run the Berezin-Toeplitz quantization. That is a very concrete proposal, and we have to convince ourselves that this space has properties analogous to those of the space of holomorphic sections.

For that we have the following observation: the dimension of H_p is given by the Atiyah-Singer index formula, which is the analogue of the Riemann-Roch-Hirzebruch formula in the Kähler case. It is p^n times the rank of E times the volume, plus lower-order terms. So first of all it grows polynomially, like p^n, and therefore it has a chance of replacing the space of holomorphic sections. Maybe let me indicate how one proves this.
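In formulas, the operator and the spectral gap just stated (the constants are not made explicit in the talk; the statement is as in Guillemin-Uribe and Ma-Marinescu):

\[
  \Delta_p = \big(\nabla^{L^p \otimes E}\big)^{*}\,\nabla^{L^p \otimes E}, \qquad
  \Delta_p^{\mathrm{ren}} = \Delta_p - p\,\tau,
\]
with \(\tau\) the trace of \(R^L\) on \(T^{(1,0)}X\); there exist constants \(C_L, C, \mu > 0\) such that for all large \(p\)
\[
  \operatorname{Spec}\big(\Delta_p^{\mathrm{ren}}\big) \;\subset\; [-C_L,\, C_L] \;\cup\; [\,\mu p - C,\ \infty),
\]
and \(\mathcal{H}_p\) is the span of the eigensections with eigenvalue in \([-C_L, C_L]\). In the Kähler case \(\Delta_p^{\mathrm{ren}} = 2\,\square_p\) and \(\mathcal{H}_p = H^0(X, L^p \otimes E)\), with \(\dim \mathcal{H}_p = p^{\,n}\operatorname{rk}(E)\int_X \omega^n/n! + O(p^{\,n-1})\) in general by the index formula.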
We use two relations; one can use the Lichnerowicz formula to prove the following. First, to this data we can associate the spin-c Dirac operator, and we show that there exists again a constant, with mu_0 the infimum of tau, such that the spectrum of the Dirac operator has the same kind of structure, a spectral gap, and the dimension of its kernel is given by the index formula. Then one shows that the two operators are not far from each other: there exists a constant such that, in L2 norm, we have a comparison estimate between them. Since the two operators are close, one can then apply the minimax principle.

Sorry, could I ask a quick question? In order to apply the Dirac operator, don't I need some kind of spinor bundle with coefficients in it? No, it is the spin-c Dirac operator, and all the data is there. So you would apply the Dirac operator to a section of the bundle I am twisting with, rather than to a spinor? Yes, you can define the spin-c operator acting on (0,q)-forms with values in L^p tensor E; there is a precise formula for it. Think of it as acting on differential forms, and you evaluate it on your section. Okay. And then minimax: for the Dirac operator we have this picture, the eigenvalues near zero may split inside a small neighborhood, but the gap stays fixed; and the same then happens for the Bochner Laplacian. Let us say that is the explanation.

Is the difference between the operators bounded, or is it something more subtle? The comparison enters through the index formula for D_p^+; in fact the statement also contains the fact that the kernel of D_p^- vanishes for large p, so it is in effect a vanishing theorem, and we apply the index formula to D_p^+. But D_p acts on something bigger that contains... Yes, of course: the kernel of D_p consists of forms of all degrees, but it concentrates, in the L2 sense, on sections, on the degree-zero part.

Okay. Maybe let me also say that another reason why H_p is a good candidate is that one can prove a projective embedding of X using the sections in H_p, as in the Kodaira embedding theorem, which we also proved in joint work. So H_p is essentially an ample space of sections; that is another reason to work with it.

Okay, and then we can run our quantization based on H_p. As before; my notation is a bit unfortunate, with the capital P and the small p, and I always have to explain this, but it is our traditional notation and we will not change it. So this is H_p, small p is the parameter (p_k would be better), and we denote the orthogonal projection onto this space, which, in analogy with the complex case, we call the Bergman projection. It is a projection onto a finite-dimensional space.
Then there is the Toeplitz operator defined by f, where f is a section of the endomorphism bundle of E, defined as before; but now we act on L2, so I add the index p. Basically we multiply by f and then project back onto H_p, so we have a family of operators. And the result is that the usual Toeplitz package holds in this case. I need to use some chalk. Our result is the following; it is a joint theorem with Ioos, Lu and Xiaonan Ma.

First: if we have two symbols, two functions, or more generally sections of the endomorphisms of E, then there exist bidifferential operators C_r(f,g), for r at least zero, with the first one, C_0(f,g), just the pointwise product fg, such that we have the following expansion for the composition of the two Toeplitz operators: the sum over r of p^{-r} times the Toeplitz operator of C_r(f,g), up to order k, plus a remainder. For each k this holds in the sense of the operator norm; these are bounded operators. So we go up to k, it is an asymptotic expansion, and the remainder is of higher order.

Okay, and now you see our goal: to consider the commutator of two operators, T_{f,p} composed with T_{g,p} minus the other order. If we do that, the fg part disappears when we take the difference, and we are left with one over p times the Toeplitz operator of C_1(f,g) minus C_1(g,f), plus higher order. These coefficients are what appear in the correspondence principle. Why do you say C_0 is the product? This multiplication is not commutative, right? No, of course: these are endomorphisms, f and g are endomorphisms, and if E is not trivial the product does not commute. So when you take the bracket of T_f and T_g, the first term is T of the bracket... Yes; but let us now take f and g to be functions, and I will give you the formula for C_1 in a moment.

Okay, so in that case there is a factor of i, and we get the Poisson bracket of f and g. Well, whether it is exactly the Poisson bracket is a matter of normalization; we normalize it this way, otherwise one should put a 2 pi somewhere. It is the Poisson bracket for this symplectic structure. In particular we obtain the correspondence principle, because when we take the difference we obtain the Toeplitz operator of the Poisson bracket; and of course T_f minus T_g equals T_{f-g}, so the map f to T_{f,p} is linear, and we have the correspondence principle.
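For reference, the operators and the expansion just discussed, written out (\(P_{\mathcal{H}_p}\) denotes the Bergman projection; normalizations aside, this is the shape of the statement as described):

\[
  T_{f,p} = P_{\mathcal{H}_p}\, f\, P_{\mathcal{H}_p}, \qquad
  T_{f,p}\,T_{g,p} \;=\; \sum_{r=0}^{k} p^{-r}\, T_{C_r(f,g),\,p} \;+\; O\big(p^{-k-1}\big)
  \ \ \text{in operator norm, for every } k,
\]
with bidifferential operators \(C_r\) and \(C_0(f,g) = fg\); hence in the commutator the zeroth-order terms cancel and the leading contribution is \(p^{-1}\, T_{C_1(f,g) - C_1(g,f),\,p}\).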
And then there is a formula. Let us assume now, as I said, that the metric is the one associated to omega and J. Then one can calculate C_1(f,g): we apply the (1,0)-part of the connection (the connection of E included) to one factor and the (0,1)-part to the other, and contract using the pairing given by the metric. In particular, if f and g are functions, C_1(f,g) is just this pairing, and this explains the earlier formula again: if we take the difference of these two, C_1(f,g) minus C_1(g,f), we really do get the Poisson bracket. You had two metrics, one constructed from J and omega and another one; this pairing is with which one? Now there is only one: for this formula we take the metric to be the one associated to omega.

Okay, and then maybe something which shows that this procedure really remembers f: if we consider the operator norm of T_{f,p}, its limit is the sup-norm of f. One inequality is obvious; the other one is not trivial. And five: this kind of expansion gives rise to a star product, which was one of the main motivations in physics; people like Berezin, Fedosov and so on considered this. We get an associative star product, a formal series, constructed simply by taking these coefficients C_r as the coefficients, with the Planck constant as a formal variable.

Let me make some remarks on previous work. If (X, J, omega) is Kähler and E is trivial, this theorem was proved by Bordemann, Meinrenken and Schlichenmaier; I am not sure of the spelling, double n I think, and I should get the names right, like the organizers' names; actually the most difficult one is the last one, I hope I put it right. Schlichenmaier has another paper in 1996. And Laurent Charles computed C_1(f,g) in this case; it appears in one of his papers. Of course, there was also other work in the general symplectic case. One idea is to consider, instead, the kernel of the Dirac operator, which already appeared; that is a very natural idea, and one can debate which choice is nicer. The Dirac operator is very natural, but it mixes forms: applied to a section, it produces forms, whereas the Bochner Laplacian just sends sections to sections. Still, it is a very good choice; there were works by Borthwick and Uribe in this direction, and Xiaonan also carried out this quantization in that case. Our method is to use the expansion of the Bergman kernel; Dai, Liu and Ma established the expansion of the Bergman kernel for that space. Maybe I should say here that the method of Bordemann-Meinrenken-Schlichenmaier was based on the theory of Boutet de Monvel and Sjöstrand: one goes to the circle bundle associated to L and applies there the description of the Szegő projection as a Fourier integral operator. In our case we use local index theory techniques instead.

And then let me say something about... how much time do I have, approximately? Five minutes? Okay, I have reached about half of the talk, so it is fine. As I said, the proof goes through the expansion of the Bergman kernel associated to H_p.
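For reference, the statements just listed (correspondence principle, norm limit, star product), in formulas, with the Poisson bracket normalized as in the talk (a factor of \(2\pi\) may appear with other conventions):

\[
  C_1(f,g) - C_1(g,f) = i\,\{f,g\}, \qquad
  \big[T_{f,p},\, T_{g,p}\big] = \frac{i}{p}\, T_{\{f,g\},\,p} + O\big(p^{-2}\big),
\]
\[
  \lim_{p\to\infty}\big\|T_{f,p}\big\| = \|f\|_\infty, \qquad
  f * g = \sum_{r \ge 0} C_r(f,g)\,\hbar^{r} \ \ \text{(an associative formal star product).}
\]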
Let me recall that to an operator we can associate its Schwartz kernel; in our case it is an integral kernel. So we consider operators mapping sections to sections, and the kernel is given by the usual formula; all of our operators will have such a kernel. What is used quite often is the easy formula for the composition: if we have two operators, then basically by Fubini the kernel of the composition is the integral of the product of the two kernels over the middle variable. So the kernels of the Toeplitz operators are given by a corresponding formula. Here I introduce the Bergman projection; its kernel is called the Bergman kernel, and we write it P_p(x, x'). That is the easy part, and it is clear that if we control the Bergman kernel, then we can say something about the Toeplitz operators.

Then there are two facts, an expansion. This is based on the work of Xiaonan and myself on the asymptotics of the Bergman kernel for these spaces H_p. Basically: A, it has rapid decay away from the diagonal; and B, it has an expansion near the diagonal. Maybe I can at least write down point A. On the complement of a neighborhood of the diagonal, a neighborhood which may shrink as p goes to infinity, the Bergman kernel is bounded by p^{-N}, for every N. And near the diagonal we have a model operator. Let me try to explain: if we fix x, we can identify a small neighborhood of x in the manifold with a small neighborhood of zero in the tangent space via the exponential map; call the coordinates there Z and Z'. Then we have the following kind of expansion: p^n times a sum of terms J_r(Z, Z') times the model kernel evaluated at square root of p times Z and Z', times p^{-r/2}, where the J_r are polynomials with coefficients depending on x and J_0 is identically one. This kernel script-P is the Bergman kernel of a model operator on C^n, the one associated to the Bargmann space, and we write it as a Gaussian, with a minus sign in the exponent. On this formula one can see the salient features of the expansion. If we evaluate at Z equal to Z', the exponential factor is one, so on the diagonal the kernel is basically p^n times an expansion; and if we put Z equal to zero, representing x, and vary Z', we get exponential decay in Z'. So the Bergman kernel decays away from the diagonal, and on the diagonal it is p^n times an asymptotic series.

And then what one does: using such an expansion one can compose operators. For example, taking the Taylor expansion of f, one obtains the expansion for the kernels of the Toeplitz operators, and then composing them with such formulas one obtains the correspondence principle and all the other properties. Okay, so thank you.

Question. Yes, you can do this. Well, you can define, I think this was done, you can pass to the circle bundle, and then one can define, as was already done by Guillemin, a complex which mimics d-bar_b micro-locally, something like d-bar_b. Using that, you can declare the corresponding objects to be the analogue of CR functions.
And the analogue of the CR functions would correspond to the holomorphic sections of your bundle: you can create a space of sections which actually behaves like the one here. But it is a different approach and a different operator; we have a given geometric operator, and we work with it. So. Is the situation where f is an endomorphism, rather than a function, physical? I think so. You can think of having matrices instead of functions, and then the correspondence principle involves the matrix bracket as well; in fact, for that case we have written a commutation formula with double brackets. So one can define something, and there is an interpretation. Okay, let's thank George again. Thank you. Thank you.
We study the Berezin-Toeplitz quantization using as quantum space the space of eigenstates of the renormalized Bochner Laplacian on a symplectic manifold, corresponding to eigenvalues localized near the origin. We show that this quantization has the correct semiclassical behavior and construct the corresponding star-product. This is joint work with L. Ioos, W. Lu and X. Ma.
10.5446/58303 (DOI)
What I would like us to achieve within this session is to understand the relevance of time series analysis for disease data: to learn how to identify and take into account periodicity, or cycles, or seasonality (you can use as many different names as you wish), how to identify the trend in your disease data, and how to investigate a relationship between two time series. We will be very, very short on that last one, I think it will be just one slide, but that is the simplest part once you have understood the rest. A disclaimer: a whole week would not be enough, so you will just have one hour, and I hope you get some useful information and enough knowledge to try to get started with time series analysis on data where it makes sense after this hour and a half. I have inserted a few bits and pieces of R code here and there that you can copy-paste, but I guess you all know R much better than I do. And I have borrowed most of these slides from colleagues at ECDC, so I really acknowledge them here; I also had to be a pain in the ass to Willy over that slide. And now we get to the real stuff.

Time series data is data points collected over an interval of time, several data points at different time points. The successive observations are usually not independent: if it is 35 degrees today, it is very likely that yesterday it was not minus five. Those observations follow each other. What do we use it for in epidemiology? We use it for surveillance, description of data and reporting; to generate hypotheses regarding seasonality, transmission cycles and transmission periods; and, once again, for information for action. We use it to detect outbreaks and respond early, and to allocate resources, healthcare facilities and so on. We can also use it for forecasting and for evaluating interventions. We will not touch forecasting today.

A few examples. First, the evaluation of an intervention: in 1998 they screened poultry flocks for some salmonella types at regular intervals, and if positive, the whole flock would be slaughtered, the eggs heat-treated, and the breeding area cleaned and disinfected. That is an intervention which, according to our time series analysis, was clearly effective on the salmonella serotypes that were targeted, as you can see in the figure below. You can also use time series for outbreak detection: you use forecasting models with prediction intervals that tell you whether you are observing more cases than you should expect, so you have a threshold for early detection and response; there is an example of an R package for this which I have never used but which was recommended to me. You also have other activities, like heat wave preparedness, that can be supported with time series.

But once again, it all comes back to the quality of the data; that is the surveillance epidemiologist in me speaking. You need to assess whether your data is good and representative of your surveillance system: is access to diagnostics similar in all areas? Is the reporting as timely across the whole country? I do not know if you have ever used an infectious disease register, but you usually have several options for a date: the date of sampling, the date of notification, the date of suspicion (well, if your register has that, I am interested), or, in some registers, the date of first consultation, when care was first sought.
So you need to think about which date you wish to use, and this is something to keep in mind, especially if reporting can be delayed in some areas. There is also something a bit weird and funny: we are attracted to round numbers. In the late 90s in the US, they looked at onset dates of notified diseases, and there were twice as many on the first day of each month. That is a reporting issue, and something you will have to think about when analysing time series of surveillance data whose quality is not that great. Garbage in, garbage out: assess the quality of the data, thank the epidemiologists if the data is good, thank the data managers who run the surveillance system, and do not call the surveillance system a cow with makeup; some colleagues will let you know about that.

How do you want the data you will be using to be? You want it aggregated, and you will need to think about the time unit that is relevant for your work. I like to use months, because I usually work on small things like tick-borne encephalitis cases in Finland, but for influenza transmission or something like COVID a weekly basis can be quite interesting. You will also need to think about your areas, especially if you want to do panel data analysis, and if you are considering covariates such as weather: with a very large area you will totally dilute the effect of that parameter. So you always have to take everything into consideration. There are a few R packages of interest that I have listed here; you can play around with them during the crash course, and I will stay for the crash course, maybe, depending on whether my luggage gets here.

The time series has four components, three and a half really. You have the trend, which is whether the frequency of an event increases or decreases over time. Then you have what we can call periodicity: whether the occurrence of the event follows cycles, and cycles are conventionally defined as longer than a year. Then you have the seasonal component, which is what happens over a short period, less than a year. And then you have the random component; when you build your model, this will mostly end up in your residuals. As for daily variation: if we are thinking of disease data, we are not going to look at variations within the day, which is why I never prepared for that question. It would be something like a periodicity, but I would not call it a cycle; for me they are both periods, but in epidemiology we like to call "seasonal" something below a year and "cycles" something spanning several years.

So if you decompose your time series data, it is built up of three things: the trend over time, which here bends a little; the periodicity, because here we are looking at what appears to be a 52-week cycle (I forgot the scale on this slide, shame on me); and then your residuals, the random component.

So what is the trend? That may be a bit too simple for you: it is the long-term direction of the time series. You have a trend if there is a long-term increase or decrease; your data can also have no trend, some things are super stable; and a trend is not necessarily linear. How do you identify your trend? Super easy: you plot your data. How do you analyse it? There are plenty of options, and it can be a bit complicated.
If you look, for example, at weekly cases of legionellosis here, it can be a bit complicated to see whether there is a trend, because the series looks rather flat; but with a quick and dirty linear regression you see that there is an upward trend over time. Well, this one was too simple, we can skip it, and that was the same again. Still, as I was saying, when the trend is not clearly linear it is complicated to get a direct idea of what it is. So, step one: you plot your data; that is fairly easy, I think you all know that already. Step two: you can use different techniques to analyse the trend, and the easiest is to smooth the series with a moving average. What is a moving average, you ask? In epidemiology we use centred moving averages. For example, with weekly cases of a disease, a five-week moving average centres on the week of interest, takes the two weeks before and the two weeks after, and averages them. This removes the very short-term variations and gives you an idea of the trend over time. It also means that if you want to investigate a disease over, say, 2020 to 2022, when you ask your friendly data manager for the data, you ask for a couple of months before the start and after the end of your period of interest, because if you come back to him two weeks later to ask for them, and you have called his register a cow with makeup, he will not be happy.

The wider the window you choose, the flatter and more stable your curve will appear; this is just for you to see what happens when plotting. You can see that with a 51-week window over a period of more than ten years there is a clear, very slight increase over time, which might be very difficult to see with a five-week or a 25-week window. Are you still following? Okay, good.

Now, a simpler solution: if, like me, you cannot see anything in those kinds of graphs, go for a very simple linear regression where you just put your time variable (year-week if you have weekly data, or month) into the model, and the coefficient for it gives you the rate of increase of cases over time. I have a personal preference for negative binomial regression instead of linear, because it is less cumbersome regarding assumptions; we will touch on that a bit later. The residuals are then the difference between your observed and your predicted values, and we will also discuss those a bit later. You have a few assumptions when using linear regression to assess a trend: you want the association between the two variables to be linear (though if an intervention occurs at some point, you might want to add that to your model, which is another topic); you want your errors to be independent; you have to respect normality, meaning for each value of x the values of y are normally distributed (and that is where I am a little out of my depth, being more of an epidemiologist who looks at the output, decides it does not look like it is supposed to, and goes ahead and does something else); and you want equal variance over time.
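Going back to the smoothing step above, here is a minimal sketch of a centred moving average in base R; `cases` is a placeholder for a vector of aggregated weekly counts (one value per week), and the window widths simply mirror the 5-, 25- and 51-week examples mentioned.

# Centred moving averages of weekly case counts, base R only
ma5  <- stats::filter(cases, rep(1/5, 5),   sides = 2)   # sides = 2 gives a centred window
ma25 <- stats::filter(cases, rep(1/25, 25), sides = 2)
ma51 <- stats::filter(cases, rep(1/51, 51), sides = 2)

plot(cases, type = "l", col = "grey", xlab = "week", ylab = "cases")
lines(ma5,  col = "blue")    # short window: still wiggly
lines(ma51, col = "red")     # long window: flatter, shows the slow trend
# The first and last (window - 1)/2 values are NA, which is exactly why you ask your
# data manager for a couple of extra months on each side of the study period.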
So here is a simple linear regression where they assessed the trend; the coefficient is what you are interested in. I am sorry, this is a Stata output, I did not have time to redo them in R. That coefficient gives you the slope of your trend over time, and here you will not be looking at p-values. So this is how your simple linear regression actually looks for these data.

Now we have to talk about the residuals. You probably all know more about modelling than I do, but the residuals are the difference between what you have observed in real life and what your model estimates. You want so much from those residuals: you want them to follow a normal distribution, you want their variance to be stable around zero, and this example here, I am not happy with it. So you either need to find other options, other models with fewer assumptions, or you can log-transform to reduce the spread of the residuals, and then you end up with something that looks much nicer, the type of residual plot I am happy with. But the problem with logging is that the interpretation of the coefficients changes and gets a bit more complicated: for each unit increase in the explanatory variable you get a multiplicative, percentage change in the outcome. Seriously, rather than a log transformation, I would always go for negative binomial.

Some key ideas for your trend analysis: plot your series and your moving average; look at the structure and describe it; decide whether a log transformation is needed, and decide not to do it; fit your regression model; interpret it; check your residuals; and make sure you interpret your coefficients correctly, because the numbers need to make sense in the real world. A 0.0001 percent increase per month or year is not that interesting, unless you are working on a million years of data.

Okay, next component: the cycles and the seasons. Here you can see in the data cycles with a period of three to four years; this is exactly what you want to look at and take into account when modelling time series data, and this is how a model would pick it up. The seasonal variation is within a year, and it tends to repeat itself each year; that is Campylobacter, for example, and you can see it clearly. And here is a brand new one we had a quick and dirty look at this week: chlamydia infections notified in Finland. You can see that COVID restrictions did not do much to the spread of chlamydia over the past two years; if you know anybody devoted to the zero-COVID dogma, just let them have a look at that. And then you have the random variation, which I will come back to a little later.

So your classic time series is: the trend; plus the cycle, if there is a cycle and if your period is long enough to see it, because if you only have two years and the cycles are much longer you will not catch them; plus the seasonal periodicity; and then the random noise. Every time series plot can be decomposed like this, and by adding up the trend, the cycles, the noise and everything, you might end up with something not too far from real life. Here is an example of what we did for tick-borne encephalitis in Finland over eleven years: we had a trend over time, not linear, plus 12-month cycles, and as you can see we were not too bad at predicting the incidence of TBE. Except that we did not really know what would happen afterwards: in the two years that followed we had a much bigger increase, and now my model sucks.
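Going back to the "negative binomial instead of linear (or log-linear)" point above, here is a minimal sketch of a trend model with `MASS::glm.nb`; the data frame `df`, with columns `cases` (monthly counts) and `time` (1, 2, 3, ...), is a placeholder, not the TBE dataset itself.

# Negative binomial regression for the trend, avoiding the normality and
# equal-variance assumptions of the linear model on counts
library(MASS)

fit <- glm.nb(cases ~ time, data = df)
summary(fit)

# Coefficients are on the log scale: exp() gives the multiplicative change in the
# expected number of cases per time unit (a number that should make real-world sense)
exp(coef(fit)["time"])
exp(confint(fit)["time", ])

# Quick residual check, as recommended above
plot(residuals(fit, type = "deviance"), type = "h",
     xlab = "month", ylab = "deviance residuals")
abline(h = 0, lty = 2)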
So, how do you assess seasonality other than by plotting? Spectral analysis, which is a very fancy term for a periodogram, is what you want to do first, and this is where I have copy-pasted some R code. If you have a dataset with weekly or monthly case counts (keep in mind that this does not work with panel data, you need it fully aggregated, one observation per period), there are very easy functions in base R that will let you plot a periodogram. You have to decide on a period: here, for example, we decided we wanted the scale of the periodogram to be 52 weeks. Looking at it, the periodogram has two main peaks, plus a third one. Those two peaks are the main period, which, because we chose the 52-week scale, is 52 weeks divided by one, so a 52-week period, and 52 divided by two, a 26-week period. And that is when the fun starts, because every cyclical curve can be decomposed into sines and cosines: when you want to add a periodicity or a seasonality to your model, you just generate a sine and a cosine of those periods.

Key messages: you plot your series, you detect and describe the pattern, you play around with sines and cosines, then you interpret your coefficients in terms of something that has to make sense (that 0.0001 increase per ten years is not really interesting), you try to keep it simple, and then you check your residuals. And because of all that trouble with residuals that always makes me unhappy, there is another possibility: consider models other than linear regression. I have a strong preference for negative binomial regression, because there are fewer assumptions to bother with, and more veteran statisticians would tell you that, now that we have the computing power to run them easily on almost any laptop, it is quite easy to go straight to that; and it allows epidemiologists to pretend to know what they are doing, which I really like.

[Second speaker:] What I will talk about is also time series, but now it is space-time: time series where we work with both space and time. There are basically two kinds of analysis you can do when you have both space and time. You can do a simultaneous analysis of trends; one option is the sparr package in R, which fits space-time densities, but this only works with occurrence-only data, so no quantities. The second option is to first harmonize everything in space and then analyse the time series, so that you focus only on time. I will show you sparr a bit tomorrow, but our main focus is on option number two: first harmonize the space, then analyse the time series. And what we do, basically, is apply the type of modelling you saw from the previous speaker, but for every pixel, so it becomes very, very computational, super computational. Also important to discuss: there are seasonal components and there are cyclical components; I am not going to separate them here, for me they are all sinusoidal-type curves. Then there are also breakpoints; I think you may not have those in health data, but in vegetation data there are breakpoints, for example when there is a fire or another natural hazard. And the distributions also change over time, which is also important when you run the analysis.
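Going back to the periodogram and the sine/cosine terms described earlier in this session, here is a sketch in base R plus MASS; `df`, with columns `cases` (weekly counts, fully aggregated, one row per week) and `week` (1, 2, 3, ... across the whole study period), is again a placeholder.

# 1. Periodogram: with frequency = 52, peaks near 1 and 2 on the frequency axis
#    correspond to roughly 52-week and 26-week periods
cases_ts <- ts(df$cases, frequency = 52)
spec.pgram(cases_ts, log = "no")

# 2. Harmonic (sine/cosine) terms for those periods, added to the trend model
df$sin52 <- sin(2 * pi * df$week / 52)
df$cos52 <- cos(2 * pi * df$week / 52)
df$sin26 <- sin(2 * pi * df$week / 26)
df$cos26 <- cos(2 * pi * df$week / 26)

library(MASS)
fit_season <- glm.nb(cases ~ week + sin52 + cos52 + sin26 + cos26, data = df)
summary(fit_season)
# 'week' carries the trend; each sine/cosine pair carries one seasonal cycle.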
Let me show you this example; it is from Wikipedia, by the way. If you open this image on Wikipedia and look at the metadata, it looks like a real dataset, but it is not: it was generated through R code, it is simulated, a dummy dataset, and they played with it a bit to make a point. If you use the raw data, the simple trend line is almost worthless, an R-squared of about 0.08; but if you do the filtering first, the model has an R-squared of 0.97. So it is the same dataset, but you see there is a huge difference depending on how you implement the modelling, going from essentially no correlation to a very high R-squared. You can take a look at that; it is a classic example. And, as the previous speaker also mentioned, there are these components: this would be the original signal, you can extract the seasonality, then you can detect the trend, and the remaining part is the random component. The random component is noise, and please never fit a model to noise; that is physics, not statistics. Seasonality and trend are what we are interested in. I only put one slot here, lumping the cyclical and seasonal components together, but I agree there are more components.

Number two (and there are versions A, B, C, more of them): this is the global temperature record for the last 10,000 years, and the blue band, I think, is the uncertainty. What you see depends on the scale you look at: since the glacial period 10,000 years ago, when there was a cold period, the world has been warming up, and then it had actually been cooling down; but now we have the industrial revolution, the red line, and that one degree is completely off the scale, very difficult even to visualize, a real hockey stick. So this time series analysis also depends on how far back you look.

Okay, now the space-time part: as I said, we are interested in space-time analysis of trends. Here is one dataset; my colleague Leandro here has been running this analysis. We downloaded the whole MODIS dataset, almost 14 terabytes of data, and prepared a data cube, and then we ran the time series analysis, because we would like to see where the NDVI, the normalized difference vegetation index, has a negative trend and where it has a positive trend. NDVI is a vegetation index, a measure of the primary productivity of the landscape, so this is a measure of land degradation, let's say. We prepared these data cubes, which, as I said, can be really large, especially if you do it globally: we have Europe at 30 metres and the globe at 250 metres. Then we want to estimate the trend, and the first thing we realized is that there are many missing values, a lot of missing values.
There are also artifacts and some outliers. So we have to remove the outliers and do gap filling, and after that we can do the trend analysis. As a result of the trend analysis, which is just fitting a linear model to the data, we get a map of the slope and of the alpha, the intercept, which is the starting NDVI in 2000. A negative slope means a decrease in NDVI; positive, the red colour here, means an increase. But as I said, in between we have to derive the whole time series again: we first have to process all the layers to fill in the gaps; after we fill in the gaps, we remove the seasonality component, because NDVI changes between winter and summer, and winter and summer are a bit different for every pixel in the world; and only then can we calculate the trend per pixel. And imagine, these are huge images, something like 150,000 by 100,000 pixels. We take the time series of each pixel and run the models through time for every pixel, in parallel, and it takes about one day of computing at full capacity (right, Leandro? For this NDVI it is about one day). This is one piece of raw data, the big monthly image for January, and you see there are a lot of missing pixels; if we tried to compute any trends on this data it would not be possible, because too many pixels are missing. So first gap filling, then de-trending, or rather removing the seasonality, and then the trend analysis. This was our computing setup, with 400 threads; we now have 1,000 threads, so we can speed it up a bit, but this is what you see: running on servers, processing every pixel's time series in parallel.

And this is the NDVI after gap filling: you can see how our world breathes, basically, the vegetation changing. Leandro made this visualization; it is really mind-blowing, many people were interested in it, and it is nice to see it after you remove all the gaps. But you see there is a seasonality effect, so if we fitted a trend line directly it would be difficult, just like in that first Wikipedia example. So here is one pixel: this is the original NDVI, this is the seasonality, and this is after removal; after removal you see we have just the trend, and these are the residuals of the model. For this pixel we estimate that there has been an increase in NDVI over the last 20 years; it could be that they started planting, reforesting, I do not know. And this is the same image of the world, the same month, but after removing the seasonality. Do you see a difference? If you watch carefully you can see there are some differences, and these could be differences due, for example, to El Niño and La Niña effects, or to land degradation; there are multiple possible causes. But now the seasonality is taken out, so you can see in space a feature that the previous speaker explained only in time: we look at what has happened to the vegetation over the last 20 years, with the seasonality removed. Of course, we could also simply compare the NDVI at the beginning and at the end, which is also one way to detect change, but that alone would not work, because just by accident you could have a small difference between the beginning and the end, whereas estimated through the whole time series the change is much more robust.
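As an illustration of the per-pixel workflow just described (gap filling, removing the mean seasonal cycle, then an OLS trend), here is a sketch for a single pixel's monthly NDVI series in R; `ndvi` is a placeholder vector with possible NAs, and this is not the production code used for the global runs.

# Per-pixel trend for one monthly NDVI series starting in January 2000
pixel_trend <- function(ndvi, start_year = 2000) {
  x <- ts(ndvi, start = c(start_year, 1), frequency = 12)
  x <- zoo::na.approx(x, na.rm = FALSE)              # simple linear gap filling
  m <- cycle(x)                                      # month index 1..12
  x_ds <- as.numeric(x) -
    ave(as.numeric(x), m, FUN = function(v) mean(v, na.rm = TRUE))  # de-seasonalize
  t <- seq_along(x_ds)
  fit <- lm(x_ds ~ t)                                # OLS trend on the de-seasonalized series
  c(intercept = unname(coef(fit)[1]),
    slope     = unname(coef(fit)[2]),                # > 0 greening, < 0 browning
    r2        = summary(fit)$r.squared)
}
# Applied to every pixel of the stack this is embarrassingly parallel, which is why
# the global runs take on the order of a day on a few hundred threads.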
And then this is the result: an ordinary least squares model fitted for every pixel. Red means a decrease in NDVI, green an increase. So what is the conclusion, what do you see immediately? The world is getting greener. It has been published in big journals, and the world is getting greener mostly because of two countries, China and India, exactly. Europe is, let's say, neutral, although Eastern Europe also looks like it is getting greener; but there are places, the reddish and pinkish spots, where the NDVI has obviously been dropping. This is the model fitted for every pixel, and, also very cool, you can save the R-squared for every pixel. So this is the R-squared of the trend analysis, and now we can mask out everything and show only the pixels where the R-squared is higher than 0.25, meaning there is not only a fitted trend but the trend is, let's say, significant; and you see many pixels drop out. And this one shows the probability of the slope being significantly below zero. We can compute all these images: from this trend analysis Leandro usually computes something like 16 images, the different coefficients and components, including the periodicity if you want, and you can visualize them.

Then we can zoom in, somewhere in France: I think Leandro found two spots, one strongly reddish and the other strongly green. I do not know exactly where these areas are right now, but you can see, when you zoom in on the NDVI data, that here there is clearly an increase in NDVI and here clearly a decrease. I think this area is close to some place, I am not sure, we would have to check, but it could be that there were forest plantations which were then cut down, a clear cut. In Brazil, too, there is a lot of conversion of land to agriculture, and if you zoom in you can clearly see that in the places where they convert forest to agriculture you lose NDVI. So we analysed this data, and the data is available, and you can see that the cities, as they grow, cause the NDVI to drop, of course. So a lot of it matches what we expect. Sorry, can I just ask: why would India or China get greener? Because they intensify agriculture: they put more fertilizer in the soil, they had to produce more food, and so it gets greener. The question was why China and India get greener; is that correct, Leandro?
It's more intensive production — they also rotate crops through the year, so you can have two or even three harvest seasons — so they intensified production. That is what we did for the NDVI. Then we said, okay, let's do the same thing for the temperatures: exactly the same, you fill in the gaps, you remove the seasonality, and we did the same analysis — and people went nuts about this. Again, you can do it for the daytime temperature and the nighttime temperature, and you can see that China and India are getting cooler in the daytime, maybe because they get more vegetation, while the nighttime temperatures are more neutral, they are not changing so much. In general, what you see as a trend — blue is negative, red is positive — what is your first impression? There is global warming, obviously; there is more red than blue. And where does the warming happen the most? It looks like Russia and Alaska, and in places where, for example, they cut the forest and put in cropland — you see this whole front line of land conversion showing up reddish, so there is a real change in temperature. It is also interesting to see the big difference between daytime and nighttime temperature. And this is a zoom-in on Montpellier — I have this map, and tomorrow I will show you how you can access it — red and blue again, and you can see that in general most of France has the same temperature, but this place here, for example, where they have these plantations of eucalyptus or something, and the wetlands here — I think this is the French Basque country — looks like there is an increase in temperature for some reason. Also the mountains are warmer; the Pyrenees seem to be warmer, and this whole area seems to be getting warmer, but gently, not a huge increase. Okay, the tiger mosquito. That was the environmental time series part; tomorrow I will show you the whole case study. We get the GBIF data on mosquitoes: I think we start with about 50,000 records of these mosquitoes from GBIF, then we subset to Europe and finish with about 28,000 points, which is a significant dataset. For each point we know the date, the year, and the individual counts — they even have counts of mosquitoes — and that is the data I also put in the dashboard that Francesca was showing. So what we do then is analyze the mosquito spread using space-time machine learning: we build the model, we produce a time series of images of occurrence probability for the mosquito, and once we produce these images we can visualize them. This is different years, going from 2000 to 2021, and you can see the mosquito slowly spreading inland — I am just showing Spain here — where red is of course a high probability of occurrence and yellowish is low. Yes, I can show you that today — that is the thing I will talk more about; it is all in this tutorial. It is a bit tricky: we had to generate pseudo-absences, about 10 percent pseudo-absences, and we generated them using the maxlike package.
The mosquito climatologically follows specific areas very clearly, so you can generate these pseudo-absences — these are the pseudo-absences — and then when we do the modeling we take about 45 covariates: temperature, rainfall, night-light images, human impact, snow, and so on. When we fit the model, the model is significant, and the most significant covariates that come up are actually the night lights — the time series of night lights — and then travel time to cities and ports; those come out at the top of the list. Once we have that, we can make predictions, and we produce predictions based on... sorry, a question: so the occurrence-only data — that is what the biologists like the most, and this is basically occurrence-only data, right? It is fine, except you cannot do any machine learning with it — at least I don't know what you would use. Anything in machine learning is either a classification, regression, or survival problem, and for classification you cannot have only one state; you need at least two states to start, a zero and a one state — where it occurs and where it does not. With occurrence-only data you only know where it occurs, not where it doesn't, so the machine learning cannot be trained. So we need to insert pseudo-absences. There are lots of methods for pseudo-absences in the literature, and most of them suggest being very conservative: only insert a small number of points, for example around 10 percent, and locate them very conservatively in places where you know the mosquito does not occur — one example is the top of the Alps at minus 20 degrees, or places like Iceland. You say these are the pseudo-absences, and then the model at least knows where the species is absolutely not coming, and then you can make a map and get these predictions. Of course you can introduce bias — pseudo-absences are difficult to validate, and we add them only to bridge that gap so that we can do machine learning — and most of the literature suggests being conservative and limiting them to, say, 10 percent, maybe 20; it is not a strict number. If you look at the literature — this is the key reference — you can read about it: number one, in this paper they show that good-quality pseudo-absences actually help the modeling, but they also say you should limit the pseudo-absences to locations you are highly certain about, because imagine I place a pseudo-absence and the mosquito does appear there — then I confuse my model and introduce a bias. Does anybody have experience with pseudo-absences? Yes — you could even do just a hundred pseudo-absences and for machine learning it would be enough, I hear; I use about 10 percent and it is really no problem. When we do the modeling for these mosquitoes we get really distinct models and a good match: we work in probability space with errors of about plus or minus 0.1, which is a relatively good match. You cannot express it as an R-squared, because the response is a binomial variable.
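Since pseudo-absence generation comes up repeatedly here, the following is a minimal sketch of one common conservative strategy — sampling background points far away from any known presence — and not necessarily the exact procedure used in the workshop (the speaker mentions the maxlike package). The data frame name, column names and buffer distance are illustrative assumptions.

# Sketch of conservative pseudo-absence generation, assuming a data frame 'occ'
# with lon/lat columns of presence records (names are illustrative).
library(sf)

occ_sf  <- st_as_sf(occ, coords = c("lon", "lat"), crs = 4326)
study   <- st_as_sfc(st_bbox(occ_sf))                 # crude study area; use a real polygon in practice

# exclusion zone: everything within ~100 km of a known presence
buffer  <- st_union(st_buffer(st_transform(occ_sf, 3035), 100000))
allowed <- st_difference(st_transform(study, 3035), buffer)

# draw roughly 10% as many pseudo-absences as presences from the allowed area
n_pa    <- ceiling(0.10 * nrow(occ_sf))
pa_pts  <- st_sample(allowed, size = n_pa, type = "random")

# label presences (1) and pseudo-absences (0) and combine for model training
train <- rbind(
  data.frame(st_coordinates(st_transform(occ_sf, 3035)), presence = 1),
  data.frame(st_coordinates(pa_pts),                     presence = 0)
)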
But when you look at the maps you can see really distinct patterns, and it matches the point data very well, and the covariates that come out — very clearly the night-light images and then the travel time to cities — kind of match what we would expect. Yes, I have about 150 covariates, but it is computationally expensive to do selection, so we just put in 45 for the exercise; there are a lot of covariates. The covariates are prepared as cloud-optimized GeoTIFFs — I will show you — so you can connect to them and do your own analysis; you don't have to download them. They are available as a geospatial database of about 25 gigabytes of data — we can add more if you need — and they cover a time series from 2000 to 2021, but we also have data from 1985, so if somebody wants to go back in time that is also possible. Yes, this one — you think the quality of the satellite, a bias there, could have affected it? So, about the NDVI analysis, Leandro: were there any problems with MODIS, in the sense that the images introduce a bias, so that maybe the NDVI is overestimated because of the MODIS sensors? It is the same sensor, no? — Yes, MODIS is a high-quality product, and it is the same sensor running since 2000. There are several reprocessing steps to remove the effect of the atmosphere, so this NDVI is based on what we call surface reflectance: it tries to simulate what the response of the land would be for a sensor close to the surface. So I don't see a bias in this product, but it is a very nice question, because this product is Collection 6, which has been running for several years now, and there is a paper that actually showed there was a significant difference between Collection 5 and Collection 6 — they fixed it in Collection 6, so you could not see this kind of pattern in Collection 5. Every time there is a new collection of the MODIS product they reprocess the whole archive, and they have been doing that for quite some years now, but Collection 6, as far as I know, is high-quality data. So the answer to that question, then, to Frank, is that most likely these NDVI trends are correct, and they match the literature; they also match Landsat NDVI data and so on. The paper you mentioned actually analyzed about three satellites — I think they also considered SPOT and PROBA-V — and all three satellites showed a greening trend for the world. Then there was a question about whether there was any ground truthing. Yes, for NDVI there is: extreme events, for example, are documented — fires, clear cuts of forest — so you don't have to go into the field; you can find fire and clear-cut datasets and compare, and you can see that you get a negative NDVI right away when you have a fire or a clear cut, a negative beta in the NDVI trend. So yes, there is ground truthing — not in the sense that you go on the ground, but you can get very high resolution images; today you can get satellite images down to 30 centimetres, did you know that? And with those you can see that somebody really cut the forest, or that there was a forest fire, a landslide or a flood, and then the NDVI drops.
Please send us the link to the covariate sources? No problem. For the covariate sources there is a CSV file — I just have to open it on GitHub, under inputs and then CSV, somewhere... here: the MOOD layers at one kilometre. All the layers are perfectly stacked, they are all 100 percent gap-filled, and they are all cloud-optimized GeoTIFFs, so you can just grab them from here, open them, and you can see the source ID, see what they are and what they mean; usually they are all published in publications. At the moment there are almost 450 layers, but as I said we have much more — I just didn't want to overload you; we could probably easily put in one and a half or two thousand layers. These are all cloud-optimized GeoTIFFs running on S3, so you don't have to download them; you can just make a connection in code and then do extract, crop, overlay — you can manipulate them, and like an SQL query it will only pick up the data it needs. It is a proper data cube. Okay, this was my little contribution to time series analysis; we look forward to showing you how to do this in an hour. I also included two ways you can make these trend maps. One way is to use a package that will do it: there is the TrendRaster function in the greenbrown package — is anybody using that package? It is for time series analysis of images: you can detect breakpoints, you can set up the seasonality effects, and you can specify, for example, that there are up to 10 breakpoints. Sorry — a breakpoint is, let's say, an abrupt change in values in the time series data, so that would be a clear cut of a forest, or a fire, or a flood. Yes, what they call a breakpoint. So that is the greenbrown package, but it does not run in parallel, and it is also a bit of trouble to install — it is not on CRAN anymore, I think, so it is a bit more trouble to set up. So what I did here — this is horrible code, but look at this — this is running linear modelling in parallel for every pixel. You make a data cube, you have a time series of, say, 30 years, and then you can fit any model you like to every pixel's time series and it will run in parallel. I analyzed here about 1 million pixels, maybe a bit less, in 20 seconds with 32 threads — 20 seconds, poof, done; with more threads it would be two seconds. And this is the code to do the parallelization. It is a bit abstract — with parallelization there are steps you have to go through — but the important thing is that in R it is possible to crunch really large datasets, and this code is fully scalable: I can apply it to any type of problem where I have to do time series analysis of data stacks. I could also plug in different models, maybe something like the seasonality decomposition models Tim was showing — I could put a lot of things into the model. Okay, so that is it from me.
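The speaker's own parallel script is not reproduced here, but the following is a minimal sketch of the same idea using the terra package: fit an OLS trend to every pixel of an already gap-filled, de-seasonalized raster time series, in parallel. The file name, band layout and number of cores are placeholders.

# Sketch of per-pixel trend fitting on a raster time series (file name is a placeholder).
library(terra)

ndvi <- rast("ndvi_monthly_deseasonalized_2000_2021.tif")   # assumed: one band per month, gap-filled

trend_fun <- function(y) {
  # y is one pixel's time series; time step assumed monthly
  if (sum(!is.na(y)) < 3) return(rep(NA_real_, 3))
  t_yrs <- seq_along(y) / 12
  f <- lm(y ~ t_yrs)
  s <- summary(f)
  c(coef(f)[2], s$r.squared, s$coefficients[2, 4])   # slope per year, R-squared, p-value of slope
}

# app() applies the function to every pixel; 'cores' runs the chunks in parallel
trend <- app(ndvi, trend_fun, cores = 8)
names(trend) <- c("slope", "r2", "p")

# keep only pixels where the fit explains something, e.g. R-squared > 0.25 as in the talk
slope_masked <- ifel(trend$r2 > 0.25, trend$slope, NA)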
In this session, Timothee Dub (Finnish Institute for Health and Welfare, Finland) and Tom Hengl (OpenGeoHub, Netherlands) discussed the basics of time series analysis, including with panel data. They looked into how to take seasonality into account, how to identify a trend, and how to investigate the relationship between two time series, with a focus on practical tips and R packages. By the end of this lecture, participants are able to analyze surveillance data, identify seasonality and investigate potential trends.
10.5446/13750 (DOI)
Hi everyone, I'm Francesca, I'm an environmental engineer and I'm working as a researcher at Fondazione Edmund Mach in Northern Italy. Before this I worked in several different companies, both as a data analyst and as a data visualization specialist. I'm here today to talk to you about what we have been doing over the past months in my institution in the framework of the MOOD project. Specifically, we retrieved data and information about covariates from the literature, from scientific papers, and then, to better explore and visualize them, we created a dashboard that can be shared, that summarizes our findings, and that will also be part of the MOOD platform; we will see this better in a while. First I will give you a little introduction on what we did and some basic knowledge that you will need to complete the tutorial I prepared, where we will, let's say, play with Google Data Studio together. I will show you the most important functionalities of this software to help you get set up, and then you can try and play with it on your own. So what we will do today is learn how to recognize information about covariates in the scientific literature, how to put it into well-organized spreadsheets, and then how to visualize it with this software, Google Data Studio. The basic skills we will acquire are: how to extract this kind of information; some basic statistical knowledge, just to know what these covariates are; the relational data model, which is the theory behind connecting data sources and tables; and some knowledge of data visualization software. The only requirement is a Google account, and if you check the Mattermost channel related to this lecture I put there a couple of links to help you set up. We will see them later together, but if you haven't done it already you can go there: there is the link to Google Data Studio and the link to the Excel sheets I prepared for the workshop. These are the concepts we will go through in the first part — I don't think it will take much, maybe 20 minutes or so. First I wanted to tell you about covariates, because I don't know if all of you are familiar with this topic. Covariates are the factors related to disease emergence and spread, so they are variables connected to, for example, the incidence of a disease or the number of cases. This kind of data is needed to map disease risk and the spread of a disease, and we can summarize them in some macro-categories. You can see here I divided them into environmental, ecological, human and climatic drivers based on their specific field: we have, for example, land use, vegetation cover, changes in wildlife and vector distribution; human drivers such as population growth, lack of education, unemployment and human mobility; and climatic drivers such as climate change, which is one of the most important drivers of recent years, and then factors such as precipitation, temperature and relative humidity. In the MOOD project we were focusing on tick-borne encephalitis, and our task was to identify the drivers connected to this disease. How did we do this? We had to go through a systematic literature review.
So we searched and collected all the papers relating to tick-borne encephalitis and its drivers, and in the end we extracted data from almost 70 articles — it was a time-consuming job — and we ended up with a lot of information, so we had to figure out how to summarize it. This is just an example of what covariates look like in the scientific literature: as I already told you, they are variables that are expected to vary with the response variable of a study, and here I put a couple of articles that were part of our literature review as an example. This is a work done by my group in Italy, where they studied how tick-borne encephalitis hazard was affected by hosts and forest structure. The variables in articles are mostly put into tables that summarize the statistical evidence, the statistical models, and here I underlined the variables that were part of this study. What we did during our work was take all of this information and put it into Excel spreadsheets, which we will see later together. And here is another work, because this information is found not only in tables; sometimes we had to go through the results section of the papers to extract this kind of data — this is another study about tick-borne encephalitis and rodent abundance. So we read all of these papers and then we had to find a way to come up with useful data spreadsheets that we could summarize, and then use this information in models to predict the risk of this disease. And this is just to show you what we did concerning the extraction: these are the same sheets that you can find in the Google Sheets link I put in the Mattermost channel, so if you go there you can see these spreadsheets compiled with some data. The first sheet was a reference one, where we put all the information concerning the article: the author, the reference, the year of publication and so on. Then we had a couple more sheets, all related to covariates, and we made a different extraction sheet for each type of covariate, identified by the cov type field, where we put either environmental, human or animal. Then, since the variables found in studies are really diverse and difficult to group, we defined this cov group category — which can be, for example, temperature, precipitation or hosts — just to group them and get an overview of what we had. Then, just to go really quickly through this: we extracted the values when they were available; the data sources, which are another important piece of information when it comes to modelling, because you have to know where the data come from and how to retrieve them; the type of analysis; and the response variable, which is the variable we are interested in — it can be, for example, vector presence, disease incidence, the number of cases and so on. And lastly we had one sheet containing geographical information, where we put all the geographical information reported in the articles. Some articles also had latitude and longitude data, which was really good because we could then map them to see where the studies were performed, but some others only had the country or region information. I will just skip this part, because we will have a couple of minutes before the tutorial to retrieve the data, and we can do this later.
So, the dashboard. What we did after our data extraction was compile these spreadsheets — you can see an example here — and they were really long; the one I prepared for this tutorial is not that long because it contains fake, fabricated data, but we ended up with more than a thousand rows across different tables. We then wanted to summarize this evidence and write a scientific paper about it, which we are currently doing, but we had to find a way to summarize the information because we couldn't wrap our heads around it. So we took this Excel file and put it into Data Studio, and we ended up with this dashboard, which is much more user-friendly and summarizes the variables found in the articles. This is really nice because it can be shared among people working in the same group, and those same people can work on the underlying data tables, so it allows for collaboration and integration — and this is what we will hopefully build today. Just a quick comment on why we made a dashboard, because I know this was controversial this morning and there were different opinions about dashboards. What I can tell you is that it is really important to have a reliable source of data to build it upon: what is tricky is not the dashboard but the data you put into it. Of course, if you give a dashboard built on data that can be biased, such as the one we saw this morning, to users who are not experienced, that can be dangerous, because if you don't give people the right information they can draw wrong conclusions from it. But if you prepare the underlying data in a really clean way, you are sure of what you are showing, and you also give some instructions to the people who will be using it, I think it is a really nice instrument — also because, as a nice infographic says, we are visually wired, so we are more prone to acquiring information when it is presented visually. This helps in understanding information more quickly and sharing it among different people in an efficient and fast way. A question: so it has GIS functionality out of the box, but is it limited to, say, countries — can you not go further than that? No, you can actually. You can? Yes. Okay. Can you also bring in your own geographic data? No, I don't think so, because — we will see it later — when you import data into Data Studio you have to change the data type and set it to a geographical type, and there are predefined categories in Data Studio such as country and region. And for dynamic data, like spatio-temporal data, can you make a slider or something? Yes, yes, you can make sliders and controls, whatever you want. So yes, there are many tools and vendors, and I chose to go with Google Data Studio because it is free and everyone can access it. It is a little more limited than the other ones, but I think either you have a big company that is willing to pay for that kind of software for you, or you just go free with something like Google Data Studio.
And what I like about this is that it is online and has great integration with other Google sources such as BigQuery — so Google databases — and Google Sheets; it is all integrated. It is also code-free, which is good for people who are not that comfortable with R and programming — of course you can also build dashboards with programming and R, as we saw before, but maybe some people don't have those technical skills, or you want people without those skills to work on the dashboard — and this has a really friendly, easy user interface, so everyone can access it and learn how to use it in a short amount of time. It has some cons: the graphs and visualizations are fixed and there is a limited selection — there is no word cloud, for instance, just basic charts — so if you want to do something complex maybe this is not the right direction. And it can be slow sometimes; that actually depends, I hope today it will go well, it probably depends on the Google servers — there are times when it runs really smoothly and other times when it is super slow and freezes, but you just have to refresh the page and it will work. Lastly, before going deeper into Google Data Studio, I just wanted to tell you about the relational data model. This is just some really simple informatics basics, because this software, and the other visualization software, are based on connections between tables: you have to provide tables of data as input and connect them through keys. Here is an example with the files we will be using. We have the first table, which is the reference one, and this table contains the information about the article, so each row is one article collected from the literature. This column — the blue one here — is called the primary key, because it is unique (there are no duplicates and no nulls), so it identifies the table. Each table has this article ID as a key, and it can be connected to other tables, such as the one containing data about covariates, through the same field, which in the second table is called the foreign key, because it is the field of the second table that connects it to the first one. You make these connections using relational algebra; the most commonly used join is the left join, so we will just look at that one. A left join means that you take the first table and connect it to the second one: all the rows of the first table are kept, and only the rows of the second one that correspond to the first table are kept. So here, for example, there is no article named A2 in the second table, so that article simply gets no covariate rows in the result. This is just to let you know how these joins work, because we will see them later. Okay, so now I think we are ready to go and see how this works. What I did for this part is divide it into five steps, and for each step I recorded the screen while I was doing things, so I can show you what I recorded; they are really simple steps to guide you through the use of this tool, and I can stop the video so we can talk about any part that wasn't clear — just let me know if it is okay or not. Okay, the first step: starting the report. This is the home page of Google Data Studio when you access it. There are some predefined templates by Google, but I always prefer to start from scratch, so you can define your own graphs and the graphical aspect of your dashboard. So we needed
to click on blank report here and these opens the this pop-up that allows you to import data so we're working with google sheets but there are a lot of connectors that can be used they are prepared by google or by some other collaborators and we need to pick on google sheets to import our data okay so this opens your list of spreadsheets you should find the one you copied here under the spreadsheet tab you just click on that which is the covariance extraction table and one one drawback of google data studio is that when you're working with google sheets this is not nice but unfortunately you have to import each sheet one by one so we have to just go through with that the first one is the one called ref which is the reference sheet with all the information about the articles so just click on that click on add and this is automatically added to the to the blank page click on add to report okay so and this is the place where we will build our dashboard so uh wait so a little bit it's free is them it's thinking but what you see there is just a blank page and a table pops up by default so you can just cancel it because we don't need it at the moment yeah okay so we have one one table by one yeah exactly so this is the data source yeah exactly and then the second one and then you click one by one on the the sheet you're interested so the first one is the reference one called ref okay okay so by default the table appears we don't need that so you can just click on that and then cancel it and we will work on graphs later on and so this is the the thing that appears when you when you go to the data studio and here on the top there's just some basic functionalities such as adding data adding the chart controls such as filters and so on and then on the right you can see the data you imported so we have the the reference table here and all the available fields which are the columns of our file so and this is the first the first source we added we have to do this with the other ones too so to add the data sources you have multiple choices you can either click on this thumbnail here named add data or go to resource and manage it edit data sources so you click on that and you add well you can see what you have already added which is the reference table and you added the second one so you click on add again on google sheets again on the x05 we need to use and okay so the second one is the one named the old cover because I put all of the covariance data into that and the third one is the one named the geo chord which is the one with the geographical coordinates so you have to import all of those files yeah I always say that that appears because google wants you to to tell you okay these are the data you imported and you can see some examples here but it's not really informative and you can do it better later so just delete the table that appears so we need to import under resource manage data sources the geo what is the geo the geo yes exactly okay next okay so if you manage to add the data sources the three tables we needed to add are the ones named the ref the one named the old cover that contains all of the covariance data and the one named the geo chord so here just to summarize you can click on resource and manage edit data sources to import the data sources and manage the one you already added and here on the right you have the data tab where you can find the sources and fields that are available in each connection so one thing we will need to do because we will create a map is to change some data 
types, because the data you imported have by default either a text or a number type, but you can change them and make them suitable for the graphs you want to make. You have to click on Resource, Manage added data sources; here on top you can find the three sheets you imported, and for the map we have to change the country field to a geographical type. So you go to this sheet, click on edit, and when you click on edit this is what happens: you have to click on the country field and change the type to Country. You will see that by default it has the text type, and you can change it to Country; then just click on done and you are set — this variable now has the Country type. Yeah, I didn't actually record that part for this tutorial, otherwise you could set it to whatever spatial information you need. Anything else we have to change? No, this is the only one, but you can play with data types and change them, because sometimes there are value fields that are imported as text but you want to compute calculations on them, so you have to change them back to numbers — and that's it. Okay, now that we have the three data sources we have to blend them, connect them, because right now they are just separate tables. To do this you click on Resource and Manage blends — blends are what Data Studio calls connections between tables. You click there and this is the view you get — this is after I added all three tables, you will see it better in a while — you see the three tables and you have to set up the joins, which is this little block here in the middle. To configure the joins I just click on it, this pop-up opens, and this is what I told you before about the types of join: we will just use the left join, which is the most common one, and if the key fields have the same name Data Studio will set them up by itself, you don't need to do anything more — just click on save and now the tables are joined. You can see this here: Resource, Manage blends, click on edit blend, and you see the first table — actually the order of the tables doesn't matter in this case — so you import all three tables, the first one, the second one and the third one. One thing I didn't tell you is that you have to drag and drop the fields you are interested in under the dimension tab, otherwise they will not be included in the join; so just drag and drop all the fields you are interested in, which in this case is basically all of them. What this does is take these fields from the three tables and put them all together in one big table, let's call it that. If you have any questions or issues just talk to me and ask me. Question: was the covariate data extracted manually, or through some tool, and how? It was extracted manually, because when it comes to covariates sometimes you find them in a table but sometimes they are just written in the results or discussion section, so we had to do it manually, and we had four people working on that. It was a bit time-consuming — it would be amazing to set up an algorithm to do this, maybe that is space for further research because it would help a lot — but we had to do it manually.
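For intuition, the blend configured above is essentially a left join on the article ID. Here is the same operation expressed in R with dplyr on tiny made-up tables that mirror the sheets described earlier; this is illustrative only — Data Studio performs this internally when you configure the blend.

# Illustrative only: the left join that the Data Studio blend performs.
library(dplyr)

ref <- tibble::tibble(
  article_id = c("A1", "A2", "A3"),
  author     = c("Rossi", "Dagostin", "Smith"),
  year       = c(2018, 2020, 2021)
)

covariates <- tibble::tibble(
  article_id = c("A1", "A1", "A3"),
  cov_type   = c("environmental", "climatic", "human"),
  cov_group  = c("forest", "temperature", "population")
)

# keep every article from 'ref'; articles with no covariate rows (A2) get NAs
blended <- left_join(ref, covariates, by = "article_id")
blended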
Okay, so you just go — the first one on the left is the ref, first is the ref, then you add the others. Yeah, the order in this case doesn't matter: all the tables have the article ID, so they can all be associated with one another. Usually, though, the first one is the most important one — the first one is the one you don't want to lose data from — so for us it would be the reference table. Okay, so once you've imported all the fields under the dimension tab you just click on configure join and save it; this should appear by itself, so the article ID column will join the two tables. Just by drag and drop? Yes. There's no quicker way? No, unfortunately not, this part is a bit tedious. And we only add dimensions? Yes, dimensions, because these are categorical fields — we are working with metadata, with information. It could be that, like in the dashboard we saw before, you want to sum up some numerical value; in that case you would put it under metric and compute a calculation, but that is not the case here. Okay, thank you. Yes — yes, I included all of them, yes. Yeah, on the country, because that is what we are interested in. But then when you build your own dashboard you can choose which fields you want to see; in this case just watch. Okay, so now we have imported all of our data, and the next step is the fun stuff, I'd say, because now we add the charts and visualizations and we can play with them. — This thing, I cannot save it; when I do the blend I cannot save it for some reason. — Maybe you're missing something. If you cannot save the blend it may be that you are missing something, such as the join. You have to go — I'll show you again — you have to go here, configure join, and just click on save; it should appear like this. Yes, like this, with the article ID, yeah — because if you don't set up the join then you cannot save the blend, because it doesn't know which fields of the tables it should join on. Okay. So, with respect to charts, they are under the Add a chart button, and once you insert a chart, on the right you see two tabs: Data, which is the data that goes into the chart, and Style, where you can style it — colors and so on. We will now create a map, a really basic one, and to do this you have to provide three pieces of information to Data Studio: a geographic dimension — this is the one we set up earlier, the country; if you didn't do it just let me know and we can go through it again — a metric, which is the value by which we will classify the countries in this case, and the zoom area. So let's insert the map: you just click on Add a chart, you see multiple options, and I always go with the Geo chart because I find it more stable than the Google Maps one — so just click on Geo chart down here, and you can add visualizations by dragging and dropping them. Once you insert the visualization, Google by default tries to fill in the data fields here on the right, but you can set them according to what you want. It recognized the geo dimension as country, so it put that in by itself, and that is correct; but as a metric we don't want this value, because it doesn't say much — we want to classify the map according to the number of articles, let's say the number of articles per country.
So to do this you just have to drag and drop the figure is to think or just click on it and select it. So you can click on article ID and google by itself computes the metric so you just have to pass it to the field over which you want to to compute the metric and in this case we put here article ID and what this function does is compute that count distinct this means that it counts the number of particles per country but without duplicates so this is just a single number of particles for each country. Where do you see that she's doing the sum? It's this if you look in the blue spot where there's in this case article ID on the left there's this CTD which means count distinct but you can see the formula because it comes to this place so if you click on it you might change the calculation type. That's not a problem just when you create the blender you just don't have the of what paper or you can just delete it from there. Yeah you see there are all the different calculations that can be done. Yeah no there are if you go to zoom area there's a world though western apricot middle apricot you can zoom it to whichever part of the world you want and if you click to europe you can zoom it to Europe which was our focus for our research. Where is the zoom? Under the data there's a zoom area if you scroll down a little bit. Yes and you select it from there. You said we could also add two counts three. Yes. Yeah but that depends because we are a little bit counted but if we had some more specific dimension you would be able to see them. The zoom would be Europe but you would be able to see Jessica by now. There's a better angle of zoom. Yeah that's the lowest level of zoom you can do. Okay so this is the map next we might want to add a table. So just go to add a chart select table and insert it and you can play with it. You can add whichever fields you would like to add and what we did we wanted to show in this table all the covariates that we extracted. So you can change the fields that you put into the table by dragging and dropping them or just by clicking on as dimension and selecting the ones you're interested in. So you can add the article avi or surname type of covariate, detail of covariates whichever you want you can change it and build your own table actually. You don't have to do it exactly like mine and this is just a way to to see your data source in a row format. Okay and one other interesting thing that you can do with tables and such is that you can compute your own fields or change the information you see in the field. For example here we have this null value in the detail of covariates column but this is not nice to see let's say so we might want to compute a new field without the null values and you can do that by clicking on create field when you try to add a dimension you can create a new one. So if you go to create field you can name it as you please and then you can insert a formula to compute a new field and the formula I put here was that if the detail was null I just put the blank in it and then this is just a normal statement as you can see in itself and if the detail was not null I just wanted the value of that field. This is just to show you that you can play with the fields and change the information you see inside it. Okay. Let me know if you have questions. We have time. Okay. I'm afraid to exit the presentation. Oh there was one but it was 30 minutes ago unfortunately. 
It was can you show again how to import the second sheet but you just have to go through the same procedure. So just click on add data and go through it again. But anyway this presentation this tutorial will be shared with you as a video I think. I don't think for the table it might be 500 if I'm not mistaken there's a limit for sure and I think it's 500. Yeah. No no for rules to put into a table. So yeah I don't know there's no limit on the on the main sheet. I checked the second dimension. Yeah I'll show it. Yeah I'll show it. Okay. So I'd say that now we have a table. We can play with it and we can now import a bar chart. So I will go through this later. Just click on add a chart and you can select the one with this symbol which is the horizontal bar chart because now what we want to do is to have a grasp of how many types of covariates we found. So if you add the bar chart. So yeah by default we will put a dimension into it and we can classify it by the category of covariates. So we have the bar chart and we can just drag and drop the dimension we're interested in. And again you can customize it and put the dimension you want into it. You don't have to do exactly what I did here. Here I just took the covariate group and put it into as I mentioned. And so here I can see how many articles because the metric is article and this is by default how many articles per covariate were found. So this gives an idea of how important was this covariate for this specific disease. So but one thing that happens here is that we have all covariates put together. So I can see that we have climatic variables such as temperature, precipitation and then deer which is a host wealth which is human related and this is not really nice to read. So we can duplicate this graph and filter it according to the to the aspect we're mostly interested in. Okay. Okay so now we want to filter this graph. And to do this you can see under the data part of Google here that there's a filter part where you can add a filter and so filter the visualization according to your needs. So this is really really simple to set up. You just have to click on add a filter. Okay. Okay this is what happens. Create filter pop up appears. You can name it according to your needs. And for example we want to show in this graph only the covariates that have the type environment. So this is how it is done. The function is include because we want to show to include this kind of data in the graph and the field name is cov type. That's how it's named in the underlying sheets. The relationship contains and then we have to write environment there because this is the column with the cov type and it is identified by environmental value. Okay. Okay. So once you set up the filter you just have to save it. And the graphs updates accordingly and now as we put the environment filter on this graph we see only the environmental variables here. So this is how filter works. Okay. And we can just copy just right click on the graph copy it and paste it. And these allows you to duplicate this graph and setting up different filters. You can see different types of covariates here. So you just need to click on the graph to copy it. Delete the filter and create a new one with the type you want to put into this graph. So for instance in this second graph I might want to put human covariates. So I just create a new filter named human drivers and then put the human value into it. Okay. Is everything all right? We have any questions? I think she got stuck with maybe. Okay. 
Okay. And we can repeat this procedure for the last graph as well. So just copy and paste it and change the filter for this graph. Okay. So this basically how filter works. And another full functionality of this type of visualizations is the drill down. The drill set we can move on to this. Let me know if you have questions if everything's okay. Okay. So drilling down. So this is a full functionality because it allows you to drill down actually to explore the graphs adding the different dimensions to it. So when you look at the data type tab on the right you see that you have the dimension which for this graph was code group but each type of covariate such as temperature includes other type of like at the lower level variables such as maybe the temperature of January or the temperature of November and so on. And you can find these kind of variables in papers and scientific articles. Temperatures doesn't give much as an indication and this is nice to have an idea of the macro class but if you want to really know the specific variables you have to go deeper let's say. So to do this you just need to add all the dimensions you're interested into and this goes from a higher level to a lower level. And for example you can drag and drop the covariate field into the dimension field and select the drill down option here is a blue button and this allows you to explore the graph and you can see it now how this works. So just activate the drill down put the dimension into it. And then you can see that if you click on the bar such as the temperature bar and then you click on the thumb pointing arrow here on top. You can see all the details of the temperature variable. So this is recognized then you can reformat it and make it look a little bit better but this is nice because it allows you to interact with the graphs in the visualizations and go from a higher level to a lower one within the same graph. So this also saves us space and it's pretty useful to have. And then when you click on some visualization the other change accordingly but you can reset the report just by clicking on the reset here on top and this resets the dashboard. Okay so we are almost at the end. The fun part is customizing your visualizations and graphs. So you can change the style of the visualizations because we just work with the data part of the tab but there's also the style one. This is an example of the chart we just saw. So if you click on to style you can change everything you want such as the colors, the labels, fonts and stuff and make it a little bit more visually appealing. So for example you can select the graphs, go into style, change the number of bars. I suggested to just play with it and discover it because you don't need to do exactly what I did and it's pretty user friendly. So well if you have just some questions just ask me but it's really it's like a word of excel so it's not that difficult to use. Now this is finished in port like a status. By longitude and latitude. It was a bit too awesome. Because you cannot import longitude longitude. See what we have to paste it into one column. No one column says longitude, longitude and then if you import it you get a bunch of columns. So I don't know how to do a long scale. The long scale I think you can work on the metric. You can build the one metric. Yeah I think that's good work. That's nice. Just a few clicks. Yeah there's a limit on the table you see. 
Okay so let's just keep this just one last thing I want to show you because we're almost done is the controls because it's useful to filter the data as you would do in maybe some spreadsheet files and so on. So you can add these controls here that they allow you to filter your dashboard depending on what you need. You can add for example date range controls and sliders or input box such as search functionalities. And what we will do now is to add a control. You can click it here on the top bar. We can go with the drop down list which is the most simple one and this allows you to filter the report. So to make it a little bit more interactive from the user point of view. So you drag and drop it and you add here in this bottom the column over which you want to filter data. So for example I could hear the response variable and if you click on it you can see that there are different response variables and you can select only one of that one of those. Yeah you can change the name of course. You can choose the name of the filter as you please. And if you click on one of these filters here all of the visualizations are filtered accordingly. So you can just try to put some controls and explore it and customize it as you please. So let's just watch again drop down this. These are the filter. Okay. And one last feature that I promise we are done with this tutorial is the track cross filtering because all charts interact with each other. So for example you click on one country all the visualizations get filtered according to that country. So you might not want this and you can deactivate it by clicking on the cross filtering function here under chart interaction. This is just on the same data part here just scroll down. You can find this and if you don't want these cross filtering between graphs you can deactivate it and you can also select group of graphs to be filtered together just by selecting them and doing the right click and grouping them. So if you group graphs then they will be cross filtered. If not you can just deactivate it. Okay. Let's keep this. Okay. And so finally how to see what you've done and how to share this report because this is really easy to share. You just have to click on the share button up and you can either add people with their emails or manage access. If you go to manage access you can find the link of the dashboard. You can specify either if only specific people can access or if everything with the link you can see it. You can copy it and you can share with whoever you want. And to get a view of the dashboard you just have to click on view up here. And then the three dots present and you can see a final version of the dashboard and how works. And I can share this link with you so you can have this and you can explore it and see how how the visualizations were made and play with it. So don't worry maybe you you didn't manage to follow all of this because I know it's it's allowed to take it. But I will share the link and we will have it so you will have this and you can play with it. And finally just some best practice when you create these kind of instruments. It is important to keep it simple. So this is one of those fields where that is more. You don't want it to be too over complicated or the white people get lost. And you have to choose the right charts so the information is not presented clearly and you just have confusion. The colors should be used strategically so you can just maybe color according to some categories or according to the values. 
And also it is important to identify your audience because people need to be able to understand what you do and what you share with them. So it depends on what your target audience is. Are there technical people or not? So you have to to build this accordingly. And I'd say we can stop here so we have maybe some minutes if you have any questions or if there are aspects you'd like me to repeat and to go through again together. So I think I can share the question or the name. Okay. Do you know how long this data studio has been going? Yes, it's been seven years. Okay. So it's two. Yeah. It might be. But this is a free source. Yeah. Yeah. Also the other data visualization software such as Power BI, it could sense Tableau. We are proprietary software that make up with much more features. But you have to go there. There's a tableau software in public. Yeah. It's supposed to meet a public criteria. But 100,000 rows come to it. Where to? No, you don't have to call the tableau either. There's the user interface. But from my experience, the other software, the proprietary ones, are a little bit more complicated. So they have more features, more options. But it's trickier. Yeah. No. Yeah. So I think that data studio, they're keeping it pretty simple. I think they wanted to be widespread probably for the moment. We never know if we will want to see my changes in next years. Okay. Okay. Where in the math? You have to add a dimension. I think it's done. Thank you everybody. Thank you very much.
This lecture, Francesca Dagostin (Fondazione Edmund Mach, Italy) gave an overview of how to extract relevant information from published literature, with a special focus on metadata related to covariates affecting disease emergence. Since data retrieved from literature are often complex and tricky to explore, the practical session showed the participants how to organize them into relational tables in order to build customizable and ready-to-share dashboards, which allow to efficiently visualize and summarize the information collected.
10.5446/13751 (DOI)
Thank you for the invitation. I'm glad to be here talking about how to implement some reproducible practices, in R in particular, but as we'll see many of these ideas are also applicable to other languages like Python. So let me start, switching slides, by putting some context around reproducible research, what we are talking about. I will just mention this publication by Roger Peng, which establishes a conceptual framework for what reproducible research is, and extract the main idea, which is to distinguish it from the original, let's say classical, situation — I can point to it here — where we used to have publications, manuscripts, papers, in which methods and results are described in text, but nobody could actually access the real data or the procedures that were followed; you had to believe that the methods described were correct, that it was done as stated, that the results actually came from those procedures, and so on. There is the process of peer review, of course, which guarantees at least some completeness in the description of the methodology. But today, when experiments and research are done more and more on computers, we have the possibility of making a much more thorough audit of what has been done. Of course, the gold standard, the ideal, would be to be able to fully replicate an experiment: with another set of data, perform the same methodology and get the same results, the same answers to the scientific questions. But that is not always possible, because sometimes experiments are very expensive. So we have a full spectrum, a continuum in between, where we can share more and more specific details about what we have done in our research: sharing the data, or simulated or similar data that allow getting the same answers; sharing the code; or, even better, sharing code linked in such a way that you can execute it online, without having to set up the full experiment in your own lab. So there is a full continuum here, and essentially what we are interested in is making sure that we provide sufficient materials and information for reproducing the results. An important thing that is sometimes forgotten is that sufficient information is not only data and code, but also sharing it in a way that eases the process of verification, auditing and reproduction — making it easy for people to understand what you have done and to reproduce it. This is what we are aiming at today. And one comment: it is not only for other people that you want to do this. It is also for yourself, because sometimes you come back to an experiment that you did in the past, say six months or a year ago, and many people are unable to reproduce what they themselves have done: they don't remember why they chose to do this or that, they don't understand their own code. This is very common — there have been studies on the percentage of studies that the same researchers are able to reproduce themselves, and the percentage is very, very low. So today we are going to go through a number of tools that help in setting up an environment and practices for improving this situation. We are going to start very basic, with the R script.
And we are going to add some layers and some packages and tools that helps us improve this. If we could just start with these slides, you know, you made the slides also with R, right? Yes, the slides are done in R. In, yeah, we are going to talk about that when the moment comes. How the slides have been done. But of course, yes, they are done in R. You see Markdown, you talk about Markdown and they are easily put online and we'll see how we do that. So for the first part, I will just introduce a small case study. It doesn't matter much. It's just to have an excuse to think about it. And let's say that we have a script and that we don't compare some alternative confidence intervals for a ratio of variables. And we have some variables here. We have, let's say that we make a survey on household somewhere at some point in time. And we record the number of ducks that there are in each household and the number of people and whether the household is in urban or rural area. And well, one question might be, for example, whether this, the ratio between ducks and humans in a household is different from in both in different areas. So this is a look at the dataset of the survey. And let's prepare a small script to do this. And this is the initial script. Let's say we start with this. This is, well, it's not very bad, but it's quite understandable. So makes a few, well, here, it loads the data, which is in Excel, and it performs some cleanup. And then it makes some plots. It doesn't matter if you don't understand R, we are not going to look into the code very, very deeply, just to have an idea of what the script does. So it is a sequential set of instructions that perform some tasks, reads the data, makes some cleanup, some calculations. For instance, here, I compute the confidence interval as you see here from the mean of the ratios and so on. I make a table. Here, I compute a bootstrap confidence interval, which is a different way of computing confidence intervals and so on. That doesn't matter much. So I want to focus more on the structure than on what it does. Okay. So the first thing that I want to focus on is on code that only runs in your computer. This is a very common situation, and it happened to me all the time. Someone sent me a script. Okay, can you look? I have a problem with this script, and I don't know what's going on. Maybe you can help. Yeah, no problem. But then there is some code in the script that only runs in the other computer, but not in mine. Why? Because for instance, it uses absolute paths, the routes, the files. Well, maybe I'm not user as Faku, I'm not in Windows, I'm not in my documents. That's their documents. My documents are different. So I have to start changing the script to adapt it to my situation. And that's maybe not very problematic. I can find quite quickly where is the problem and fix it. But then I send back the script to the other person, and then it doesn't work for the other person. So she needs to start modifying things, or maybe I just point the problem and send it back. And then she says, oh, yes, but then I have this another problem and sends the script back, and then I have to fix the problems again. So that all that, maybe if it's once it's okay, but if it is repeated in time, it takes finally takes a lot of time, and it makes difficult the collaboration and interaction. It will be so much easier if we could work on a script that worked for both of us. 
And we just got focus on the problem itself and not on the all the side problems that arise from these things. So this is absolute pass then fires that are missing. Of course, they send me the script. And yes, there is a file missing. So he made back, then wait some time, file come back, and so on. That takes time and makes collaboration more complicated. And some practices that this is very common that people wish to do this and hope that everything is start fresh and clean up. But it is not true. You know, when you use these remove instruction from our, it will actually remove all every every objects from memory, but not necessarily all objects, some objects that are hidden remain options that are loaded remain packages that are loaded in memory remain environment variables remain. So many things that are there environment remain. So doing this is that's not guaranteed that everything starts fresh. So what can you do for ameliorating these things? First, the first action I would suggest is to use a proper organization of files in a one directory project directory for this project where every file is in there, data files, documents, code, everything is within one directory that he or she can bundle zip and send back to me and everything is in there. So that makes sure that all files are included. And also that I can start the session from this directory within this directory when you start are from from one directory, it becomes your working directory. And then you can use relative paths within that directory. So forget about absolute paths. You start a project within one directory. And then everything is related to these projects route. So that will work for both of us. So that's the first basic thing. Second basic thing is to be aware of code that runs only on your platform. That for my mean, for instance, operating system, okay, or operating system with a set of environment settings. And there are some packages that work only in windows or work only in minutes. There are not many, but a few like packages that deal with connection to databases, or with parallel computing, which, you know, parallel computing is quite different in Linux. Linux when I say Linux is also not, which is Linux in yet, and windows. So parallelization works different. So every package that works with parallel computing has some difference for windows and Linux. So, well, this is this life, but you have to be aware of that. And I guess the other thing that differs from platform to platform is the standard for encoding. The standard for encoding special characters, like accents or special symbols is UTF-8. But windows will decided that he wants to keep all the methods for encoding characters. So they use Latin or ASCII extensions from ASCII. So this causes some trouble when, for example, someone, well, code is text, okay, so if someone sends me some code that has special characters and variable names, then if I work in a different system, these special characters will not work for me, will be different characters, so everything is problematic. So what can we do for this? Well, use some good practices. Being aware of this is the first step. And then try to favor cross-platform functionality. So for instance, if I want to read Excel files, okay, I can read it with RODBC, which works for Windows systems. I have to have a system called ODBC install, which is Windows thing, but it doesn't work in other platforms. 
Or you can use a package like read Excel, which is actually cross-platform and works the same in both platforms. So being aware of these things helps in making it easier to share things and to collaborate. Then favor standards. If there is an international standard for something, then use it because it will make things easier for sharing, of course. So please use UTF-8 if possible. You can configure your RStudio or your Windows system to use UTF-8, which is the international standard for coding right now. And of course, avoid special characters or spaces in names of things, names of files, names of variables or objects, including special characters and spaces most of the time it works, but sometimes it doesn't and it causes problems. So it is better to just avoid it. This is a good practice. And there is a nice presentation about Jenny Bryan, which I recommend to, I won't go into this, it's just naming things and it's, oh, it seems I can't see it. Okay. Sorry. What have I done? So never mind. You can go through it. It's a nice presentation by Jenny Bryan that stresses the importance of having good practices and good procedures for choosing names of objects and files, which seems like a very trivial task, but it's very important and it causes a lot of problems. So another thing that you can do for improving reproducibility and helping others understand and read and reproduce your code is to have, take care of the code structure. Okay. So sometimes the scripts are very messy and you don't see things there that you don't know why they are there, what is their function at that point in the script. So let's, it takes some time, but it makes life so much easier in the future for yourself and for others that reading a code that it has a logical structure and it's easy to read that to understand. So one thing to do is to structure your code into sections, logical sections like set up things, load data, prepare your data, make your, your cleanup, your calculations, then your analysis, and then sum up your results. Use comments for explaining, well, for dividing sections and for explaining what are you doing and why are you doing things. Remove receivable code. Sometimes yes, there are some code that you have, just try something out and then you, you decide that it's not useful and you start with something else. Don't leave it there. Just remove it. So don't, that causes confusion there. Or maybe interactive calls. I just want to look at the data and put it into the code. That is something that is interactive and I want to do in the console, not in the script. You know, just leave in the script. Let's say the operations that are necessary for making the computations and to producing the results. Also, something that makes the code, the code much more readable is to put it, to make sure that it doesn't extend beyond 70 or 80 columns. So I can read it without sprawling right and left all the time, you know. And using meaningful object names. If you name a variable, um, Z, well, I, I will not know exactly what it means, you know, what it represents. Much better to use a meaningful names of like, well, this is the clean data set. Even if it's longer, it's better that you learn how to type a little bit faster and to use meaningful names, which make life much more easier or even repeating variable names for different things along the script. Oh, that is terrible. What this X means, what, what value it's holding at this point in the script, which is different from, from the original one. 
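As a small illustration of the first two practices mentioned above (project-relative paths and cross-platform packages), a script inside the project directory might begin like the sketch below; the folder layout and file name are invented for the ducks example:

```r
# Works for anyone who opens the project directory as their working directory.
library(readxl)

# Relative path: no reference to C:/Users/... or /home/... on a particular machine.
survey <- read_excel("data/duck_survey.xlsx")

# readxl reads .xlsx files the same way on Windows, Linux and macOS,
# unlike ODBC-based solutions that only work where ODBC is configured.
str(survey)
```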
So this is, for example, a possible final script, which I transformed from the previous one. So you see I have a little header here, which says, what is this script for? What does, who did it and when? And well, I define some sections, like packages that I'm going to use, setup options and things, um, set up some parameter values that I can, I might wish to change in the future. So rather than how close these parameter values in the code that are lost in function calls are difficult to find. So I can put them in the beginning, these are the parameters that I can eventually change. And then I load some data or simulate some data. And then I have my results. You see how it is much more easier to follow. This is in the right. Yes. Yes. So you can use different ideas for, for writing your code. And this package package, I never heard of it. It's like, Package Manager. Yeah, it doesn't matter. It's just for load. It's like library instead of doing library, library, library, library. Okay. I want to load all these packages and that's it. So it will install any package that is not present in your, in your repository. So a final recommendation is, it's about using functions. And this is very important. This is like a superpower. Using functions is so important because first it will implement the basic principle in computer science. Do not repeat yourself. If you are doing something more than once, you put it into a function and you call that function. So that makes the scripts more concise, shorter and easier to understand because you, well, I will show that. Concise and modular. You can, if you need to update something in the future, something that is not working, you want to improve your function, your, your, your calculation, you can improve it. You can modify your function once and not, you don't need to find every instance in your script that does this calculation that leads to, to, well, more time looking for that and also errors because maybe you forget something and you update your procedure somewhere. But not in the other place that, that leads to problems. And also, if you put it, if you put your procedure into a function, you can document the procedure. You can explain what, what that function does, why it does the way it does, what are the arguments, what is the, the expected output. You can test the function that you have coded it correctly. You don't introduce mistakes in the function. So putting it apart into a function helps improving the quality of the script of the work. And it's easier to read. For instance, let's say, let's take this, this chunk of code that is from the previous script. And here I use the clean data here. You see, I select some variables and I compute. Well, I say that I name why the variable, the ratio that talks to human ratio, which I'm interested in. And then I make some computations, including the, this confidence interval. And so you see that here I perform almost the same calculation in both lines, just with a sign difference here. It's much more readable to have it a function code instead. So I don't have to immediately understand that this is a confidence interval and how it works and why I use a square root of something here and what is it doing exactly. I'm not sure. Well, I just read confidence for a minute. And I understand immediately what it is doing. I don't need to know the internals of how it is computed. Well, I can go to the function and see how it is done if I want. 
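As a minimal sketch of the kind of documented helper function being described: the function name, the normal-approximation formula and the default level below are illustrative assumptions, not the speaker's exact code.

```r
# Confidence interval for a mean, based on the normal approximation.
# `x` is a numeric vector; `conf_level` is the confidence level (default 95%).
# Returns a named vector with the lower and upper bounds.
ci_mean <- function(x, conf_level = 0.95) {
  x  <- x[!is.na(x)]                      # drop missing values
  se <- sd(x) / sqrt(length(x))           # standard error of the mean
  z  <- qnorm(1 - (1 - conf_level) / 2)   # critical value, e.g. 1.96 for 95%
  c(lower = mean(x) - z * se,
    upper = mean(x) + z * se)
}

# In the script, the reader sees "confidence interval" at a glance,
# without having to parse square roots inline.
# (ratio_urban is a hypothetical vector of per-household ratios in urban areas.)
ci_urban <- ci_mean(ratio_urban)
```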
But if I'm looking at the code at that point, I just need to understand that a confidence interval is being computed there; I don't need to understand everything at the same time. Separating things helps understanding. Then I can have the function elsewhere and document it: I can say that this is a function for computing a confidence interval for a mean, and how it is computed, because it is one particular interval, not just any interval I came up with. And it has parameters: I can pass the confidence level as an argument instead of hard-coding 0.95 somewhere in the script. Much better. In summary, when you have a script: avoid code that only runs on your computer, and use relative paths within a project structure. One thing I didn't say when I mentioned that the remove function in R doesn't completely clean up the session: the correct way is to restart the session, close R and start it again; that's a fresh start. There can still be small differences, because people may have startup files that perform actions others don't have, so it's not exactly the same session for everyone, but it's better than the remove function. Then structure the code into clearly defined sections, remove unnecessary code, keep it organized and clean, and use comments. That's it, so let's pause for questions. There was a question about putting the data inside the project directory: that's fine for kilobytes or megabytes of text files, but what do you do with gigabytes that you cannot just pass around with the project? If you are talking about the data, then you keep it somewhere else; we will talk about that tomorrow, but for instance you put it on S3, where it is available as an HTTP service, and then you can do all the programming against it. So the data can live somewhere else, but then you have in your project structure everything you need to connect to the data, so there are no missing files. I'll show that. Any further questions about scripting in R? All right, let's move to more interesting stuff; maybe this was mostly trivial for you, but these are the most basic things you can do. Then a comment from the audience: Facundo, just one thing. When you share R scripts, people have different operating systems, but in R you can also check programmatically which operating system you are on, and even how much RAM the machine has, and decide not to start a computation if there is not enough memory. There is code for all of these checks, so you can make your script test the system instead of just running into error messages. Okay. Next I'm talking about renv, which is an R package, and let me explain the motivation. Any script typically depends on a few dozen packages, which in turn depend on other packages, and so on, so in the end you may be depending on 50 or 100 packages. When you send a script to someone who wants to run it, the first problem, apart from what we have already seen, is the missing packages: I don't have this package, I have to install it. That is not always frictionless.
Most of the time it works without problems, but sometimes you have these platform-dependent packages, or packages that need system libraries that are not present and have to be installed. And then you get version incompatibilities, because some installed package relies on something outdated that you need to update. That can be tricky; I can tell you I have spent hours fixing dependency problems like this. You can do it in the end, but it takes time, and it is always the initial hurdle, when we want to make it as easy as possible to share and reproduce code; these are the things we want to optimize. The other issue is that if you update a package that is outdated, you may now be using a different version than another project assumes, so you break that project's code and have to go and fix it too. It can get messy. This is what renv is designed to improve, and it's quite easy to use. Initially you initialize: you have a project directory with all your files and source code, and you initialize a local environment specifically for that project, so every package used in the project gets installed into a project-specific library. If a package is already installed for other projects in the same version, renv shares it, so it will not install the same version of dplyr fifty times in your life; disk space is optimized, but at the same time different projects can use different versions of dplyr or whatever. Once you initialize, it creates what is called a lock file, a file called renv.lock, which keeps track of every package you are using. You then install and remove things and work on your scripts as usual, and whenever you want to record the state of your library you take a snapshot: it crawls all the scripts in your project and records every package you are actually using. And eventually you can restore: if you want to go back to the recorded state of the project, you restore the library and everything returns to the recorded situation. That makes it easy for other people, because when you send them your project directory they receive the lock file with it; they initialize their own local library and it immediately installs the packages, in the versions used for that project, which is very handy. And for yourself it is also useful, because very commonly you go back to a project six months later and things don't work, because packages have moved on and the code you wrote before needs some fixing; typically not much, but it takes time to identify the problem, see how it worked before and how to fix it. This is how the lock file looks: it's just a text file saying which version of R I'm using, which repositories, and the list of packages below; you can see that for these slides I'm using, I don't know, many.
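A minimal sketch of this renv workflow as it would look in the R console; the calls below are the standard renv functions, and the comments describe the typical cycle rather than any particular project:

```r
install.packages("renv")

renv::init()      # create a project-local library and an renv.lock file
# ... install packages, edit scripts, run analyses as usual ...
renv::snapshot()  # record the packages (and versions) the project actually uses
renv::restore()   # reinstall exactly the recorded versions, e.g. on a collaborator's machine
```

In a collaborator's copy of the project, renv::restore() reads renv.lock and rebuilds the same library.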
These are all the packages that these slides depend on. I didn't list every package manually; it records dependencies of dependencies of dependencies, so it's recursive, and in the end it's something like 780 or 800 lines of packages, with the version of each package recorded. So that is the lock file. For collaboration you share the lock file; it lives inside the project directory, so it travels with the project, and collaborators just initialize their local library and it installs the appropriate version of every package. It also supports Python environments and packages, so it understands the Python world as well. There was a question about whether this has to run in the background: no, it's something you do interactively when you receive a project and want to initialize your library; it's not something that stays in the code, you just start the session and run it. On the Python interaction: there is an R package called reticulate which allows R to understand Python code and objects, so if you collaborate with people working in Python, or you yourself are more comfortable doing some things in Python and some in R, you can combine both and work with Python inside the RStudio environment. This summer we have a summer school on cross-pollination between Julia, Python and R, about building exactly these bridges: in a given workflow I want to use Julia for this, Python for that, and R for the rest, and then it's magic. Are there any questions before moving on to R Markdown? Okay, let's talk about R Markdown. What is the motivation? It's all about reporting and writing. In a typical workflow you have a script that does things with your functions and everything, and then you write a paper, a technical report or some slides. What do you do with the results? You copy and paste figures and tables into your report, for example an article in Word. That is fine, of course, but it takes time, and the worst part is when things get updated, which happens a lot: you fix things, you improve, you add something, and you want your report to stay up to date with the latest results without mistakes. So either you copy and paste everything into the report again each time, which takes a lot of time, or you try to guess what changed since the last version and update only that, but then you have to be right, and everyone makes mistakes. Ideally you want to make sure, automatically, that the report is up to date with the latest fixes. That is what R Markdown is for.
This is an implementation of what is known as literate programming, which is a very old concept from the 80s, but implemented for R. Essentially, rather than writing the report by hand, you write a program that, when compiled, produces the report. So whenever something changes in your code, you can just run a function or a script and the report is updated automatically. R Markdown is a document format. It's a text file, not binary, just like a script, but with some additional components, structured into different sections. It begins with a header written in a markup language called YAML, which holds metadata about the report: the title, the author, the date and so on, and what type of output you want for the document; in this case an HTML document, but it can be a Word document, a PDF, slides, many different types. Then in the main part of the document you have code chunks, shown in grey here, which are delimited by backticks, which is what Tom showed before in Mattermost, and in between you have the text, the manuscript you are writing. The idea is that you have your manuscript with the code that produces the results inside it, and you compile it by launching a procedure that produces the final report; in the final document the results appear in place of the code. Are you all familiar with R Markdown? Is anyone here a LaTeX user? It depends very much on the field: some communities are very used to it, and some journals for instance accept LaTeX or PDF documents, while others are more stuck in Word. For some journals there are templates; I will talk about how to write a paper this way: you write it in R Markdown and compile a PDF, and you can even leave all the R code out of the output. For example, if you want to make a plot, you specify it with code, and then if you change your data you don't redo anything, you just press compile and all the numbers update. I can't show it on this screen, but I'm transmitting the session through Zoom, so if you want to follow along you can connect to the Zoom session and we'll share screens there. Okay, so this is what it looks like. I'm using RStudio here, and I have my R Markdown document; you see the extension, .Rmd. I have my YAML header here, and below it starts the document itself, which has some code chunks: here I load some packages, set some parameters, and so on. And here there is the text, formatted in Markdown; that is why it is called R Markdown, because the text uses Markdown for formatting. This marker here means that this is a heading, I have something here that is bold, and I have some mathematical formulas here.
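A minimal sketch of what such an .Rmd source file can look like; the title, author, chunk contents and data file are invented for illustration:

````markdown
---
title: "Ducks per household: a small analysis"   # metadata in YAML
author: "Your Name"
output: pdf_document    # switch to html_document for a web page
---

# Introduction

Some **Markdown** text, with inline math like $\bar{y} \pm 1.96\, \mathrm{se}$.

```{r load-data, message = FALSE}
library(readxl)
survey <- read_excel("data/duck_survey.xlsx")   # hypothetical data file
```

The mean ducks-to-humans ratio is `r round(mean(survey$ducks / survey$humans), 2)`.
````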
And then I have some more chunks within other sections, and so on. What do I do with this? I just push a button and it compiles the document; I will show later what happens behind the scenes, but it produces a PDF. So I have my title, author, date, table of contents, introduction, some results, a description, equations, tables. I can show some code if I want; I can choose whether or not to show the code behind each result, and for manuscripts you typically don't show the code. At the end there are some figures, conclusions, references. So this is a manuscript, and it is a PDF document because I decided that in the YAML header: you see I'm using pdf_document2, which comes from a package called bookdown; anyway, it's one type of output for the Markdown document. You can also use templates: here I'm using a template for CIRAD which I developed, an institutional template that just adds the logo here. That makes it very easy, if you want to produce the same document for different purposes, to just change the template and have it look different; or I can say that instead of a PDF document I want an HTML document, and, I hope it works, right, there I have it in HTML format, and I can put it on the web and share it online. So this is how it works. Then there was a question: many of you don't use Linux, right, so tell us which operating system you use and why, just briefly. I use Linux Mint, which is a derivative of Ubuntu. I switched to Linux in 2009, for my PhD, and the reason was that at the time, every time I wanted to do something I had to download a program to let me do it, and often hunt for a licence or a key somewhere, because free software was not so commonly available; it took a lot of time, sometimes for very simple things. Linux has everything, and you can automate many tasks, because the system itself is designed for automating tasks and for combining multiple tools. So when you have this interest in reproducibility and in making things efficient and automated, it's a convenient operating system for programming, essentially. Does that answer your question? I use Mint, but that's mainly an aesthetic preference; you have your desktop, your menu and your applications like in Windows, so you can use it just like Windows, but you can also have a terminal open and code things quickly. I will show some examples later in the talk of how I use the system to improve reproducibility. There was also a question about Overleaf: I think the feature that lets you connect via Git, so that you work on your files locally and just update them online through Git, is a premium feature, but I'm not sure. We will talk about Git later. So you could work that way with the projects you use.
Well, with Overleaf the source is LaTeX, not Markdown. I haven't seen a system like that for online collaboration with R Markdown; I haven't seen it, but I would very much like to. Usually you use GitLab, and then you build the paper. Yes, that is what I do; I will show that later, and we will talk about Git and GitLab later. So, to sum up: I use RStudio, but you can use other systems, like Visual Studio Code, Vim, Emacs or whatever; they all integrate R Markdown in one way or another. With RStudio it is very well integrated: you can just go to File, New File, R Markdown, and you get a template to start with, with the demo header, and you just update your metadata, your code and everything. It's very easy to get started. You have seen how I rendered the document, just a button click, but you can also use the command line: if you don't use RStudio you don't have the button, so you call the render function from the rmarkdown package. Besides PDFs you can produce presentation slides (these slides are done in another tool that I will mention later, but it's almost R Markdown) and websites. For instance, I did a training course on statistics and built the whole website for the course in R Markdown, with menus, sections, the slides and everything in there; and the nice thing is that if you want to update an example, you update a little bit of code and you are sure that everything is up to date, nothing is left behind. You can do dashboards, books and more; there are many things you can do with R Markdown. And it supports R, Python and other languages (SQL, Julia, Bash), so you can use code from different languages as well; it is not limited to R. I won't go through them all, but I put some links to examples here. This website is the course I was telling you about, on statistics; it is online, in French, and you have all the sections, the program, the slides and everything, also done in R Markdown. For the website I combine it with another package, which is distill in this case; sometimes you have to use other packages to provide additional functionality, but that's not a problem in principle. Okay, a brief overview of how it works. You start with your R Markdown document, and when you push the button, or render the document, it goes through a series of steps. The first one uses a package called knitr, which executes the code, only the code within the chunks, in sequence, as in a script, and produces the results: the objects, the figures, the tables and so on. That produces a document in Markdown, which contains the original text plus the results of your computations. And then it uses another program, called pandoc, which is a converter between formats.
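From the console, that whole knitr-then-pandoc pipeline is triggered by a single call; a minimal sketch, where the file name is an example and PDF output assumes a LaTeX installation is available:

```r
library(rmarkdown)

# Knit the chunks with knitr, then let pandoc convert the result to PDF.
render("report.Rmd", output_format = "pdf_document")

# The same source can be rendered to another format without touching it.
render("report.Rmd", output_format = "html_document")
```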
Pandoc then turns the Markdown document into whatever you want: HTML, PDF, etc. So that is the series of steps going on behind the scenes when you push the button. A word about Markdown, which Tom already highlighted: it's a lightweight markup language used for writing the text. It has multiple variants and extensions for different systems and purposes; there is some functionality specific to Mattermost, for instance making references to threads or to people, which is allowed in the Markdown adapted to Mattermost, and R Markdown has its own variations, but the basic formatting is common to every variant. This is a demonstration of how it works: how you do italics and bold, how you write code, how you use links, LaTeX equations, how you add bibliographies and so on; there are no figures here, but figures too. So it is a very basic, lightweight markup language, a way of writing that you can hardly make easier. That is the point: you can use other markup languages like HTML or LaTeX, which are very powerful and let you do everything, but their source is very heavy in markup; markup takes up an important percentage of everything you write, and writing it manually means a lot of typing. Markdown is not that powerful, but it's very easy to type and to read; you can almost read it as if it were already formatted. Then there is pandoc, which is a universal markup converter: it converts everything to everything, Markdown to HTML or PDF, LaTeX to Markdown, and so on; I should have included a link. You can, for example, convert a document from Markdown to Microsoft Word, or to rich text; I'm not sure about converting from PDF to Word, because PDF is a binary format and not completely standard, but it is a universal system for converting between formats: all the lightweight markup formats, word-processor formats (doc, rich text format, OpenOffice), notebooks, HTML, ebooks, documentation formats, wikis. It's very interesting software. It is used under the hood when you use R Markdown, but of course you can also use it standalone; I sometimes write a letter or something in Markdown and just convert it to PDF with pandoc before sending it. It's very easy, it's a command-line tool. I don't know how it works on Windows, there must be something, but this is one of the things I like about Linux: software like this is very easy to install, it's always in the repositories, I can install it in under a minute without hunting for downloads, because installing open-source packages is just easier in Linux. But then your projector doesn't work, yes, well.
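For that kind of standalone use, the conversion is a one-liner on the command line; called from R it could look like the sketch below, where letter.md is an invented file name and PDF output again assumes a LaTeX engine is installed:

```r
# Equivalent to running `pandoc letter.md -o letter.pdf` in a terminal.
system2("pandoc", args = c("letter.md", "-o", "letter.pdf"))
```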
Well, what if you want to write scientific articles using Markdown or R Markdown? This stretches the format a bit, because Markdown as it is does not support complex things like cross-references between figures and equations, or things like that. So there are extensions that allow Markdown to be used for writing a paper, and the rticles package is one of them: it provides a suite of journal templates and additions to Markdown so that you can have the authors, affiliations, the abstract, all these more academic elements. There was a question about Sweave: wasn't it the ancestor of knitr, the part that takes the code, executes it and substitutes the results into the text? Yes, it's just the evolution of the same idea. The person asking said they were tempted, for papers, to go back to Sweave, but wondered whether it is still well supported; I'm not sure, it's probably not that maintained anymore, maybe the effort is shifting towards R Markdown, and as things go that will keep getting better. People still use Sweave, but it's fading out; everybody is moving to R Markdown, although the Journal of Statistical Software, for example, still uses it. It's very similar anyway: instead of backticks you use angle brackets for the chunks, and knitr has more options, for caching parts of the results for instance, or for showing and displaying code in one way or another; it's more modern and probably has more options. How am I doing on time? I will skip this demo, otherwise I'm afraid I won't make it to the end; I have already done a demonstration of R Markdown, and the idea was to show how I converted my original script, the one from the beginning, into R Markdown. Let me give you some references instead. This is the main site of R Markdown, and the site of knitr, which is the tool that runs the code and has many options; it's worth a look, because when you start using it you quickly find you need something more, for example showing only the first two lines of code of a chunk: you go to the site and you see how it's done, you can do essentially everything. I put this into an R Markdown tutorial; we don't have time to make a tutorial of everything here, just an overview, but these are tools with a lot behind them, and it is worth taking some time to go through some tutorials if you are going to use them. There is some nice material for a three-hour workshop from Nicholas, R Markdown for scientists in particular, and a couple of reference books for R Markdown and bookdown, which is another package that extends R Markdown and provides, for instance, cross-referencing capabilities and figure numbers, producing books and many other things. And a final word about yet another system, which is called Quarto. It is, let's say, the next generation of R Markdown, and the system I am using for the first time for these slides; essentially it's an evolution of R Markdown.
It's an standalone program, no need to call it from our it's not, it's no longer an our package is a standalone program. The advantage of that is that it supports in out of the box other tech other languages like Python, our Julia and observable observable whatever it is I don't know. But the nice thing is that if you work in Python, you don't have to have our installed to produce slides or you can use quarter and have this procedure using without using our own. And it's implemented within the program already all the improvements that all these extensions to our my down have. So cross referencing writing papers or scientific articles, figure numbers and everything that's that's all already already in there, you don't need to well look for extensions and and just, you know, use different parts is everything Yeah, what next generation of this, this program. And the nice thing, yes, another nice thing is that you can almost use the same arm around files that you have before. They will work with part of almost any modification. So it's very easy to switch. Do we have any comments or questions before moving on. So the questions like how do we now connect the first thought, which was like health data, you know, the problem is civilizations. How does this connect with our. And the reason why is the more than. And so it's the same problem that I mentioned when we are trying to produce a scientific paper, and we have to be coping and based on things from one side to the other. And we have to do it 100 times a week. That takes a lot of time. You will make mistakes. If you do it 100 times maybe two or three times will make a mistake. So you need to have a pipeline of going from the original data, counting the data, managing the data, you're having your results published. And the same for the paper goes for the dashboard or website that showcases some results. So, as you can see with these systems, you can produce websites or dashboards or whatever. And the nice thing is that you can have everything connected from the data, even the data collection, if the data comes from sensors or from public health institutes or something where you can have a script or functions that describe get the data from the sources and process the data and then you can calculate things until the end. This needs to be, this to be streamlined needs to be, I mean, it's good that it is like that. Otherwise, yeah, I will, I will discuss that later but otherwise things get very easily unmanaged. So we are using, we are using data and code and things from so many different sources. And we have to keep track of all the these combinations in your head, it's well, you will fail. At some point, you can't do it. You simply can't do it. I will show an example later. So, according to you how much if you notice, use this pipeline in your daily work and how much time it takes to change your, let's say, old style practices and how difficult it could be for us. Well, I don't know about epidemiologists in general, I will not bear to generalize, but in our unit, very little people use these things. There is some friction to change. And I think the most important friction is work. We can work with Excel. But okay, we can work with Excel. But work, work is a different story because a word is designed to work within it and to not connect with anything else. And that's a completely different philosophy. 
And this, this let's, let's call it Unix like philosophy which, which goes by using small specific tools that work very, very specific and do things very well and connect with each other very well. And so you can combine tools to produce results into a pipeline of whatever and do something. And it's like the conception of, we have everything here, this is the unit, your universe and you do everything here. And you don't go out and you don't connect with anyone and you don't need to be compatible or interoperable or anything like that. Well, that's a different philosophy, but I think when you work, when you go through towards open science practices and open data and open software as well. Well, the other philosophy is more appropriate. And to respond to your question. In our unit when I came. Well, nobody, maybe nobody used this and little by little, I will I start collaborating with people and trying to, well, to introduce these practices little by little. And well, for some people it's easier with some people is less easy. Okay, let me move to targets which is a very, very interesting step in this journey. The motivation is to well precisely pipeline the analysis in the sense that when you are using arm and down, and your start your project starts growing and gets a little bit more complex. And that necessarily using many different arm and down documents for instance let's say you have a document that does the descriptive analysis and scripted of your data, and then you have the computation heavy computations of modeling and so on then you have your report, maybe you have some slides and all these different documents. And then they want to use the same objects in the different because they are not able or the data. So you start getting the issue that you are pre computing things. Many times in different documents, the same object, you're doing your repeating things in different documents. And then, well, it's like with functions, you know you're, you try to not repeat yourself and try to do things once. And then it takes some time also to to compile, you have seen the demo it well takes a few seconds to say a few seconds is nothing. But then when you are doing an analysis, you are repeating a task and you are compiling those sorts of times a day. Okay, so these few seconds accumulate and starts being annoying. So you can be next day and you want to start over, you have to recompute all the objects again from the beginning, or recompile the document just to continue working that takes time. So we want to optimize that as well. And this is where targets comes. So you want to try to separate the computations, especially the heavy computations imagine you have an MCMC sample or some modern some big takes maybe minutes, you don't want to make the same computation that takes minutes, only for updating a comma in the text, you know, you don't want to recompute everything from the scratch, just for updating the text. So you want to separate the computations from the manuscript. And yeah, you can say, hey, but wasn't the idea of the arm and down to put together the computations and the manuscript. And what now we are wanting to separate. Yes, we are wanting to separate them again, but without the coping and paste. Okay, so without the process of having the results from a script and trying and manually putting them to a document. So we are going to do that automatically. 
The idea of targets is to have a more abstract layer on top of that, and to define a list of targets: objects to compute, things you want done. One of the targets of your analysis may be producing a report or slides, another may be loading the data, or fitting a model. The point is that these targets depend on other targets, and this layer computes all your interdependent targets and stores them locally in a directory, so that when something changes it does not need to recompute everything from scratch; it only recomputes the targets that need updating, because it keeps track of the dependencies between them. That is a little abstract, so I'll go slowly. This is the targets file of my original script for computing confidence intervals, and you see that we have a list of targets: each element of the list is a target. Each target has a name and a chunk of code that does something. This one, called data_file, is very easy: it is only a string, the name of a file. But this one, for instance, performs an action with it: it reads that file; you see that I call a function and refer to the previous target by name. Then I produce another target, another object, which is the clean data, and the action is to take the raw data and call a cleanup function that I have defined somewhere else. And then I can have the reports; this one is more involved: it is the report in HTML, it takes the R Markdown document and renders it, with a lot of options here, and I can have another target for a PDF document and produce both documents at the same time. Someone asked whether the functions being called are the same; yes, this is just a list in which I call functions and give each target a name. This is a file called _targets.R, which lives in the root directory of the project, and essentially it keeps track of all the dependencies between the different targets. So I have the data file here, some parameters, and the raw data, which is the object resulting from reading the data file; that is then used in the HTML report. You see I have targets for the reports in HTML and PDF, and the clean data. So this is a graph of dependencies between targets, and the main thing is that it keeps track of what is up to date and what is outdated. Say I change something in the simulation parameters, this target here: it will not need to update the raw data, because that is already computed and up to date; it will only update the targets downstream. That saves computation time: if I have a model that takes a few minutes to run, I don't want to rerun it for every edit of my text. How do you use this? Very easily: you install the targets package, and you can use the function use_targets() to create a template for this _targets.R script.
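The file on the slide is not reproduced here, but a minimal sketch of a _targets.R with that structure could look as follows; clean_ducks() and the file paths are invented, and tar_render() comes from the tarchetypes package:

```r
# _targets.R  (lives in the project root)
library(targets)
library(tarchetypes)   # provides tar_render() for R Markdown reports

tar_source("R/functions.R")   # functions such as clean_ducks() defined elsewhere

list(
  tar_target(data_file, "data/duck_survey.xlsx", format = "file"),
  tar_target(raw_data,   readxl::read_excel(data_file)),
  tar_target(clean_data, clean_ducks(raw_data)),            # hypothetical helper
  tar_render(report_html, "report.Rmd", output_format = "html_document")
)
```

Typical use from the console, under the same assumptions:

```r
targets::tar_make()         # (re)build only the targets that are outdated
targets::tar_visnetwork()   # inspect the dependency graph
# Inside report.Rmd, precomputed objects are loaded with targets::tar_load(clean_data)
```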
Then you run tar_make() to build the list of targets. And if you use a report, an R Markdown document, you don't compute the objects in the R Markdown document: you load the computed objects from your targets with the function tar_load(). So you start your R Markdown document by loading the results of your computed targets, and for the document you don't need to recompute every object every time; you can work on your documents separately and still have everything updated. It's also nice that when you come back the next day and want to resume your work, you just load everything, already computed, and you can continue working in a matter of seconds. It also provides parallel or remote computation of targets: some targets can be run on a computation server, for instance a high-performance computing server, and the rest of the targets can be run in parallel, those that are not in sequence of course; it can figure out how to parallelize the targets to make computations faster. Even more, each target is run in a separate R session with only the inputs it needs, to ensure reproducibility: anything else you may have in memory at the time of running the target will not affect the results, which is very important. And you can also create targets dynamically: you don't need to define every target one by one, you can write code that defines a list of targets. For instance, in an example I will show next, we participated in a challenge where we had data from different periods in time and wanted to produce analyses using the data only up to each period; that is repeated work, the same thing for different periods, so we can program the targets once and have them automatically created for each period. This is for scale, for when things get a little more complex and heavy. It's very interesting. I will skip the demo because I'm running out of time; I would very much like to show it, but it would take some time, so I will take short questions and move on. Yes, I will show it later; at the end I will provide a link to the template. Okay. So what do we do with Git? The idea is to integrate contributions from multiple collaborators, but also from yourself in the past, or from your different personalities: sometimes you read your own code and wonder, why did I do it that way? That happens; it was another you that did something, and you want to see when you did that and why. So Git keeps track of changes to your code, of the different versions, the evolution of your code. And since your manuscripts, your text, are also code here, you can use it for writing manuscripts or dashboards or whatever as well. Essentially it keeps the history of the project change by change; not letter by letter, of course, but as you work on your code you decide from time to time: okay, I fixed something, I wrote the introduction, I did something meaningful, and you record a version of your work. Each of these steps goes into the history. But then, of course, when you're working with different collaborators, everyone will want to contribute.
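As a sketch, that basic record-a-version cycle can also be driven from R with the gert package; many people use the command line or the RStudio Git pane instead, and the file name and commit message below are invented:

```r
library(gert)

git_init()                                        # turn the project folder into a Git repository
git_add("analysis.R")                             # stage a changed file
git_commit("Add bootstrap confidence interval")   # record a version with a message
git_push()                                        # share the commits on the remote (GitHub/GitLab)
git_pull()                                        # bring in collaborators' changes
```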
And sometimes they contribute to different sections and well this can is able to integrate the contributions from everyone without track changes. So it's like track changes but doing done right. So it will integrate automatically the contributions from multiple people. Sometimes of course, well let's say how it works in the best case scenario, you will initialize a repository in your project folder, get is a common line utility as well, but it has many interfaces to work with. Commits, which are the language for, yeah, snapshot of your project status at some point in time. And you write a little commit message which are these, these messages here that say what did you do here. You updated the interaction okay I updated the interaction so that's your commit message. So you can look at the history of your project and look for something. I say okay when did I do this, and you can find. Yeah, by context you can understand why did you do something. You can fix things you can remove comments and now I prefer not to do this I will change it somewhere else. So you can do all this kind of things. And go back to any point in history and say okay what was the status of this object at this point in time you go back, you can analyze everything at that point in time you can. You can of course, push your changes to a remote shared repository with where other people can take over and become and get your changes and introduce their own changes, and so on. And then you can pull the contributions from the remote server from the other collaborators and merge the their contributions to your own. So essentially goes like that. This is done with github which is one of these repositories remote server interfaces to get that are available. So one person, these two people have the same document here, but this person introduces a couple of changes and pushes their changes, and this person pulls the contributions from the other and gets his local repository updated. And from time to time, it's a conflict. Well, two people change the same paragraph or the same line of code in different ways. So what happens there. Well then, get when you are trying to merge the contributions from other people, it will warn you and say, Hey, there's a conflict here. This is your version. This is the other person's version. Tell me what to do. So you can edit, you can write what to what you want to keep. Maybe it's one of the versions, maybe it's a combination of both. And then it's okay continue. So, most of the things are automatic. The things that cannot be automated are taken care of. So our studio has also a basic interface for essential operations like this commit push, pull, and so on but there are other clients also I'm using get ahead for instance for making more complex stuff. And then there are the hosting services, which has to be the online services like it have good lab beat back it, that allow to get to push your, your, the contributions from everyone, and to share the results but also allowed to browse the project and edit the files online so everyone needs to have good install or to understand how it works. This is something that I do with with some of the colleagues in the unit and then people who are not familiar with it, but they are familiar. Well, they they are able to go online to a website and change a file and our math down file so they can read our math down is easy to read so you want to to fix something in the manuscript, you can go to this website and change the text. 
And you don't need R, you don't need git, you don't need anything: you just go online to where the file is and make a change to the manuscript. So this is one way I found that we can collaborate with people who do not use all this. You can also use these repositories for distributing your software or your research code, because they provide the capability of assigning a website to every project. So you can have, like what I showed you, the website for the statistics course, which is hosted online on GitLab, and GitLab provides a website for it. There is also functionality to handle issues and documentation: you can associate a wiki with a project, for instance, and you have a ticketing system, so people can ask questions, have threads, keep a list of tasks to do, things like that. So this is sometimes used for project management as well; it's very, very useful. And finally, they provide capabilities for testing and deploying automatically, in the sense that each time you push some changes to your repository you can have something happen, some actions or operations triggered, like testing. So you can run every test that you wrote before and make sure that everything works well, then build the website and publish it online, then build the package and put it online too, so people can download it for different versions of the operating system, and so on. These are operations that you can code and program to happen whenever you push content, even if it's just a fix to a comma in the text. I do have a demo, but I will skip it; I've run out of time. I provide some links to tutorials for git and GitLab. And, well, maybe I will go quickly through the last two parts. Very quickly, I wanted to go through Docker, which is another system, another layer for ensuring reproducibility. The last thing we want to make sure of is that the computation environment is the same. Even if you have the same package versions that I have, and all the files and everything, the operating system can be different, your environment variables are different, your compilers and linear algebra libraries and everything can be different, and Docker allows you to include all of this in the system. So you can reproduce the entire computing environment by producing Docker images that you can share with someone who doesn't even have R installed. They have to have Docker installed, but they can receive a file and reproduce your results without even having R on their system. So you create a Docker image which contains a whole operating system, a box with a Linux distribution inside, so you have a system inside your system. I have some more things prepared on that, but let me move on. And the last thing is Guix, which is a step further. I think it's beyond our interest here, but it needs to be mentioned, because even if you ship a Docker image, that image still relies on other images. Guix is yet another system which allows you to have an entirely reproducible system down to the last library: everything is versioned, even the C compiler, everything down to the last bit, and you can distribute things like that. This is the ultimate reproducibility. I haven't used it in real life and I don't think I will any time soon, but it's the best you can do, and I think for very crucial situations it's the way to go. I have to finish, right?
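For concreteness, here is a hedged sketch of the Docker idea just described. The base image (from the rocker project), the package list and the paths are assumptions for illustration, not the speaker's actual setup.

```dockerfile
# Illustrative sketch only: image, packages and paths are assumptions.
# Pin a fixed R version using a rocker project image.
FROM rocker/r-ver:4.2.2

# Install the packages the pipeline needs.
RUN R -e "install.packages(c('targets', 'rmarkdown'))"

# Copy the project in and run the whole pipeline when the container starts.
COPY . /home/project
WORKDIR /home/project
CMD ["R", "-e", "targets::tar_make()"]
```

Anyone with Docker can then build and run this image and obtain the same results, without installing R themselves.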
No, I'm almost done. Just a few conclusions. So the question people sometimes ask is: yeah, but does this save any time, or do you spend more time hacking things and trying to figure things out? Because you're using multiple systems and trying to combine them, it takes time, you have to learn to use them, and they generate problems as well: sometimes things don't work, you don't know why, you have to find out, and that also takes time. But it depends on what you want to spend your time on. Do you want to spend your time copying and pasting things, or do you want to spend your time fixing these pipelines so that everything works fine? Well, I prefer the latter, and I think that for open science it is required. Essentially, it improves the quality of the work: by programming things like this you make sure that, well, maybe your results are not correct (that's for the methods to say), but at least you are not making silly mistakes. These are instrumental tools for open science, making you able to share and to let others reproduce your results, and they allow scaling up when projects grow and get more complicated; you start to need these tools, otherwise it's impossible. I was going to show this ASF challenge, which was a collaboration with another person in the unit: it's 13,000 lines of code with hundreds of targets, and the dependency tree is here. Look at it. Each dot here is a target. This would have been absolutely impossible to do with scripts; it would have been crazy. So, any of these: well, it's up to you, it depends on your context and your needs. I think the first chapter, where we talked about how to make an R script reproducible, good practices and all that, is useful to everyone, and everyone can do it very easily. The rest, well, it depends. For the question that was in the chat: I have links to the project template that I use for starting any new project. If you send me a data file, I start a new project from the command line; that is how I use my Linux, for instance. I have a bash script, I type "new project" with a name, and it prepares my project directory with places for data, documents and code, with my targets template, everything ready so that I can start very quickly on a new project. I have my LaTeX template and everything. So feel free to use any of these or adapt them to suit your needs. And, well, I have no more time. Sorry. Thank you.
In this session, Facundo Muñoz (Cirad, France) describes tools and workflows to cumulatively improve the reproducibility of analyses performed in R. R is a mature, world-class, open-source statistical computing and data-analysis platform with a huge community of users from all areas of science and industry. Yet, most researchers rely only on its most basic scripting features, missing the opportunity to unleash its full potential, in particular concerning reproducible-research workflows. Specifically, we discuss encoding and platform-specific packages, the advantages of organising code into functions, using project-directories and relative paths, reproducible reports with RMarkdown, controlling package versions with Renv, organising code into a pipeline with targets, keeping track of changes from various collaborators with git, reproducibly publishing results with Continuous Integration in Git(Hu|La)b pages, reproducing the complete environment with docker, and controlling versions of the complete software stack with GNU Guix.
10.5446/13752 (DOI)
So I'm Tim, I'm a medical epidemiologist based at the Public Health Institute at the Finnish Institute for Health and Welfare, Terveiden Jarkivin Bonilators, which is the only finish I know. And I'm here with my colleague Hannah, while we're here here. And we will be telling you about the basics of surveillance and epidemic intelligence activities. And we will follow that with a focus on tick-borne and civilized surveillance, which is one task that Hannah is our national expert for. And I suggest we start right away. So for your information, I am not a data scientist. I am not a trained modeler. I am really an epidemiologist and working on surveillance. So we have a little bit of a different background, but I'm sure we're going to enjoy meeting each other. Excellent. So the aim of these days is that you understand epidemic intelligence and surveillance, including the differences between event-based surveillance and ED indicator-based surveillance systems. And I would also love it if by the end of this hour, because we only have one hour, 45 minutes on my end and 15 minutes on Hannah's hand, I would like you to know when not to trust surveillance indicators and or dashboards. Yeah, that's a little bit provocative, but I hate dashboards. And I've stolen great material from many, many great people, including Esther van Clef that you will all get to meet for the ACA firm. So surveillance definition is quite simple. It's the ongoing and systematic collection, analysis and interpretation of health-related data that is essential to planning, implementation and evaluation of public health practice. That definition says it all. It's information, systematic collection, analysis, interpretation for action, health data that we need to plan, implement and evaluate. Information for action. And that will be the main part of the lecture. You can all go for coffee if you wish now. If you remember that, you remember everything. So we have two types of surveillance systems. We have indicator-based surveillance and event-based surveillance. And I was thinking more of starting, sorry, there's a mix up with my slides, but so epidemic intelligence is the merging of two types of surveillance systems, the indicator-based one and the event-based surveillance system, which are two different ways of systematically collecting information. In indicator-based surveillance, you have a specific system that is designed for this and you measure indicators like rates, number of hospitalization, et cetera, et cetera. This is data that you will collect, analyze and interpret whether event-based surveillance system are the monitoring of any source, any report that could come up. It can be ad hoc reports. It can be media news, Google News, Google Alerts. And this is something that our colleagues from Paddyweb are working on and you will hear a lot more about the development of web scraping and even for event-based surveillance. So if we compare those two systems, with an indicator-based surveillance, you will be able to detect outbreak, assess trend, seasonality, burden and risk factors, while with an event-based surveillance, you will detect and locate potential threats at a quite early stage. Your information sources will be structured and are very trustworthy in an indicator-based surveillance because you have a case definition and it's even more credible when we have lab-confirmed cases that are being reported to your system. While on the other hand, in event-based surveillance, we're using unstructured stuff. 
It can be rumors, tweets, blogs. It can also be a phone call from a clinician and I'll tell you about that a little bit later. One of the key issues with event-based surveillance is that information will require verification. You can get some very, very early information, but it will always require information, potentially beyond because you have a very wide variety of sources. The timeliness of indicator-based surveillance can be an issue depending on the organization of a country. It is not uncommon that certain diseases are reported by every end of the month or just weekly. So you can have a delay. On the other hand, because even based surveillance comes from any type of unstructured information, you can have very, very early warning. The problem with indicator-based surveillance regarding what disease you will be looking into is that because you need to have a structured system, it will be mostly stuff that we know, stuff that we are looking for in usual practice. While event-based surveillance will allow you to catch stuff, new syndrome, events that would be caused by unknown diseases. And finally, you need an infrastructure when you have an indicator-based surveillance system. You need a health system. You need a reporting system. You need a system where the lab that would test patients and have a positive finding have a system to report to the public health institute or to the regional authority that is responsible for surveillance. And now we're back on track, sorry. One important thing when you talk to epidemic intelligence buffs is that will be those three words that you can see here, collect, analyze, interpret for indicator-based surveillance, whereas for even-based surveillance, you will be interested in capturing, filter, and validate. And that's the keyword. Collect, analyze, interpret for indicator-based surveillance. Detect, filter, validate for even-based surveillance. The idea is that you would use information from both these different type of sources so that you can analyze the event and assess a threat in order to decide on whether to implement control measures or not. Now once again, information for action. This is what epidemic intelligence and surveillance is for. The information. You'll be interested in knowing what a problem is or if there is a problem. It can be whether the, it's only about the extent of the burden of the disease. It can be questions on the pattern or the distribution of the disease, whether it's more commonly specific geographical area where you should maybe implement more control measures. It can be also about identifying exposures, looking at outcomes so that you can prepare your health capacities. It can also be used to describe the causative agent or a syndrome, the clinical severity, the typing of the pathogen, and this is why as part of surveillance, there is also some genomic additional surveillance for molecular epidemiology so that you can assess, for example, if you have more and more numerical disease cases to assess whether the vaccines you're currently using are covering the serotypes that are the most frequently circulating in your territory. With an indicator-based surveillance system, you can easily detect any change of the times so that you can see if there is an outbreak. 
If you have more cases that you should expect based on what you know and previously observed and we'll chat a little bit about it in the time series analysis this afternoon, you can look at longer time trends, geographical spread, so change in distribution we've mentioned, change in the causative agent, and also, and that's quite relevant, you can see there's a decrease in the incidence of the disease by using your indicator-based surveillance system, and that's quite important because you want to know if the control and prevention measures that you've implemented is working. So once again, information for action, I will be repeating that through the whole lecture I think. What action will you do thanks to the info you get? If you need to stop transmission, you will look into controlling individual cases and this is something that we did quite a lot with COVID. Test trace isolate, test trace isolate, that is control of individual cases. You can adapt or change your control measures, for example, switch to a different vaccine if it doesn't cover the serotypes that are the most frequently circulating anymore. You can plan health services, devise and revise a policy, and generate hypotheses to support research. And this is something, for example, that we are, that's a problem for most public health institutes. We gather a lot of data, we have all our indicator-based surveillance in most public health institutes, and sometimes we realize that we're not using it enough for research purpose because we do not have the time, we sometimes don't have the staff to look into that further. So this is research opportunity that is sometimes missed. So information for action was the very, very most important sentence to remember from this lecture. And this is the second most important slide. Indicator-based surveillance is about detecting cases, but you will typically only find what you look for. Depending on the severity of the disease, depending on the symptomatology, depending on access to healthcare, you might not detect all cases. And this is what we call the surveillance pyramid. Death is the only thing that's certain, like taxes. So if you are looking, if you, this, the tip, the top of the pyramid, the death and the hospital admissions, the hospitalized cases, are the ones you will be aware, be the most easily aware of. So depending on what you want to do with your surveillance system, you can focus on hospital-based surveillance. For example, if the disease is severe in most cases, but if you need to look into transmission in general population, you can also need to have a community-based surveillance system to identify cases that would be seen by a general practitioner. And then you have all those cases that you will never find. The community cases that are not seen by a general practitioner, the ones that were very, very mildly symptomatic or the ones that were asymptomatic. For these, you can use all the proxies to see whether there's transmission of the disease, like absences in school. And this is something that can be used, for example, during gastroenteritis season to assess how many, what is the current situation in the region. So, oh yeah, and I was planning on being on site. So unfortunately, I had some like, I was like, to use your mind to use your, to pick your brain for a second, but then it's going to be a little bit complicated or on Zoom, but you can, yeah. So if the disease is severe and we expect all cases to seek care, what type of surveillance system could you look into? 
You would actually look into a hospital-based surveillance system. Now if you would expect that most cases will present symptoms and seek care. And if you are not interested in stopping transmission chains, so if you do not need to have a test-trace-isolate strategy, you would rather go for a community-based surveillance and associated to your hospital-based surveillance, where you would detect the more severe cases. So the more severe the disease is, the more likely you will be to detect cases. And this is what I meant. If you need to identify all cases, then you will have to associate some screening to your surveillance systems. So I've told you I don't like dashboards. Why do I not like dashboard? For a very, very simple reason. Right here you have numbers, but you have numbers that are not telling you about testing strategies. What is country X or Y, suddenly reducing access to testing to cases that have less than X symptoms and have been vaccinated three times, like it has happened in some countries? Is access to testing the same in all those regions? Very likely not. So when you're using only those numbers, it is foolish to think that you can compare the COVID-19 situation in all those countries due to the fact that the public health system access to testing referral, etc., etc., are not the same. So you cannot proceed to comparison. Just like also over time, we saw a very sharp increase of COVID cases in early 2020 in China. From one day to another, there was a sevenfold increase. But this was due to the fact that the definition used in diagnosis was changed in order to capture more cases. So looking at that graph without this major information about the broader definition used is diagnosis will not be of any use. Monitoring of mortality is something that is conducted also for surveillance and that can be used quite easily due to how easy it is to detect mortality. We know when people die. And there are some initiatives like the flu-momo and the euro-momo, if you want to look into it, that actually run some time-series analysis and look at excess mortality during the usual influenza season, as in 2018, 2019. And then starting from 2020, there was another public health event of international concern that some of you might have heard of. And these type of systems for surveillance allows you to assess excess mortality, for example. There are several types within surveillance. We have some passive and some active surveillance. And it's important to keep that in mind. In the passive surveillance system, you have the data that comes from the lab, from the health community, to the public health institute. The reporting is quite often made by laboratory and primary care. Then you have active surveillance, when there is a specific data collection, that system that has to be set up. It is more costly and it will, for example, include active reminder to health care units, forms that are sent to them so that they do their reporting, or even an online system of automatic transmission of data. And I have this example of the, sorry, I lost the word, notifiable diseases. So in Finland, there are approximately 50 diseases. Whenever there is suspicion, the physician has to request microbiological confirmation. And from the lab, the lab reports to the national register directly. Only in 60% of cases in 2012, but now 100% in 2021. 
And so you see, once you set up a system like this one that was set up in the early 2000s, you have to update it, create, make it easier, progressively evaluate, make it work better. And this is also something, if I can share a more personal found, we use quite often, you know, surveillance public health institutes are doing aggregated data that's made available nowadays. And we quite often forget to acknowledge all the IT back office work that is made to have a surveillance system that works. For Finland, the current system, I think we have three full-time data managers working on maintaining it constantly. So the quality of the data also relies on the quality of the IT you have around here. And those people are very rarely acknowledged in publications, but that's just a personal thought. Trying to say something nice about data managers once in a while. And so this is an active system because there is a reminder system, an automated reminder that is sent to the treating physician because the treating physician also has to participate to the reporting of the notifiable diseases. So here you have two examples of data collected through indicator-based laboratory, so indicator-based active surveillance systems that are laboratory-based. And if you look at, so this is COVID-19 by week in France and COVID-19 cases by week in Finland, you can see waves that appear to be very, very similar, but just like previously, and if you do not have information on access to testing, testing strategy, everything, it is very, it is foolish to compare or interpret those curves. And so these are lab-based surveillance systems. You have other options. You can also use a syndromic system instead of a case-based system. It will allow you to do some detection before the diagnosis is fully made. It works for early detection. And you can use data from existing activity like GP consultations and emergency world visits. And you have the, so these are COVID-19 related hospitalizations that have been used in France to monitor the COVID-19 situation. And this is syndromic surveillance. And you can also have, and I will have another syndromic surveillance system example in one second, you can have sentinel systems where you will select some healthcare providers that are as relevant and as representative as possible for some frequent or less severe disease that will give you an idea of the current situation that you can extrapolate when you work afterwards with some data scientists to have an idea of what would be the situation on the whole country. And so this is a sentinel system as opposed to the exhaustive systems we've just talked earlier where the whole territory, whole country, all healthcare units are under surveillance. And this type of surveillance system is really useful, but it also has an impact on the, on how your findings can be generalized. And I think I have an example here. So this is, in France, there is a system called SOS medicine, which is like, like a doctor, calling a doctor hotline, and, and have an emergency consultation but not in a hospital setting. And these are the numbers of consultation for suspicion of COVID-19. So these are cases that are not confirmed yet. And using that, you can monitor a situation as a sentinel to see how, to see the trend and evolution of circulation of a disease. So using a sentinel surveillance system. So you cannot directly know the number of cases that occurred weekly, but you can know how many patients were seen for a suspicion of COVID-19 by this sentinel system. 
So, another example; now I'm switching back to examples of information for action. Using their surveillance system, our friends from England and Wales identified a strong increase over time of pertussis in the under-one-year-olds. You have to know that pertussis can be a very severe disease in very young kids. Using that information, they decided to act: information for action. This is when they decided to offer a pertussis jab during pregnancy to women in order to protect the kids; cocooning is what the strategy was called. Now, I've been a bit faster than I expected, and I think we're going to switch to Mattermost, because I have a very specific example. Let me think for one second; sorry for that mess. So in Finland, for example (and I didn't want to talk too much about event-based surveillance systems, because you will have some data-mining sessions and will learn more about that), we do not have an event-based surveillance system where we would monitor online news or things like that. Finland: people say it's not a country, it's a country club. Because it's quite costly to have an event-based surveillance system where you monitor the news, filter it and go through it, we do not really use one. We rely on other automated systems, like mailing lists, email systems such as ProMED, that give us information on events; we check them through our emails, and ProMED has already selected the items online. It can be dead dolphins on a beach in New Zealand, which is not really of relevance for us; and then it can be the detection of a cluster of gastroenteritis in a Russian town very close to the Finnish border, and then we will be interested in it. But we have some kind of hotline: a communicable disease doctor who is there to inform the clinicians and physicians throughout the country of any risk and anything they should be aware of. And that communicable disease doctor is also there to be consulted, if needed, on any threat that a clinician might detect. So this is something that happened in October 2019. An astute clinician from Turku, which is a town on the southwest coast of the country, called us and said that he had had to treat several cases of invasive pneumococcal disease among shipyard workers over the past week, because (maybe I should have mentioned it earlier) there is a very large shipyard in this area. So how would you treat this information? If we go back to epidemic intelligence, what would you consider this to be, if it's just a phone call from an astute clinician? Is it a signal from an event, something that has been detected and now requires to be filtered and validated? Or do you think this is more of an indicator? Indicator? Actually, in that setting, it's an event. This is something that happens quite often, that we get a call from a clinician. For example, we have some paediatricians who regularly call the institute saying, we've seen more and more cases in this age group in our hospitals; and then we look at our surveillance systems and we realise that there is actually a twofold decrease of cases in that age category, and it was just an impression. So this was an event. And, wait, what's that? So this was just an event. How would you describe this surveillance system?
The fact that we have a hotline for event signaling, well, now that we've said that it was an event, you understood that it's some kind of a passive event based surveillance. It's a hotline. The clinicians can call to report or ask questions, but we do not call them every week. We are not actively doing even based surveillance, as I've mentioned. Now what would be your next step? Knowing that you have that event and that you want to confirm it, besides the fact that you will look into the diagnosis through the determinations. And I think someone posted something in the chat. So someone suggested active surveillance at the shipyard. It's a suggestion from Ilona. That's a good idea. So you want to validate this information and what can you use? Someone suggested active surveillance at the shipyard. That's a good idea. Maybe before setting up a whole surveillance system, because this is a notifiable disease, you can use your register and then you're really, really, really happy that you have those data managers in that system that's currently working properly. And it's a very quick to run analysis. You restrict in your surveillance system. You look for the area of the shipyard, Turku. So in the Varsinais Swamy Hospital District. And because you know the population at risk, shipyard workers are 18 to 65 years old males, like males of working age. With a quick and dirty EPI curve, you see that there is a clear increase in those IPD cases. So what you've just done is confirming that there is an outbreak. And this is exactly how it happened. We got the call on a Friday afternoon. One hour later, we were looking into the register and we saw that. All right. That event is now validated and confirmed by our indicator based surveillance system. The next step as part of this outbreak investigation was to do some homework. We looked into invasive pneumococcal disease in shipyard workers, whether there was any literature on other IPD or about shipyard workers. And we found out some information on crowded environment in adequate ventilation, as well as exposure to metal fumes and smoking, which is quite common in males working doing that kind of work. And we did a bit of homework on what were the available vaccines, which I will not read all the serotypes covered by those vaccines that would be available for invasive pneumococcal disease. We had to work on a confirmed, on a case definition. So this is what we did. And that's exactly what actually Maria suggested in the matter most charts. Search for confirmed cases, identify location of cases to look for common source of infection. So we had those case definition because this is something you really need to work on. And we had to think about time, place and person. So they had to be in that area, it had been after the first of February 2019, and it had to be someone who had working at the shipyard. Following the use of the event-based and indicator-based surveillance system, we had to go a little bit deeper. We reviewed hospital records and all the lab notifications that made in that hospital districts. And we looked for people that had worked at the shipyard, so we had to call an interview. And this was followed by some lab investigation and whole genome sequencing. Once again, we have first information, we enhance, we look deeper, and then we act it. And that's when we had this very, very lovely description of the whole outbreak over time. 
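As a hedged illustration of the quick register check described above: the `register` object and every column name and value below are invented for illustration, not THL's actual data or code. The restriction and a rough epicurve could look like this in R.

```r
# Hypothetical line list 'register'; column names are assumptions.
library(dplyr)
library(ggplot2)

suspects <- register %>%
  filter(hospital_district == "Varsinais-Suomi",   # area around the shipyard
         sex == "male",
         age >= 18, age <= 65,                     # working-age males
         sampling_date >= as.Date("2019-02-01"))   # study period

# Quick and dirty epicurve: weekly counts of notified IPD cases
ggplot(suspects, aes(x = cut(sampling_date, breaks = "week"))) +
  geom_bar() +
  labs(x = "Week of sampling", y = "Notified IPD cases")
```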
And you can see that there was circulation and of invasive pneumococcal disease in the shipyard for five months before we detected it. And this is why molecular epi surveillance is also relevant. It's because when we assessed using the samples, we had the serotypes of pneumococcal disease that were causing this outbreak. We could identify which action, which vaccine was the most relevant. And this is why they were given the polysaccharide one with 23 serotypes covered. So to sum it up, we got a signal. We validated it. We used an indicator-based surveillance system in conjunction to investigate the extent. We picked a case definition, and that's when we collected additional information and specimens. And finally, the most important here was the action, information for action. And I'm going to repeat it one last time before I can take a few questions, and we move on to Henna's talk. You want, when you think about the surveillance system, you need to think about the aims and objectives of the systems, why? You need to know what's event you're accounting, what is your exact case definition. You need to think about what you want to know from the data, which analysis you will be conducted. And most importantly, what will you do with the answers? There always has to be action from surveillance. And Henna will follow up with the example of TB in Finland, where if we want to be able to act, we need a bit more than the standard lab-based surveillance information one can get. All right, I'm done. I'm going to stop screen sharing, and I'm a little early, so we can have a few questions if you wish. So you mentioned the term event, but according to you, what is the difference between event and signal? We use very open signals, and then events. What's the difference? If we use the terminology that we are supposed to use, that if we use the official terminology, event-based surveillance does event monitoring and provides us with signals. That would be the official terminology. And once you've accessed the data from the indicates surveillance, you would classify it as an event. That would be the official terminology, and I might have mixed them up at one point. I would suggest slide six of the presentation if I can share my screen again. Yes, please share the screen. So according to the official terminology as defined by Christophe Paquet, Denis Coulombier, and Kaiser in 2005, the event-based surveillance system detects signals, and from the assessment of the signals plus the data from indicator-based surveillance, we classify something as an event. This would be the official terminology. Okay. Next question. Can you go to your slide with the dashboard? The one you said I don't like the dashboards. The ones that I hate? Information. Yes, and I agree with you. I mean, I absolutely agree with you because once you serve this data, it's not adjusted, basically. There's a bias. It's not adjusted data. And also, there's no information about uncertainty, right? And you're missing information, and so that basically whoever is not an expert like you will literally look at this data and see, oh, wow, it's just the US and Europe basically have the cases, right? And that's the wrong picture. So it's a really... But how do we fix that problem? What do you think was the best way to make the dashboard, let's say, trustworthy and correct? That's... I'm sorry, if I knew that I would have a Nobel Prize by now. I think the best... 
Now, I think the best can be done is have some good data scientists think about it and work on that. And there has to be way of correcting things. And we have some standardization systems, for example, when we want to look at the rate of... Like that we can use in EPI, but I don't think these methods, for example, would be enough at this stage. I would... I'll just... I'll pass on that question. But the other point you showed with the mortality, excess mortality, you know, this is something which is adjusted, right? And here there's no manipulation. I mean, there's no... Let's say virus. This is like... This is the true picture. Yes, but this is mortality. This is something that is detected very easily. Okay, but maybe the dashboard should then focus as a... Or at least on the landing page, that they show at least information that, you know, it's not... It's adjusted. So it's a correct... Right? So at least on the landing page. Maybe that's the way you go, these dashboards. But we have another question, please, from the audience. Thank you so much for the presentation. I'm Aiman. Welcome, Mellon, our providers. So for the validation of the event, you mentioned that we can analyze the registered data and we can... With a specific group at a higher risk of the disease to validate the event. But if the same group, it has a totally different exposure for this with similar exposure or root of transmission. Like for instance, in Sudan, we had this... And I would call it chicken gunia in any area in Demiq with dengue fever. And for a while, it was assumed it is dengue fever because the symptoms are the same and both of them are transmitted by edema. However, this assumption led... Led later for the development of one of the massive outbreak with more than 47,000 cases while dengue fever was like one, one, one and a half thousand. So how to avoid mislead by disease or syndemic... Syndemicity of two outbreak going together for the same group. Thank you. Well, this is a problem that is quite frequent with vector-borne diseases. I know, for example, that in the first week of... In the first month of Zika virus outbreak in French Polynesia, they thought it was dengue until they started to have a proper diagnosis system implemented. So unfortunately, it's not so much about the data, but it's about the health infrastructure and access to testing and confirmation of diagnosis. So as Tim said, I'm Henan. I'm in charge of our TBA surveillance in Finland. I'm working as a researcher in THL and a PhD student and also working with Mood Project. So I will give you a quick overview on TBE surveillance. I know you have coffee half-past. I will try to be quick. Quick with my presentation. So why do we do enhanced surveillance for TBE? As Tim highlighted multiple times, it's information for action. We do it in order, mainly, like the most important point in this surveillance is that we do it in order to gain a comprehensive picture of where the infection is acquired. So I can't see the chat. So if you have something just... Henan, can you go to full screen with your slides set? Because right now we're on the title page and... Well, okay, you can't see my nice animations, but you will survive. You have to... Okay. But you can also go full screen. Presentation mode. Yeah, I did it, but apparently you can't see it. When you share on Zoom, you need to share your complete screen, not the software. Aha. Yeah, yeah. This is the nice thing of us, our institution actually for bidding us to use Zoom. 
After a pandemic, I can still use it again. Okay, so it's okay. We won't have the animation. Sorry. So can you see it? Yes. Okay. So we want to know where the infections are acquired with TBE. So we want to know the geographical spreads. We want to know if there's new focus somewhere. And based on this information, the National Immunization Programme regarding TBE is updated yearly. And we also give vaccination recommendations yearly regarding TBE, which are also updated based on these surveillance. So really information for action. A little bit background, TBE vaccine has been a part of a National Immunization Programme sent 2007. And this means that if your municipality of residence is a part of this programme, you are entitled to have the basic three doses of TBE vaccination for free. Also, if you own a summer house in this area, because a lot of Finns love to go on the summer house the whole summer. So they are considered as people living in that municipality. But also we are looking at time trends if we see a longer or different transmission period compared to normal. And if there's changes in age, sex distribution, if we see any breakthrough infections, stuff like that. TBE has been notifiable in Finland since 95. And diagnosed cases are notified to National Infectious Disease Register. Only Diagnostic Laboratory is notified to register so it is lab based register. Only the cases that seek for care are reported. So, isn't to matter cases, we have no idea. And we have conducted enhanced surveillance 2014. And you can see here the trend during the past 10 years and during the pandemic years we've seen really a very strong increase in cases. I'm going a little bit quicker now. Just to see you tell you the overall picture of where the TBE cases are in Finland is a very quick map. We have the interactive map online for TBE that we update yearly for information for Finnish citizens and healthcare workers stuff like that. And here you can see that the cases, most case numbers are around the coastal areas. So, how do we actually do the TBE enhanced surveillance? In short, the cases are reported to NIDR as I mentioned. We in THL, National Health Institute of Finland, we interview all patients and we review medical records. Then we map cases in UGIS. We estimate the local risk with incidence calculations that are based on these patient interviews. And then we finally update national immunization program and local recommendations. And in this update, when we update the recommendations of the program, we consider incidence case numbers, but also case by case consideration for municipality. We have very small municipalities in Finland and if you have even only one case or two cases in municipality can increase the incidence of the roof because there can be like 200 people living in that municipality. So, in more detail. We follow it from NIDR and we bring cases to our line list regularly. Based on this line list, the patient records are ordered from the treating physicians. And from these patient records, we ensure the TBE diagnosis that they were actually diagnosed with TBE. Last year, we saw many numbers of cases that were actually not TBE cases, which is very important to go through. Even with this good surveillance system, there can be flaws. So we really have to make sure that we are interviewing the people that actually have TBE. So from this record, we also get their phone number, address, a little bit about symptoms, hospital treatment, possible exposure. 
Then we call them and we interview them. Usually by phone, some people by mail if they don't pick up the phone. They don't want to talk with us. And what we asked during the interview, we asked background information, symptoms, when did the symptoms start, hospital treatment, vaccination status, and most important exposure to ticks. Did they notice if the ticks was attached, when did they see the attached tick? And most importantly, the most likely place of exposure as accurate as possible. Of course, sometimes people countrally say they have multiple options, but surprisingly, a lot of people know very specific location. We also asked their activity during the past month, especially then if the patient haven't noticed the tick attached. And we asked the activity within the municipality of residence in all the islands, because that's highly endemic for TBE. Elsewhere in Finland, if they traveled somewhere else, or if they went abroad. We have one or two cases that are important yearly. We also ask other information related to exposure, such as contact with pet animals, or if they consumed any unaposterized milk products. And then we finally have some free space for additional information. And as said, based on these interview results, we map the cases to see if there are new clusters, get the overall picture, and we also see if the places of infection match with the previous years. Finally, we calculate the five years moving average incidence per 100,000 population per municipalities, or by postal code areas. And that is done by our EIA hours tradition. And finally, based on these calculations, we give recommendations or we extend the national immunization program. And what is important to notice that the TBE cases are restricted to small geographical areas. So that's why we even go to postal code areas. And also the time spent in the risk areas affected the risk of infections. So these are the facts that we are trying to balance within. And then we also ask for some information on how what are the guidelines for vaccination recommendations. And you can see here that if there's less than one case per 100,000, there's no recommendation one to five. And then we also use our recommendations, those especially active in engaging outdoor activities over four weeks, or we have been using that I'm trying to get rid of that because it's very vague, in my opinion. And then, if there's more than five cases, it's considered as highly endemic area by a double HL. And in the 50 cases, it's recommended to all longer 10 residents and summer residents. Sorry, I'm running a little bit through the slides now. What I want to highlight. Again, is that it's also a reason why we do this enhanced surveillance that the risk of a rise significantly depending on the location within the country. And we also consider that if we consider the incidence of Finland, it is less than five cases per 100,000 people. But again, going to the interactive map, we can see that in all the islands that I mentioned as highly endemic, we can see an incidence more than 40 per 100,000 population. So it really arise and that's why we have to do this kind of surveillance. So that's it for me. I hope you heard and I'll be God the information. I had to run a little bit to be able to not keep you from having your coffee. That's no problem. We still have. Thank you, Hannah. We still have time for questions. Thank you was very straight to the point presentation. We have time for questions from the audience. Let's see. 
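As a hedged sketch of the incidence calculation mentioned above: the data frames `cases` and `pop` and their columns are assumptions, not the code THL actually uses. A five-year moving-average incidence per 100,000 by municipality could be computed along these lines.

```r
# Hypothetical inputs: 'cases' (one row per notified case, with municipality and
# year of diagnosis) and 'pop' (municipality, year, population). Names are made up.
library(dplyr)

incidence <- cases %>%
  count(municipality, year, name = "n_cases") %>%
  right_join(pop, by = c("municipality", "year")) %>%     # keep zero-case years
  mutate(n_cases = ifelse(is.na(n_cases), 0, n_cases),
         rate = 1e5 * n_cases / population) %>%           # annual incidence per 100,000
  arrange(municipality, year) %>%
  group_by(municipality) %>%
  mutate(rate_5y = as.numeric(
    stats::filter(rate, rep(1/5, 5), sides = 1))) %>%     # 5-year moving average
  ungroup()
```

The resulting `rate_5y` could then feed the kind of thresholds described in the talk, for example flagging municipalities above 5 cases per 100,000.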
Can you show us this map of Finland you just had on this portal you. So it looks at it's very like a skew distribution there's only like, basically, you know, 5% municipalities they have. Yeah, so how do you explain that? Is it because of the because this is a nature area so or. Yeah, it is really a disease that we see next to larger water areas so that's why it's in the coastal areas and we can see the cases in the north as well. And when, when you go inwards in to the country, the cases are that that you can see, maybe I could share a proper, proper picture of, of the map. So, they are lake areas, if there are, for example, here in the eastern Finland, they are all. cases they are next to a bigger lake, lake, but really the highest case numbers are in all and in Varsenae, so many area, Turku area that team was explaining regarding the pneumococcal disease. Yeah. And then, you know, I think, you know, when you think about the case numbers, the case numbers are actually the same. So, in this case numbers are usually the same, usually he said he heads these dashboards, like in this dashboard, you don't. Can you also switch to like the incident rates per population density. And then, you know, the vaccination program areas and vaccine recommendations so here if you go above the municipality, for example Helsinki, you can see in which postcode areas the vaccine is recommended for example, and what the risk areas really are within Helsinki, because in Helsinki, it's a large city but there's only few places that is considered as a risk area. So, in the hospitalization, can you scroll through time to see last, let's say 20 years. No, unfortunately, it's the case numbers and the incidents they are over past five years, and it's updated yearly. Okay. So, on the slide to show how actually there's there's like a doubling or tripling of the TV cases, and obviously the population is not going to two to three times or something's happening so why why three times more cases. Well, that is very interesting question and we've been considered, we've been thinking about that and discussing about that and we've seen observed the similar trend in Norway and Sweden as well. So what we think might be the biggest issue here is the COVID. People, we were, there were travel restrictions, people couldn't go abroad, they couldn't go to Thailand to get dengue, instead they went hiking in the National Park and they got TV. So we really think that it's the change in human behavior. We didn't study any studies on this but this is the educated quest. Yes, that people went to the areas of tick, tickworm and cephalitis and got more infections and we can really see that the cases we didn't see any new clusters yet last year, when we had almost double the case numbers compared to the past year. So we have no new areas but more infections within the same risk areas, which really tells that people to spend more time outdoors. Yes, I think that's probably good interpretation. Thank you. Thank you, Hannah and team. Have a safe flight. Please, please join us. We look forward to seeing you also in person. Keep us posted about your progress. Thank you. Sorry. Wait, one last question. Wait. I wanted to know if this dashboard is available only to public health or to the general public? Yeah, if I heard it correctly you asked that if it's available for general public, it is. It is only in Finnish but I can maybe share the link to the matter most. Thank you. Yeah, just share it on the mat. Yeah, I will share the link with you. 
There's a question on matter loss. Is there also tick surveillance? Hannah, there's a question about if there's also tick surveillance system in Finland. Yeah, that is a good question. We don't have that kind of surveillance that we would actively surveil whole country and see if there are TBE in ticks before we see human cases which would be optimal. But we do have universities that are doing quite active surveillance on certain areas and they are really focusing on different areas and doing research but not a wide scale surveillance such as this I would consider. Maybe that answers your question.
In this video tutorial, Timothee Dub and Henna Mäkelä (Finnish Institute for Health and Welfare, Finland ) discussed the basics of infectious disease surveillance (event-based and indicator-based surveillance, active versus passive surveillance), as well as the advantages and limitations of each type of systems, followed by the example of how surveillance activities for TBE are conducted in Finland. By the end of this lecture, participants should be aware of the limitations and quality issues that can occur when using surveillance data for comparison and/or modelling.
10.5446/59224 (DOI)
Some of you have seen some version of this talk. I mean, a little bit, but we'll see; I'm not sure how much of it I can cover. But because in the audience there are also people who have not seen it, and who come from different fields, I feel that I should give some background, and I'll try to riff off of Boris' talk. So my main goal is to discuss some recent results concerning the Riemannian exponential map on the group of volume-preserving diffeomorphisms in three dimensions. This point of view has to do with Arnold's approach to incompressible hydrodynamics; I will only deal with inviscid, incompressible fluids. It is infinite-dimensional Riemannian geometry in this case, in the spirit of Cartan: pretty pure differential geometry in infinite dimensions. Okay, so I should perhaps begin with the setup. As I said, I'll use Boris' introduction here, but nevertheless I would like to present a summary of this approach in one picture. So M^n is the fluid domain: think of a compact Riemannian manifold, possibly with boundary; the dimensions are of course two and three in general, the only ones relevant for hydrodynamics. But also R^3 or R^n, or maybe even more generally an asymptotically Euclidean manifold, that is, a complete Riemannian manifold with ends which at infinity look like Euclidean space. And mu is the Riemannian volume form. So this is the fixed fluid domain. Now, the fluid fills M, but because we do not allow fluid particles to fuse or split, the configuration space of the fluid will be the group of volume-preserving, incompressible, diffeomorphisms of M. This is the picture in which I'm trying to summarize things. I will call it Diff_mu, as in the previous talk. So these are the volume-preserving diffeomorphisms of M, and we think of them as positions of the fluid particles. This is a group, a group of volume-preserving diffeomorphisms with the group law given by composition. At the identity we also have an upstairs, so to speak, and that's the Lie algebra, infinite-dimensional in this case, consisting of divergence-free vector fields. Here we want to think of these as the spatial velocities, the fluid velocities. Okay, now, as in classical Newtonian mechanics, fluid motions will obey Newton's law, so we can think of fluid motions as tracing out geodesics in the configuration space, which is the group of diffeomorphisms, where the geodesics are those of the metric given by the kinetic energy. That kinetic energy (this is just like classical mechanics) will then induce essentially an L^2 inner product at the Lie algebra level, and also a right-invariant L^2 metric at the level of the group. And if we have some initial condition, an initial divergence-free vector field, then the idea is that we get fluid motions described by the corresponding geodesic, call it gamma(t), in the configuration space Diff_mu. This is the Lagrangian story. In fact, this picture provides a very nice way of thinking about something that is somewhat confusing for a novice, somebody who enters the field and reads about the Eulerian description, Eulerian coordinates, and Lagrangian coordinates. This is really the nice geometric picture that combines the two: the Lagrangian story happens in the group, the Eulerian story happens in the Lie algebra. So how do we pass from one to the other, and what are the Euler equations here?
Well, one way to see them is to look at the tangent vector to the geodesic, say gamma-dot(t), at the point gamma(t). This, by the way, is the L^2 geodesic flow, which happens here. Then we right-translate, using the group structure, to get a curve, let me call it u(t), in the space of divergence-free vector fields, and it turns out that this curve (so we get a dynamical system here), this u, which is a time-dependent vector field on M, satisfies the Euler equations of hydrodynamics. Here nabla is the covariant derivative on M, p is the pressure, we set the divergence equal to zero, and here is the initial condition, so we get a Cauchy problem, which I'll call star. Okay, so these are the Euler equations. This is really Arnold, in his paper from 1966. However, in order to do analysis here, one needs to bring in topology: I need to be able to work with manifolds that allow me to solve differential equations, so that I can talk about these geodesics. This was done by Ebin and Marsden in a subsequent paper: Ebin-Marsden, 1970. What they did is they introduced topology, so one can topologize Diff_mu with, for example, a Sobolev topology, which I'll write Diff^s_mu, or possibly a Hölder topology, C^{1,α}. So, in Sobolev H^s, where s is greater than n/2 + 1, or in some other, more exotic, reasonable topology such as Hölder C^{1,α}. And if that's the case, then the geodesic equations become an ODE. So now complete, for example (I'll stick to Sobolev), the configuration space and the algebra; in the H^s case, if s is greater than n/2 + 1, then this is a topological group as well as a smooth Hilbert manifold, and in that case the geodesic flow really becomes an ordinary differential equation, albeit in infinite dimensions. So this is the Lagrangian picture: here we have the ODEs describing the fluid, and here we have the Eulerian picture, which is the familiar PDE that corresponds to it. So this is one succinct way of looking at the Lagrangian and Eulerian stories. I want to concentrate on the Lagrangian story, I mean, I want to describe the work on the Lagrangian side, but I would still like to say a few words about the Eulerian one. The Cauchy problem for the Euler equations has a long history, going back to the 1920s and 30s: the work of Lichtenstein, Günther and Wolibner. They worked in the C^{1,α} category; Lichtenstein and Günther only proved local existence and uniqueness results, while Wolibner proved the first global existence result, in two dimensions. But they did not really study the dependence of solutions on the initial condition, so this was not full Hadamard well-posedness. That was studied later, and the pivotal name here, I guess, is Kato. So Kato in the 60s, and also Yudovich, by the way, so Yudovich and Kato in the 1960s studied the Cauchy problem. So this is well-posedness; maybe I should write it: because of Kato, we have Hadamard well-posedness of star in H^s, actually even in W^{s,p}, where s is greater than n/p + 1. And by Hadamard well-posedness I mean existence, uniqueness, and at least continuous dependence of solutions on the data. A lot of work sprang from that project; there are several papers with Ponce, with Lai, and with other collaborators.
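For reference, the Cauchy problem (star) described above can be written out as follows, in the notation of the talk (u the Eulerian velocity, p the pressure, nabla the covariant derivative on M):

```latex
\[
u(t) = \dot\gamma(t)\circ\gamma(t)^{-1},
\qquad
(\star)\quad
\begin{cases}
\partial_t u + \nabla_u u = -\nabla p,\\
\operatorname{div} u = 0,\\
u(0) = u_0 .
\end{cases}
\]
```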
But the reason I'm bringing this up is that these two pictures, if one uses some finite-smoothness topology — like, for example, the Sobolev topology — are not equivalent. So here is a remark; I'll come back to the Cauchy problem story in a second, but let me point this out right here. The two pictures on the left side are not equivalent in H^s or C^{1,alpha}, or any other reasonable topology, because of the properties of the solution map. If we look at the data-to-solution map in the Eulerian case, then we're looking at u_0 being mapped to the solution u; in the Lagrangian case, we're looking at u_0 mapped to gamma and gamma dot. In the first case the map is, in any reasonable topology like the ones I mentioned, at best continuous; in the second case it's smooth — in fact C-infinity smooth. There are lots of surprises on the Eulerian side. If I had more time I could say a bit more; perhaps if I have time at the end I'll return to this. But let me just point out one thing — I forgot, I should mention two results, but okay, let me start with Bourgain and Li. Especially in view of the classical well-posedness results in C^{1,alpha}, it is a surprising result due to Bourgain and Li — a recent paper, from 2015 I think — that the Euler equations, this problem, is not well-posed in C^k spaces: (star) is not well-posed in C^k, k greater than or equal to 1. The reason is essentially this, for those analytically inclined. Take a derivative of the equation; I will write it as d u_t plus — more or less, I'm just sketching — u dot nabla du plus essentially something like (du) squared equals minus d nabla p, and there is the pressure; I'll come back to that. Now assume that du is in fact C^1, and let's see what happens. Obviously this part is fine — this is a transport equation, these are transport terms — so if the rest is C^1, I can keep this in C^1 for all time, or at least for short time, no problem. What happens here? That's the trouble term; let's look more closely at it. If I go back here — and this is really the gist of Ebin and Marsden, this is what they found — they noticed that one can take the divergence of both sides and observe that, when u has divergence zero, the divergence of the covariant derivative of u in the direction of u doesn't lose derivatives. This is, by the way, why we have a smooth map here — because of Picard iteration, the Picard–Lindelöf contraction; that's the reason for the ODE. But let's go back. If I take the divergence here, all I get is this, plus a Ricci term if I'm on a manifold, which is lower order — let's forget the curvature, it's not essential. So if I go back here, what I get is a double Riesz transform. What is this? If I want to solve for p, I take the divergence; I have the divergence of a gradient, so a Laplacian; I invert the Laplacian; and now I have the inverse Laplacian applied to the trace of (du) squared. Okay, this is C^1, the trace doesn't do anything, and this is a singular integral operator of Calderón–Zygmund type, so it's not bounded in L-infinity norms. That's the troublemaker. So there are surprises here, even in the local theory. Another thing I wanted to point out: why at best continuous? Well, somehow this was not seen, although it could have been seen by Lichtenstein.
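To spell out the pressure computation just sketched (my reconstruction of the argument, written for the flat case; on a manifold a lower-order Ricci term appears):

```latex
% Taking the divergence of the Euler equations when div u = 0 gives an elliptic
% equation for the pressure; the gradient of its solution is a Riesz-type
% (Calderon-Zygmund) operator applied to tr((Du)^2), which is not bounded on
% L^infinity -- the obstruction to C^k well-posedness described above.
\[
  -\Delta p \;=\; \operatorname{tr}\!\big((Du)^2\big)
  \qquad\Longrightarrow\qquad
  \nabla p \;=\; -\,\nabla\Delta^{-1}\operatorname{tr}\!\big((Du)^2\big).
\]
```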
Lichtenstein could have seen a lot of other things, I guess — he probably just didn't bother to write them all up. He has the famous book, one of the first in the Springer series, on hydrodynamics, with a lot of great ideas. But in 2010, together with Himonas, what we showed is that this map, in the Sobolev space setting, is not even uniformly continuous. This has been generalized to other spaces as well. So really, continuity is the best one can expect. One more result that is perhaps surprising: let me go back to the C^{1,alpha} case. Even there, there are some surprises, but of a different nature. Here I should mention again — this is 2015 — for the C^{1,alpha} case, let me just say that in the full Hölder space C^{1,alpha}, Hadamard well-posedness does not hold. One needs to pass to the little Hölder space to get local existence, uniqueness, and continuous dependence of solutions on u_0. So full Hadamard well-posedness for (star) is okay in the little Hölder space, but not in the big Hölder space. And I'm pretty sure this also holds for other, more exotic functional settings like Besov or Triebel–Lizorkin spaces, which are now popular in connection with the Cauchy problem. This is a result with Yoneda; I think it just appeared. All right. Now, how much time do I have left? Seven? I think I started five minutes late, right? Twelve — all right, thank you. So let me now switch to the Lagrangian side. On the Lagrangian side, as I said, we have smooth dependence, and this immediately tells us that we have a nice infinite-dimensional model of Riemannian geometry here, because in particular it means that we have a well-defined L2 exponential map. That map, the L2 exponential map, is defined — in dimensions two and three, say — on some open subset of the zero vector in the tangent space, with values in Diff^s_mu. And it's defined by: exp of t u_0 is, by definition, the unique L2 geodesic with the initial conditions that it starts at the identity in the direction of u_0. And we know, because we are dealing with an ODE, that this is perfectly fine, by contractions, as I said. Locally, by the way, it's just like in finite dimensions: this will be a local diffeomorphism, by the inverse function theorem for Banach manifolds. So locally everything is fine, we have a perfectly nice Riemannian geometry. What happens globally? That is of course of interest, because this is the solution map in Lagrangian coordinates for fluids in two or three dimensions. So I'm interested in the singularities. What can I say about the singularities of the L2 exponential map? Well, these are the conjugate points from classical Riemannian geometry — conjugate points in Diff_mu. But unlike in classical finite-dimensional Riemannian geometry, here we have to be careful. In infinite dimensions, in general, we have two kinds, depending on how the derivative of the exponential map fails to be an isomorphism: it may fail to be one-to-one, in which case we say that the conjugate point is a mono-conjugate point, or the singularity may be of the type where d exp is not onto, in which case we talk about epi-conjugate points. Another problem is that conjugate points in this setting can have infinite order — imagine the unit sphere in the space of square-summable sequences; easy example. And they can cluster: conjugate points can cluster along finite geodesic segments. The example here, again an abstract example that is easy to see, is a football — an American football.
So in infinite dimensions: take an infinite-dimensional ellipsoid, in little l2 again. Take x1, x2 and a unit circle there; then x1, x2, x3 and a football, an ellipsoid; then, keeping x1, x2 fixed, in x4 squeeze it a little bit more, make it look more like a ball; in x1, x2, x5 even more so, and so on. What you get then, on the circle joining the north and south poles, is a sequence of conjugate points which converges at the south pole. So that's an example of clustering. Now, the point is that we have examples of all of these pathologies in Diff_mu; all of these can be found in Diff_mu(M^n) in general — but, as the theorem will say, this n has to be 3; it cannot be 2. What do I mean by the order? If you take a sphere in R^3, the north pole and the south pole are conjugate of order 1: there is just one linearly independent Jacobi field which is zero at the top, zero at the bottom, and non-zero in between. If you take the same setting in R^4, the order of conjugacy increases by 1 — so you have to put this here too — and in infinite dimensions you can have infinite order. So we have all of these, but not in 2D hydrodynamics, because of the following result. This result goes back to 2006, and it says that if n is 2 and M is compact — originally we assumed that it has to be without boundary, but this can now be deleted because of the recent work with Steve and James Benn, I think from 2018; it took us a while, it was quite a technical thing — so we can drop that. In this case, if n is 2, then the L2 exponential map is a nonlinear Fredholm map of index 0. In particular, what this means is that the L2 exponential map in 2D hydrodynamics behaves like an exponential map in the finite-dimensional, classical setting. So 2D fluids want to be like classical finite-dimensional Riemannian geometry, and none of these pathologies can occur. We can find them, however, in three dimensions. On the other hand — and this is the last thing that I will state — in 3D we now know that one can also recover Fredholm properties, provided there is enough symmetry present in the manifold. So let me state this one; this is Theorem 2, and it is due to Lichtenfelz, myself and Steve — well, in progress, I guess; the statement is as of 2018. Okay. If M^3 is compact — or, by the way, asymptotically Euclidean, for example R^3 — with a Killing field, an H^s-class Killing field, and u_0, the initial condition I mentioned earlier in the Cauchy problem, is a divergence-free H^s — s here is greater than, what is it, 5 halves — axisymmetric vector field with no swirl, or with small swirl — axisymmetric, you mean the one commuting with the Killing field? Yes, that's right — then for each t, the derivative of the exponential map at t u_0 is a Fredholm operator. I did not mention what Fredholm means here; maybe, since I'm at it, in terms of d exp: by nonlinear Fredholm I mean the notion introduced by Smale in the 60s, when he generalized Sard's theorem to infinite dimensions, to Fredholm mappings.
So the idea: a map between two Hilbert or Banach manifolds is Fredholm if its derivative is a Fredholm operator, and its index — if the source manifold is connected — is the same as the index of the derivative. It's all classical. So here we have Fredholmness on a nice piece — in fact a subgroup — of the group of all volume-preserving diffeomorphisms of a three-dimensional manifold, the one consisting of axisymmetric, no-swirl flows, where the exponential map can be Fredholm; in some cases there are topological obstructions in certain situations, and that's why I'm actually stating this in terms of the derivative, meaning a linear statement. So let me first say what it means for u to be axisymmetric: axisymmetric means that it commutes with the Killing field K on the manifold. These are nice, very straightforward and natural generalizations of really the only classically studied case, that of axially symmetric flows in R^3, which goes back to Ukhovskii and Yudovich in the late 60s, I guess. Then there was nothing for a long time, and recently it is a much more active field; there are some nice results — I think Elgindi and Jeong have very recent work, kind of similar to what I was saying earlier, related to norm inflation — but I don't have time to discuss this. So this is axisymmetric; now, no swirl — what is swirl? The swirl of a vector field, say u, is defined simply as the inner product on the manifold of that vector field with K. If this is zero — that's swirl-free — we say that u is swirl-free. Small swirl — well, it turns out that swirl is conserved; I'll come back to that. In the axisymmetric case, an example would be: take R^3 with cylindrical coordinates r, theta and z, and consider a vector field whose components in these cylindrical coordinates do not depend on theta — that's an example of an axisymmetric vector field. Swirl-free: take the theta component of that vector field to be zero. An example: a draining jet — open a water tap; at least approximately, I mean, without the swirl. But imagine a simple draining jet — that's a swirl-free axisymmetric flow; or smoke rings, ideal smoke rings, if you can do them, if you smoke. Okay, so these are the examples, and what I wanted to say here is that swirl turns out to be conserved along the flow. So that's one of the results that we have: swirl is conserved along particle trajectories in the three-dimensional case. What this kind of means is that there is some conservation law that one can use to maybe attempt to say something about the global behaviour of fluids. Thank you very much. Questions for Gerard? — Because essentially, when you had this commutation with K, it came down to the fact that it's like 2D, the way it's 2D? — This is not a problem, really, because we know that there are still people who suspect that there may be blow-up in the axisymmetric case. The axisymmetric world, let's put it this way, seems to display all the difficulties; we see all the same problems as in the full incompressible Euler case — in 3D, yeah, in 3D. We don't know how to prove global existence. Now, global existence can be proved if we have axisymmetric and swirl zero — that's really what Ukhovskii and Yudovich did, the axisymmetric, swirl-free case.
And it's the swirl three, the really conservation of some additional conservation law that enable them to prove global existence. So what I'm really concerned about is, or what I'm interested, what we're interested in, is really to understand the axi-symmetric world. So we just concentrate on the subgroup of axi-symmetric diffeomorphisms, in the 3D dimensional case, and look at the Fretholm properties, vis-a-vis global properties, global persistence, and try to see if there is a relation. So for example, I mentioned here, if we have Fretholm, this type of Fretholm result, for the no swirl case, I mentioned here small swirl, that's just a cattle perturbation, perturbation of Fretholm operated steam, that gives us a free. I know that they've been attempts to prove global existence for small swirl, and there are some results, but they're very partial, and there is a lot of work to be done, and there are things to be understood. I would like to see if it's possible to connect these two. If for example Fretholm can inform us a little bit about what is really going on, well, with the global flow, in light of the conservation that we have. Because by the way, there is no reason, of course, in the even axi-symmetric world to expect that vorticity would be answered. So the standard to the methods right there. So you think of axi-symmetric as kind of two and a half dimensional world, right? Is that fair to say? Well, maybe two and a half, yeah, something like that. That's right. I don't want to specify the meaning of the word, to quantify it. So swirl three, axi-symmetric, these are like 2D. Exactly, this is all the work of Nikola and Kolmaharov. Yeah, that's fast rotation. Fast rotation implies, so really fast rotation, implies global resistance. So you can get it with methods like this, because later it was just classical PDs, just metal and all that. Yeah, I don't know, I should look at that. Yes, yes, you could. I remember, of course, with the result. It's quite old, demonstrate in the other one. Right. I think there'd be some developments, but you let Nikola and I have some work on that, but I must say I'm not probably even though this is my club. Any other questions? Alright, thanks again. We have a coffee break until 10.50, which is almost an hour, and then we'll have our last speaker for the morning.
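As a reader's aid, here is a compact restatement of the two notions used in the theorem above (my paraphrase; K denotes the Killing field and g the Riemannian metric):

```latex
% Axisymmetry and swirl, as used in the Fredholm theorem above. Requires amsmath.
\[
  u \ \text{axisymmetric} \;:\Longleftrightarrow\; [u,K] = 0,
  \qquad
  \mathrm{swirl}(u) := \langle u, K\rangle_g ,
  \qquad
  u \ \text{swirl-free} \;:\Longleftrightarrow\; \langle u,K\rangle_g \equiv 0 .
\]
% Example: on R^3 with cylindrical coordinates (r, theta, z) and K the rotation
% field, axisymmetric means the components are independent of theta, and
% swirl-free means the theta-component vanishes (draining jets, ideal smoke rings).
```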
In the 1960's V. Arnold showed how solutions of the incompressible Euler equations can be viewed as geodesics on the group of diffeomorphisms of the fluid domain equipped with a metric given by fluid's kinetic energy. The study of the exponential map of this metric is of particular interest and I will describe recent results concerning its properties as well as some necessary background.
10.5446/59223 (DOI)
I would like to talk about... not exactly the topic that I announced, but I hope to get to that eventually. This is joint work with Klas Modin and Gerard Misiolek: a point of view on the group of diffeomorphisms as a setting for Newton's equations, and on the relation of the Madelung transform to these equations. I also hope to say a few words about vortex sheets, so let's see how it goes. The references for what I'm going to talk about are on the arXiv: 1711.00321 and 1807.07272. And hopefully I also get to talk about the work with Anton Izosimov; there the reference is 1705. So this all has to do with going beyond Arnold's framework for the Euler equation, but let me start with just a few words about Arnold's framework. When we talk about the Euler equations — let me put it in the plural — we mostly talk about the incompressible equation. The Euler equation for an ideal incompressible fluid is the following. We fix a domain, for instance in R^n, or an n-dimensional Riemannian manifold, which has a Riemannian metric and the corresponding volume form. Then the equation is on the velocity field of the fluid particles in this domain: the time derivative of the velocity field plus the covariant derivative of v along itself equals minus the gradient of a pressure function, which is defined uniquely up to an additive constant by the condition that the divergence of v, with respect to the volume form, is equal to zero. And if M has a boundary, then we also have to add the condition that v is tangent to the boundary. So this is the initial setting for incompressible fluids, and the other starting point for our approach is the following theorem: the Euler equation can be regarded as the equation of the geodesic flow with respect to the L2 right-invariant metric on the group of volume-preserving diffeomorphisms — diffeomorphisms preserving mu, the Riemannian volume form on M. This point of view turned out to be extremely fruitful for various equations. If you take not the L2 right-invariant metric but other groups with other metrics — different pairs — they produce other equations. The same can be done for the rotation group of a three-dimensional body, and one can do it in the n-dimensional case; one can also consider, for instance, the group of motions of a body in three-dimensional space — so here this is the Euler top, and these are the Kirchhoff equations. One can treat the equations of magnetohydrodynamics in a similar way. There are equations such as Korteweg–de Vries, Camassa–Holm and so on, related to the Virasoro group and various metrics, and so on. So there are many equations that can be explained in a similar way. What I would like to talk about are equations of compressible fluids. For a compressible fluid — sometimes these are also called the Euler equations — you have a different type of equation; it starts in exactly the same way, but now the fluid is described not only by the velocity field but also by a density.
Now the equation is: the material derivative — the time derivative plus the covariant derivative of v along itself — equals minus 1 over rho, where rho is the density of the fluid, times the gradient of the pressure, and the pressure is supposed to depend on the density. So I consider not the full compressible fluid but what is called a barotropic fluid: barotropic means that the pressure function depends on rho only, and the usual extra variable, the entropy, is not involved. So the pressure does depend on the density. And this is the continuity equation on the density. — What is p of rho? — p of rho is e prime of rho times rho squared, where e is the internal energy of the fluid, which reflects the molecular structure. Now, a theorem which is partially forgotten is the theorem of Smolentsev from 1979 — whereas 1966 was Arnold's famous paper. If you look at this equation, it turns out that it is a Newton equation on the group of all diffeomorphisms: the barotropic equation is the Newton equation on the group of all diffeomorphisms, where "Newton" essentially stands for the fact that the Lagrangian consists of a kinetic part — exactly the same as in Arnold's approach, except that here we consider it on the whole group rather than on the subgroup of volume-preserving diffeomorphisms — and a potential energy U. There is an explicit formula for this potential: it depends on the density only, and it is the integral over the manifold M of the internal energy times rho. I will say more about this picture, but I just want to put it here so you can appreciate it: we have a very natural extension, where the right-hand side now plays the role of the gradient of the potential energy, which depends on rho. Usually it is a power law when one considers a molecular gas — the exponent depends on the number of atoms constituting the molecules — so there is an explicit formula here. Now, Smolentsev was apparently hypnotized by Arnold's approach, and in particular he wanted this equation to still be an equation of geodesics; that's why he essentially promoted the point of view that one can regard this Newton equation as a geodesic equation with respect to the Maupertuis metric — which is probably not such a fruitful point of view here, since the Newton equation itself seems more natural. Okay, now I would like to put both points of view under one roof, and that is what I would like to talk about. For this I recall our favourite picture relating fluid dynamics and optimal mass transport — I think for many people in this room it's a very familiar picture. We will now look at the group of volume-preserving diffeomorphisms as a subgroup of the group of all diffeomorphisms of M: this is the group of all diffeomorphisms, and this is the subgroup, and the whole group is fibered over the space of densities. I will consider everywhere sufficiently smooth densities.
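For the record, here is the Lagrangian appearing in Smolentsev's theorem, written out in the notation above (this is my transcription; rho is the density obtained by transporting the reference volume form by the diffeomorphism, e the internal energy, and p(rho) = e'(rho) rho^2 as stated):

```latex
% Newton's equations on Diff(M) for a barotropic fluid. Requires amsmath.
\[
  L(\varphi,\dot\varphi) \;=\; \frac12\int_M |\dot\varphi|^2\,\mu \;-\; U(\rho),
  \qquad
  U(\rho) \;=\; \int_M e(\rho)\,\rho\,\mu ,
\]
% whose reduced (Eulerian) form is the barotropic system above:
%   dv/dt + \nabla_v v = -(1/\rho)\nabla p(\rho),   d\rho/dt + div(\rho v) = 0.
```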
So let's consider the space of densities: positive volume forms, say nu — these are n-forms, where n is the dimension of the manifold M — normalized by the condition that the integral of nu is equal to 1, for instance; some normalization. Now the space of diffeomorphisms is fibered over the space of densities, so that over the reference density mu there is exactly the group of mu-preserving diffeomorphisms. However, if I take some other density nu in the space of n-forms, then what lies above it is of course not a subgroup, because the identity appears just once, in one fiber. Here there will be a fiber F_nu which consists of all diffeomorphisms of M that push mu to nu. It is clear that one can always multiply such a diffeomorphism on the right by a mu-preserving diffeomorphism and still push mu to nu. So this is the fibration, and it is a genuine fibration because of Moser's theorem: you can move any n-form to any other n-form provided they have the same total mass. There is an L2 metric here, which for the sake of time I will not describe in detail; roughly speaking, for a flat manifold it is just a flat metric which identifies the group of diffeomorphisms with a pre-Hilbert space of vector-valued functions. So we have the flat L2 metric here. It is not right-invariant; however, it is right-invariant when restricted to the subgroup of volume-preserving diffeomorphisms, where it gives exactly Arnold's metric on that group. So incompressible fluid dynamics actually happens along this fiber. However, what is interesting here — and this is the theorem of Otto — is that this projection, from the diffeomorphisms of M equipped with the L2 metric to the space of densities equipped with the Wasserstein L2 metric, is a Riemannian submersion. Let me say a few words about this. The projection respects the metrics, once I define the metric downstairs. The metric downstairs actually has a Riemannian origin; however, I will explain it not in those terms but in terms of the distance function between any two densities. The Wasserstein distance function is as follows. If I want to describe the distance between two measures mu and nu, then the square of this distance is described as follows: take any point x in the manifold M and move it by a diffeomorphism phi to the point phi of x; take the distance on M between x and phi of x, and square it; then average this over all points of M — in other words, take the integral over M of this quantity against the measure mu. This is the average of how far the diffeomorphism moves the points. Now I would like to find the optimal diffeomorphism, so to say: I take the infimum over all diffeomorphisms phi which actually move mu to nu. This is called the distance between the two densities, and it is exactly the distance appearing in optimal transport; it has a Riemannian origin — it comes from a quadratic form on the tangent space.
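As a concrete, low-tech illustration of the Wasserstein distance just defined — this is my own numerical sketch, not something from the talk — in one dimension the optimal map is the monotone rearrangement, so W2 can be computed from quantile functions; for two Gaussians there is also a closed form, W2 = sqrt((m1-m2)^2 + (s1-s2)^2), which the quantile computation should reproduce.

```python
# Hypothetical illustration (not from the talk): 1-D L^2-Wasserstein distance
# between two Gaussians via quantile functions, checked against the closed form.
import numpy as np
from scipy.stats import norm

def w2_quantile(m1, s1, m2, s2, n=200_000):
    """Approximate W_2(N(m1,s1^2), N(m2,s2^2)) via the quantile formula
    W_2^2 = int_0^1 |F^{-1}(s) - G^{-1}(s)|^2 ds."""
    s = (np.arange(n) + 0.5) / n               # midpoint rule on (0, 1)
    f_inv = norm.ppf(s, loc=m1, scale=s1)      # quantile function of N(m1, s1^2)
    g_inv = norm.ppf(s, loc=m2, scale=s2)      # quantile function of N(m2, s2^2)
    return np.sqrt(np.mean((f_inv - g_inv) ** 2))

if __name__ == "__main__":
    m1, s1, m2, s2 = 0.0, 1.0, 2.0, 0.5
    print(w2_quantile(m1, s1, m2, s2))                  # numerical value
    print(np.sqrt((m1 - m2) ** 2 + (s1 - s2) ** 2))     # closed form, ~2.0616
```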
So it turns out that this is a Riemannian submersion, which means that we have the vertical spaces, which are just the fibers, but there is also the horizontal distribution — the planes orthogonal to the tangent spaces of the fibers — and the projection maps these tangent spaces to the base isometrically. Okay, so this is exactly the magic. By the way, one can view this theorem as follows: we now have a Newton equation here, with the potential energy coming from the base — the potential energy depends only on the base point down there — while the kinetic energy comes from this flat L2 metric up here. It is exactly this picture that I would like to discuss, in various generalizations. One of them: one can actually put the homogeneous H1 metric here, and it turns out that the corresponding projection — with respect to left cosets rather than right cosets — gives the Fisher–Rao metric downstairs; this is a result of Steve Preston, Jonatan Lenells and myself. But what is important here is that there are other options beyond L2 which give interesting metrics downstairs. So this picture shows that optimal transport, which happens on the base, is somehow dual to incompressible fluid dynamics, which happens in the fiber. Now, what is the Madelung transform, and how does it fit into this picture? Let me start with the following statement — this is the theorem of Madelung of 1927, and some version of it goes back to 1922. There is a hydrodynamical form of quantum mechanics — of the Schrödinger equation, with a linear and a nonlinear part. Namely, take the following equation; I'll explain all the ingredients in a second: i d psi dt equals minus the Laplacian of psi, plus a function f of psi squared times psi, plus V, which depends on x, times psi. So this is a Schrödinger equation which combines two pieces. If I forget about the nonlinear part and just think about the linear equation, then V is the potential; if I forget about the V part, then for appropriate functions f this is what is called the nonlinear Schrödinger equation — for instance, if f is the identity, so the term is just psi squared times psi, this is the classical nonlinear Schrödinger equation. The first part, with V, is the case Madelung considered; the second, with f, is the nonlinear one. And now the claim is that if I apply the following transform — psi is the square root of rho times e to the i theta — then one can rewrite this equation in the following form: d v dt plus the covariant derivative of v along itself equals minus the gradient of (2 f of rho minus the Laplacian of the square root of rho divided by the square root of rho), where v is the gradient of theta. So by this transform one can rewrite everything: the function psi is a complex-valued function on M — I would like this function to be nowhere zero — a wave function, and I write it in polar form: this is the absolute value and this is the phase. If we write the equation for the gradient of the phase, v, then we see that it is almost the compressible fluid equation, while for rho one gets exactly the continuity equation.
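Since the formulas went by quickly, here is my reconstruction of the transform and of the resulting system. Note that the factors of 2 depend on the normalization of the phase: I use psi = sqrt(rho) e^{i theta/2}, so that v = grad theta satisfies the clean continuity equation — the speaker's convention may differ slightly — and I have kept the potential V, which may have been absorbed on the slide.

```latex
% Madelung transform: Schrodinger equation <-> compressible-fluid form.
% Requires amsmath; constants depend on the chosen normalization of the phase.
\[
  \psi = \sqrt{\rho}\,e^{i\theta/2}, \qquad v = \nabla\theta,
\]
\[
  i\,\partial_t\psi = -\Delta\psi + f(|\psi|^2)\psi + V\psi
  \;\;\Longleftrightarrow\;\;
  \begin{cases}
    \partial_t v + \nabla_v v
      = -\nabla\!\Big(2f(\rho) + 2V - 2\,\dfrac{\Delta\sqrt{\rho}}{\sqrt{\rho}}\Big),\\[6pt]
    \partial_t\rho + \operatorname{div}(\rho v) = 0 .
  \end{cases}
\]
% The last term in the bracket is the quantum pressure: unlike the barotropic
% pressure p(rho), it depends on derivatives of the density.
```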
So this shows that essentially quantum mechanics is some form of compressible fluid dynamics. One thing to mention is that the Schrödinger equation was discovered just the year before — it was found in 1925 and published in 1926 — so already the next year, 1927, this hydrodynamical form was found. Another thing to mention is that in the compressible fluid equation I wrote before, the right-hand side — the pressure — was a function of the density only. Here it depends on both the density and derivatives of the density; that is why this term is called the quantum pressure. Now, the question we are addressing is: what is the geometric meaning of the Madelung transform? Is it just a kind of accidental coincidence between two equations, or is there something behind it? To answer this, let me consider the pair rho and theta that appears in this polar form of the wave function. Remember that the integral of rho is 1, so we consider normalized positive densities. As for theta, we look at functions modulo an additive constant — just because we anyway only take the gradient of theta — so I consider the coset of theta, which is theta plus c for any constant c. Now if you think a little about this: the density space is the space of volume forms normalized by this condition, so its tangent space consists of n-forms with zero mean, and the dual space to those is exactly the space of functions modulo constants. So, as a matter of fact, one can consider the pair (rho, coset of theta) as a point in the cotangent bundle of the density space. This is an important message: rho is a point in the density space, and the coset of theta — a function modulo constants — is exactly dual to forms with zero mean. Similarly, look at psi: it is normalized by the condition that the L2 norm of psi is equal to one, and moreover I consider cosets of psi which are indistinguishable modulo phase — psi times e to the i alpha, where alpha is any real number. So what do we get? Take the space of smooth complex-valued functions on M — with psi nowhere zero — take the unit sphere in this space, and do not distinguish points lying on great circles of this unit sphere. That is exactly the definition of a projective space. So it is the infinite-dimensional projective space of such functions, and I regard psi as an element of this space. The Madelung transform, then, sends such a pair to such a coset of wave functions. Now, the first theorem: the Madelung transform — let me denote it by Phi — which maps the cotangent bundle of the densities to the projective space of smooth functions on M, sending (rho, theta) to the coset of psi, is a symplectomorphism.
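In symbols, the map just described (my notation; Dens(M) denotes the smooth probability densities and the target is the projective space of nowhere-vanishing smooth wave functions, with the talk's normalization of the phase):

```latex
% The Madelung transform as a map of infinite-dimensional manifolds.
\[
  \Phi:\; T^*\mathrm{Dens}(M) \;\longrightarrow\; P\,C^\infty\!\big(M,\mathbb{C}\setminus\{0\}\big),
  \qquad
  (\rho,[\theta]) \;\longmapsto\; \big[\sqrt{\rho}\,e^{i\theta}\big].
\]
% Theorem 1 above: \Phi pulls back the Fubini-Study symplectic form on the target
% to the canonical symplectic form on the cotangent bundle T*Dens(M).
```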
When I say that this is a symplectomorphism, I have to say what the symplectic structures are. Here it is just the canonical symplectic structure on the cotangent bundle — because this is the cotangent bundle of the densities — and the projective space also has a natural canonical structure, the Fubini–Study symplectic form. So with respect to the Fubini–Study structure, this map turns out to be a symplectomorphism. That is the first statement. The second statement is that this projective space is actually a Kähler space: there is a natural complex structure and a natural metric, the Fubini–Study metric. But it turns out that there is also a very natural metric on this cotangent bundle — exactly the one which comes from the Fisher–Rao metric that I mentioned before, its Sasaki lift. And so this very map, from T* Dens equipped with the Sasaki–Fisher–Rao metric and the canonical symplectic structure — together they constitute a Kähler structure here — to the projective space with the Fubini–Study metric, is actually an isometry; and hence, considering the whole pair, it is a Kähler map. My time is up, so I'll just mention that this essentially shows that these are more or less the same equation: quantum mechanics on this space can be rewritten as a compressible fluid with this quantum pressure. And in particular it would be very interesting — there are various experiments showing a dynamical meaning of quantum mechanics; right now there are groups at MIT, for instance, and in France, that pursue this massive pilot-wave picture with droplets moving along a vibrating surface, which exhibit a lot of quantum-mechanical properties. So it would be interesting to see if this very close analogy could help. I'll stop here. Any questions? — For the compressible Euler equation, in this picture, is there a similar interpretation as in the Ebin–Marsden Banach-space setting, or do you have a loss of derivatives there? Is there some kind of Ebin–Marsden-type result? — Right, there is a loss of derivatives when we pass from the diffeomorphisms to the density space, just because we have, yeah. — So you consider this in the setting of Fréchet spaces, for example; you can develop a nice existence theory? — That's the natural thing. The reason is that for one of the main constructions it is important to be able to use both left and right translations, so there is a way in which one loses derivatives by using left translations in this setting; we have not considered this. — The obvious question is: what is the link between the H1 metric on Diff and the rest of the picture over here, to complete the picture with the Fisher–Rao metric downstairs? — This is the picture, of course, just on the base, just the densities, so it would be interesting to lift it to some kind of bundle over this. The point is, one can formally do something similar, but the question is whether it has a natural interpretation. Here it happens to be quite a famous metric and symplectic structure, but there it might be a somewhat more artificial construction. We don't see it, at least. We were thinking of doing this in a vector form, a matrix form, and that seems to be related to the chiral versions of those equations.
This might work, but it does not seem to be, it's like, I would say, a finite dimensional extension rather than a infinite dimensional extension in different molecules. So it's rather like a multi-component, showing the equation. Yeah. You could, if you had an n-d-d reading table, you could interpret Rosita, just reading, that's something. You could interpret Rosita as a bariatized curve. Yes. And I think this is exactly the kind of… Yeah, this is the Hesse-Moto transform, essentially, in one dimension. It is no longer here, so we don't have this term. And by the way, it exactly shows that Hesse-Moto transform is just one dimensional version of the Madelung transform. And it looks very surprising in 1972 when Hesse-Moto did this, but it's clear that that's exactly… So the equation first appeared in Daril's paper 100 years ago, 1906, where he gets the binormal equation for curves in three-spaces as the equation on curvature and torsion. And it turns out that that's exactly kind of this pair of equations, and they're taking the square root of the curve… for curvature squared. So he is the torsion and Daril is the curvature squared. So they're taking the curvature squared, exactly gives the Hesse-Moto transform. So no surprise at all. It was a surprise for us when we realized the torsion. And it's also your transformation that you had with David and Peter. Yeah, and they didn't develop the transform in the last one. Yeah. It's the basic method you probably don't… Last question? Yeah, so maybe a comment on Mark's question. Because you said, if Molensk had considered this as an ODE on this set, so the metric structure is just a non-nuclear variant delta metric. So the fact that that's a smooth spray is already in the Marston paper, right? But now the difference is that you also have a potential. So you have to take the potential into account, and I mean, I don't know if that gives you a smooth vector field in this case. But the metric part is already taken care of. It's definitely not smooth on a Banach model. Oh, it's not smooth. Yeah, there's too much loss of derivatives from the density of the rock. The non-nuclear delta? They'll put the potential in the compressor. Yeah, so it's not smooth. Yeah, so the infinite number of units. The right one is the…
We discuss a ramification of Arnold’s group-theoretic approach to ideal hydrodynamics as the geodesic flow for a right-invariant metric on the group of volume-preserving diffeomorphisms. We show such problems of mathematical physics as the motion of vortex sheets or fluids with moving boundary, have Lie groupoid, rather than Lie group, symmetries, and describe the corresponding geometry and equations. (This is a joint work with Anton Izosimov.)
10.5446/59225 (DOI)
And I would like to thank the organizers for inviting me here; it's very nice. The first two talks were about diffeomorphism groups, and I'm going to continue with diffeomorphism groups, but now I will change the metric structure: I'm not going to use the L2 metric. What I present here is joint work — let me start with a little outline. First some background and motivation for what I want to do; then I will present some results. Essentially this is related to the analysis that Gerard was talking about: we want to study what happens when the metric is no longer fully right-invariant. There are some pitfalls when you try to emulate the standard Ebin–Marsden-type analysis, so we'll talk about that, and then about how you can move from local to global results — we also heard a little bit about this — and also about what happens when you go from the Banach to the Fréchet category, so from Sobolev spaces to smooth functions. Okay, so the background was presented very well by Boris. Many of us consider this paper by Arnold from 1966 as the foundation of this field of geometric hydrodynamics, so you know what it's about: the Euler equations describing an incompressible fluid evolve on the space of divergence-free vector fields, and there is an interpretation of this as a geodesic flow on the group of volume-preserving diffeomorphisms. As Gerard presented, the relation is just to take either the Lagrangian or the Eulerian point of view. After Arnold's paper, Ebin and Marsden did analysis based on this and came up with these local well-posedness results — that paper contains a lot of things, actually. Since then many authors — many are here today — have been working in this fashion, also trying to do the same thing not just on the volume-preserving diffeomorphisms but on the whole group. Normally one then works with higher-order right-invariant Sobolev metrics, and the key point, I guess — Gerard also said this — is that the geodesic equation you end up with is a smooth second-order ordinary differential equation on a Sobolev completion of your diffeomorphism group. So here is the basic question I will discuss today: what happens if you have a metric on the group of diffeomorphisms of some compact manifold which is not fully right-invariant, but only invariant under the subgroup of volume-preserving diffeomorphisms? An example of such a metric is exactly the fluid metric that Boris was talking about this morning, but we want to do higher-order versions of this. What happens then? So we assume that we have some Riemannian structure on the group of diffeomorphisms, and we assume that it is invariant from the right with respect to elements of the volume-preserving subgroup. What can we do? And before I tell you what we can do: why are we doing this at all? What are the underlying questions — where does this come from?
Of course, I think it is sort of interesting in itself just to see how much of the Ebin–Marsden analysis goes through if you drop the full right-invariance assumption. That's one motivation. The other one — and that's how we started with this — comes from the Otto calculus framework, which Boris also told you about. Let me summarize that; my point of view is the same picture we had before. This is the manifold of diffeomorphisms; it has a group structure, so at some point there is the identity. Then you take the projection: you fix some reference density — and here I actually think of densities not as volume forms but as functions relative to some fixed volume form on your manifold — and the projection, given by the left action of the diffeomorphisms on the corresponding volume form, takes you down to these probability densities (we assume they integrate to one). By a result of Moser, this is in fact a principal bundle, so it gives us a fibration of the space of diffeomorphisms. This works in the Fréchet category, when you have smooth diffeomorphisms; it also works in the Sobolev category — well, with some restrictions, maybe it is not a smooth principal bundle. Okay. So what is the Otto calculus? Well, you take the L2 metric, given by the expression here — the same one that was talked about earlier — and because the metric is right-invariant it induces a metric on this quotient, on the probability densities. This is sometimes called the Otto metric, and the distance of this metric generates the L2 Wasserstein distance. So that's the point, and the open problem here, which has already been addressed by some authors here, is how to generalize this to higher order. That's one problem, stated in the 800-page book by Villani: to try to find generalizations of this. So that's one motivation. The other one comes from shallow water equations. In fact, there is a very close relationship between shallow water equations and compressible Euler equations. This equation up here is sort of the simplest shallow water equation: h describes the depth, or the height of the wave if you like, and v is the horizontal velocity on the surface. So the simplest model looks like this: you have the evolution of a vector field — say one- or two-dimensional, typically — and then you also have the evolution of the height function, which is transported like a density.
Okay, and the energy functional here contains two terms. The first one is just h times v squared — so this is like a kinetic energy term — and then you have a potential as well, which is just h squared. This is the simplest possible one, and if you like you can also think of this as a simple compressible fluid, where this is the internal energy that Boris was talking about before, and h is the density — h would be rho. The corresponding system can of course be viewed as a Newton system, and the underlying Lagrangian on the tangent bundle of the diffeomorphisms would look something like this, where phi and phi-dot are elements of the tangent bundle: again you have this standard metric plus some potential that only depends on how the diffeomorphism acts on the reference density. But this is only the simplest model. What people tend to do in the shallow water field is to find more refined models. One of them is the Green–Naghdi equations: the first part here is just the standard one, and then you add these terms to correct it, to make a better model; the transport of the height function h is still the same. If you look at this from the point of view of energy, the energy functional still has this term, but then you have correction terms that depend on the derivatives of your vector field v. Now, the first two terms — still quadratic in v and positive — can be interpreted as the kinetic energy, and this is still the same potential you had before. So if you think of this as a Lagrangian on the tangent bundle of the diffeomorphisms, it would correspond to an H1 metric plus a potential. In fact, if you look at it you see that it comes from just a Taylor expansion, so if you want even more refined models you can continue this and get even more derivatives on your vector field. — Yes? — Oh, so this is just the adjoint of nabla with respect to L2. — So a generalized divergence, I guess? — Well, let's see — not exactly, it depends on the curvature; it is, if the manifold is flat. — Okay. So you can generate even more refined models, and that corresponds to having higher-order metrics. — Yes, and as I said, this one just corresponds to an H1 metric. — Okay, so let's get down to business and see which kind of metric structures we want to study. The first thing, which I already stated implicitly, is that on the tangent bundle of your diffeomorphisms you have a symmetry given by the volume-preserving diffeomorphisms, and if you look at the reduced phase space, it consists of just a vector field and a density — that is what we have seen. So all of these will be equations involving a vector field and a density, and the Lagrangians that I consider on the tangent bundle will always come from, let's say, a small Lagrangian depending just on the velocity field — the vector field — and the density. In fact, I will take them in this Newtonian form: one term corresponding to the kinetic energy, which contains the inertia operator, and another one corresponding to the potential energy.
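Written out, the simplest shallow-water model described above and its energy look as follows (my transcription; h is the depth, v the horizontal velocity, and, as noted, it is a barotropic fluid with rho = h and internal energy e(rho) = rho/2):

```latex
% Simplest shallow-water system and its energy functional. Requires amsmath.
\[
  \partial_t v + \nabla_v v = -\nabla h, \qquad
  \partial_t h + \operatorname{div}(h\,v) = 0, \qquad
  E(h,v) = \frac12\int h\,|v|^2\,dx + \frac12\int h^2\,dx .
\]
% On the group side this is a Newton-type Lagrangian: the L^2 kinetic energy of the
% diffeomorphism plus a potential depending only on the transported density; the
% Green-Naghdi corrections add |Dv|^2-type terms, i.e. an H^1-type metric.
```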
So that's the setup, and once you have that you can derive the equations of motion. The governing equations will look like this: m is the momentum here, which you get by applying the inertia operator to the velocity field — but now, of course, this operator may depend on rho; that's the point, and that's the difference from the fully right-invariant setting. Many of you here are used to EPDiff, which corresponds to the case of a fully right-invariant metric, so what is the difference? Essentially these three things. The first is that A here depends on rho. The second is that you get this extra term, which is rho times the gradient of the derivative of your Lagrangian with respect to the density — notice this is non-zero even when the potential is zero, because the metric itself depends on rho. And then you have the transport equation for rho, so now there is a coupling between the two, because m depends on rho in this fashion. Okay, so we want to say something first about local results for these types of equations, and of course in this generality we can't say anything; we have to specify what our inertia operator is. So let's discuss some natural assumptions on it. Here are the three natural assumptions we came up with, and I will try to explain how this differs from what you get in the fully right-invariant case. The first one is that this operator, which takes your vector field and your rho and assigns the momentum to the vector field, is a smooth differential operator in both rho and v — in rho it can be nonlinear, but of course it is linear in v — and it should be of order 2k minus 2 in rho and 2k in v, where k is an integer; then we call this a metric of order k. That is the first assumption. The second is that for any smooth density — so now I fix the density — the inertia operator should be elliptic; we need to be able to invert it in a nice way. The third one, which is a little bit more involved: we also need to say something about the derivative with respect to rho. If we take the inertia operator and differentiate it with respect to rho, we get something that takes a v and a rho and gives a map from C-infinity to the space of vector fields. We require that the L2 adjoint of that mapping is also a smooth differential operator, of order 2k minus 2 in the rho variable and 2k minus 1 in v and u. Okay, at first sight this looks a bit strange, but let me say that these conditions are sharp, in the sense that if you take away any of them you will find metrics where you don't get local results — at least not via the Ebin–Marsden technique — and if you relax any of the orders 2k of the differential operators, it also doesn't work; you can find counterexamples.
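For orientation, here is how I would write the reduced system just described, with m = A(rho) v the momentum; the precise form of the transport term and the sign conventions are my assumptions, not a quote from the slides.

```latex
% Reduced (Eulerian) equations for a Diff_mu-invariant metric with rho-dependent
% inertia operator A(rho); transport term and signs are assumed here.
\[
  \begin{cases}
    \partial_t m + \operatorname{ad}^*_v m
      \;=\; \rho\,\nabla\!\Big(\dfrac{\partial \ell}{\partial \rho}\Big),
      \qquad m = A(\rho)\,v,\\[6pt]
    \partial_t \rho + \operatorname{div}(\rho\,v) = 0 .
  \end{cases}
\]
% The right-hand side is the extra term mentioned above: it survives even with zero
% potential, because the inertia operator A itself depends on rho.
```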
So let's be more specific, and let me give you one case where this works — one metric for which it works — and one case where it doesn't. The first example: I specify my metric, and it's enough to do it in terms of just rho and the velocity variable v. It is defined through this sum here, where the a_i are just functions taking a positive number and returning a positive number, and I construct it using covariant derivatives like this. This will be a metric of order k, and if I impose the following conditions — either k is 1 and I require that a_0 is strictly positive and a_1 is a positive constant, uniformly, so it cannot depend on rho at all; or k is greater than or equal to 2 and the only condition is that a_0 and a_k are strictly positive — then it fulfills those assumptions. So notice that the case k equals 1, the H1 metric, has to be treated specially. This is one of the first major differences from the standard setting: something funny happens even for H1, whereas for fully right-invariant metrics, when you have H1, you normally expect things to work out. — So in that example, the Green–Naghdi equations: for the shallow water expansion, you had better take it up to the second term? — Good observation. The observation that Boris made is that that equation does not fulfill this, so in fact for Green–Naghdi I think we cannot use just the second term. — We can? — Yeah, if you include more terms, yes, we can, because then this is enough; that's something that comes out when you do the analysis. So that was a good example; let's now take the counterexample. Let L be any nice, say elliptic, invertible differential operator, and construct your metric just like this. Okay, why would you want to do it like this? There is actually a reason, and it goes back to some earlier work: this metric, although it doesn't have these nice smoothness properties, has very good convexity properties — that's something we're working on now. But anyway, this is one that does not fulfill those conditions. Yeah, let's see. The first result is the following. We take M to be compact — for simplicity without boundary, as in the paper — we take metrics that fulfill all the assumptions I stated some slides ago, and we let the Sobolev index s be strictly larger than d over 2 — d is the dimension of the manifold — plus 2k. Then we get local well-posedness of the geodesic equation on the Sobolev completion Diff^s. Notice the 2k here: this is different from what you have in the fully right-invariant case, and it is necessary — that is another of the pitfalls in the analysis, that you really need more smoothness in order for this to work. Okay. This is for the geodesic equations, when you have no potential function; but in fact, if you go through the proof, it is easy to see that it's not a problem to add a potential, as long as it doesn't differentiate rho too much — what this means is that if you take the variational derivative of your potential, it should be a differential operator of order at most 2k minus 2. If that's true, then you also get local well-posedness when you have a potential. Okay. So that is, I would say, maybe the main result: the local well-posedness result.
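The family of metrics in the first example, written out as I read it from the slide (the a_i are functions from the positive reals to the positive reals, and nabla^i denotes the i-th covariant derivative):

```latex
% Order-k semi-invariant metrics built from covariant derivatives. Requires amsmath.
\[
  \langle v, v\rangle_{\rho}
  \;=\; \sum_{i=0}^{k}\ \int_M a_i(\rho)\,\big|\nabla^i v\big|^2\,\mu ,
  \qquad a_i:\mathbb{R}_{>0}\to\mathbb{R}_{>0}.
\]
% Conditions quoted in the talk: for k = 1, a_0 > 0 and a_1 a positive constant
% (independent of rho); for k >= 2 it suffices that a_0 and a_k are strictly positive.
```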
Let's now talk about Going to global results for these type of metrics and the first thing is that the standard let's say Arguments when you're on a league group or some some group structure and you have a metric that is strong But normally you can you can get a global existence of your desks and this is essentially how it works you have Let's say we start at the identity And we have some some local results saying that as long as your initial condition you're not initial father velocity Is in this ball here? There's a domain such that we have existence here right so then we can move along this curve and end up somewhere Right and then the argument is that if the metric is strong Then the velocity at the end point here will still remain in this ball Oh, they're epsilon ball of the same size so then you can just take that velocity translate it back again to the identity Computer geodesic as far as it goes and then copy back right so then you can iterate this process and you get global existence, but now Of course, this doesn't work when things are not fully right invariant because your epsilon Here will depend on row and row. It's not right. I mean it's it's move is changing right? So so so you cannot use this simple technique and actually you can ask yourself can we even expect? global existence So so the question is if you have Any same invariant metric of the type I just Showed you and it's the order is high enough. Will you always have a global existence of your basics? And the question is can you can you expect that I don't really know I don't have a mathematical argument But I have a physical one and it comes from shallow water equation So really in shallow water models you expect that you get some way breaking So so maybe you cannot even expect to get the global existence But at least existence up to way breaking or something like that So this is something to study what we did obtain however was let's say like a cheap global result and what we did is we take the Dementric of the form that I specified as the first example and then we make some assumptions on on these coefficient functions So the first one is that a zero is great strictly greater than some positive constant and the same for the last one a k and then Things work out we get that the diffeomorph is Def k this the case of the completion with this metric is complete metric space and it's just equally complete You have global geosics and the proof is essentially just that this metric is uniformly stronger than a fully right invariant metric that is strong So so there's no deep thing going on here. It's just comparing it with it with another metric Okay That's what I wanted to say about the global results now There is one more key thing in the ebb and Mars in paper And it's the fact that you don't lock yourself at just the specific s You want to to prove that this works when the sublum index s also goes to infinity so that this in fact works In the fresh air setting of smooth diffeomorphism So this result is sometimes called no loss no gain it means that if you have some existence results and you start with initial data that is Great which has more smoothness than what is required from your existence result Then in fact the smoothness is retained in the solution So if you start with hs plus one the solution would be hs plus one So so the question is does this work and and yeah, this works Absolutely the same for semi invariant metrics. 
So the statement is the following: you take just any vector field on the tangent bundle which extends to a smooth vector field on the Sobolev completion for every s greater than some s_0. Then, if you have initial conditions with higher regularity than required, the maximal existence interval is the same as the existence interval for the smaller s. And, well, what is the proof? The proof is essentially to carefully read what is in the Ebin and Marsden paper. Sorry, oh, yeah, okay, that's important: the vector field has to be equivariant; that's the key, otherwise this will not work. I should also explain what J is: it is the existence interval, yes. Okay, so can you also increase s? In this space with which you're working, for the exponential map or the solution operator, I think the most important thing is whether you get an open set of the same size, or whether the size shrinks, for example whether the size of that set depends on s. No, I don't think so; I think the way we stated it is that we get the same result as in Ebin and Marsden, but probably not more than that. Yeah, okay, but anyway, it's essentially contained there, just that they didn't state it. And of course, if you let s go to infinity, you get the corresponding result in the Fréchet setting, where it holds as well. Okay. So yeah, that's essentially what I wanted to say. What is the outlook? Well, we want to maybe do more sophisticated global analysis and study things like, yes, of course you can also ask questions about Fredholmness, and you can study the sectional curvature for these kinds of metrics, so maybe that's something interesting. The other thing is what has already been studied elsewhere: gradient flows with respect to this Riemannian structure. Because of the invariance properties of our metrics, you get a big class of Riemannian metrics on probability densities, so you can do the same thing and study gradient flows, and other things that might lead to something interesting. Okay, thank you. So, two questions. What is the impact of this in the context here, where we have this setting and the different setups? So, if I understand your question correctly, you want to add some viscosity to this and see what happens. And I think, at least for me, this would be like adding a force that is not coming from a potential. So now the question is whether that force is going to be smooth when you translate it, you know, to the whole bundle on the group, and a result like this is already given, I think, in the Ebin and Marsden paper for some types of dissipation; correct me if I'm wrong, there are many experts here, but we haven't looked into that, possibly, I don't know. Is it a question of the number of derivatives, for example? Yes, and that's why we get some nice results. Then a related thing is: these are Hamiltonian equations, so here not diffusion but dispersion may help. So I'm not even sure if, say, Green-Naghdi has a dispersion relation for free; there may be some weak dispersion, like a very weak dispersion, but KdV and other shallow water equations have strong dispersion, and then you can use, for example, completely different techniques: you can tailor the function space in which you are studying the problem to the linear term of the equation.
This is what we're getting in this series of papers in the 90s He came out with a so-called light spaces or space time space time spaces where he got global existence even for a to initial data and used of course the strong And in analytic fallacies using so-called spear But if in your context if you get these some of these equations with strong dispersion then maybe this is the methods to Not so They don't distinguish between the actions Slavery they're good for certain types of equations, but may not be good for others So yeah, so once I one thing I forgot to mention is for the green nag the equation first like Boris pointed out our Existence result doesn't work for green because of this. We need this extra condition on the coefficient function and the other thing is that people have already of course Analyze this equation. So so so local existence of solutions is well known for green It's well known for green magma already Other questions? So what about the noro's no-gain results when you look at the boundary? Same as for the different one, for fully writing the right matrix is also problematic. You're not better. What do you mean by problematic? Yes, you mean like convexity or something like this? No, I mean If you start which are given initial and final configurations which are smooth enough, let's say hn and greater than s You have any results? Every jubilance It's the same problem as in the fully writing by itself As long as there are no conjugate points along the jubilance You can go higher in its days. Yes, but there's no global results in it. There is something from very special cases. I think on Toros by No capitalizing. We take something which But I think that's the way Toros by Not in full generality as long as they're the best thing that I know is from you effects There you have something. Oh, is it only by markings? Okay, yeah, so and that is the result with no conjugate points Yes, we don't expect the first batch I I'll comment about what you guys talking so the finite dimension of this is like different product Problem What you're doing, right? Yeah, so so that and and then I want you to say that the potential that you have that this electric operator, right? 
It could be like the equivalent of the moments of inertia in the rigid body being shifted So in this way one can control a rigid body certain positions, right The tumbling where you can move the rigid body by controlling let's say two beats a lot of Along one of these axes of moment of inertia And what I want to say related to your question is that the shooting right would be affected by the way you change the role So for different roles, you will get you so I'm trying to say that you can rock and control your flow In a way that in a rigid body moving the beats you're going to control the position of the pipe Yes, that's that's Okay, but my question was a little bit off right I was wondering are there any peak on solutions like the Kamasa hole You know that kind of You know this thing We could look at them I mean they go through and body problems So, I mean if you take if you take the Compressible or an equation that's the simplest the simplest example right it's barotropic equation the one Boris was talking about It could be a first place to start Yeah, you don't have What do you have So In my dimension maybe to the No So Yes We yes, they started because for Number of different inertial operators that are given by maybe the pseudo differential Yeah, but still right So this My intuition on this would be something like this so because how do you construct the peak on solutions in the fully writing Variant in case one way to look at this just take take the action Of the diffio on some finite dimensional space right so the points just moving around some points, right? And then you get the momentum up and the units the inertia turns to invert The rock pulses and you get something right so but now you also have the density So I would expect that you get the movement of some densities plus some point vortices that are moving around Yes, so there seems to be But the density changes are going to change the weight But it's not even clear that I think Laurent you mentioned that it's possible that The direct delta dissipates And leaves the space immediate So if that happens then there's not conserved anymore you need to show that you are Delta the concern no, but look it's like this right Disappear Formally it's like this right you start with you start with the action of your diffium group on something and now the action will be on On some on some finite dimensional space corresponding to the point vortices times Let's say the densities right and then you lift that action to to a Hamiltonian action And you compute the momentum up and and that should be at least formally that should be some sort of Simplactic leaf where things are moving around right so so I expect that At least formally this is something we're drawing it and I'm just trying to say you can write It sounds For the discussion for the long break and we'll take it right now. Thanks class So let me just mention that there's a guided tour at one p.m. After lunch
We investigate a generalization of cubic splines to Riemannian manifolds. Spline curves are defined as minimizers of the spline energy-a combination of the Riemannian path energy and the time integral of the squared covariant derivative of the path velocity-under suitable interpolation conditions. A variational time discretization for the spline energy leads to a constrained optimization problem over discrete paths on the manifold. Existence of continuous and discrete spline curves is established using the direct method in the calculus of variations. Furthermore, the convergence of discrete spline paths to a continuous spline curve follows from the Γ-convergence of the discrete to the continuous spline energy. Finally, selected example settings are discussed, including splines on embedded finite-dimensional manifolds, on a high-dimensional manifold of discrete shells with applications in surface processing, and on the infinite-dimensional shape manifold of viscous rods.
10.5446/59226 (DOI)
We'll talk about some relations between optimal transport and stochastic management mechanics. I must say that this talk will not be very technical in the sense that, well, let's say, for example, each time I write a PDE, I won't tell you if you're in what sense the solution might be found or if it's decent or not. I just wanted to focus on these relations and open the way for new problems, which some of them are being studied, some not yet. But it's more to explain the idea of these relations rather than be very technical. OK, so it is based on various joint works, some with Marc-Arnaudon, the president and Chin-Chen, those with Jean-Claude Zambrini. And there are also, I just referred that I'm not going to talk about this, but there are some extensions of these results which have applications with fluid, notably the infinitely dimensional counterpart of what I'm going to say. Part of them in collaboration with the RATU, part of them with the other authors, plus Christiane Lyonard, also here present. But these extensions somehow go into, OK, so there are some extensions that go, so these part of the work are extensions of stochastic geometric mechanics, and these are extensions of the optimal transport part. But they don't connect as the basic results in finite dimensions, at least as I'm going to explain. So I just wanted to refer in there that the endanguier view is unresolved. OK, so let me start by recalling the, probably everybody knows this here, but OK, since it's after lunch and maybe won't rest a little bit, so let's start slowly and recall the most counter of each automotive point problem in the flat case. So very, very simple just to recall the problem. So it consists in minimizing this cost functional, so the distance between the points. Let's say I'm in a flat case here, just for it to be simple. And they minimize in the joint distributions. So I'm integrating here in the cross-pace, the cross-product space. And these are joint distributions, such that the marginals along x and y, the coordinates, are given by priority by some, let's say, probability measures. And since this cost, let's say, in control terms, and since this distance can be also written as the infimum of the autonormal velocity of path, which started x and ended y, then if you average this quantity, you get a higher value. Therefore, the most counter of each problem is equivalent to minimizing this quantity. Pxy here means a probability measure on this path where the marginals are starting. So you start from x and you end up y. Therefore, the marginals, there are measures at x and y. And then by these integrations, this means that you are minimizing over probability space, probability measures on the path space of this quantity. OK. This is, in very short terms, what the marginals are of each can look like. And then, of course, you have the sortable arrangement, version of the problem. The other version of the problem consists in which, if everything goes right, in the sense that, as I said, I'm not here saying when can solve the problem or in what kind of assumptions, regularity assumptions I have. But if everything goes right, it turns out to be equivalent to minimizing the velocity. Or, of course, the velocity is computed along the integral flow with respect to the velocities, such that the loss of the underlying flows are, again, at initial n times given by some probabilities measures. 
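To make the quadratic Monge-Kantorovich cost concrete, here is a minimal numerical sketch in one dimension; the two sample distributions are invented, and the shortcut relies on the standard fact that in one dimension the optimal coupling for the quadratic cost is the monotone (sorted) one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy empirical measures on R (equal sample sizes for simplicity).
x = rng.normal(loc=0.0, scale=1.0, size=5000)   # samples of alpha
y = rng.normal(loc=2.0, scale=0.5, size=5000)   # samples of beta

# In 1-D the optimal coupling for the quadratic cost is the monotone
# (quantile) coupling, so W_2^2 is obtained by matching sorted samples.
w2_squared = np.mean((np.sort(x) - np.sort(y)) ** 2)
print("W_2^2(alpha, beta) ~", w2_squared)

# Displacement interpolation: move each sorted sample at constant speed,
# mu_s being the law of (1 - s) X + s T(X) with T the monotone map.
def displacement_interpolation(xs, ys, s):
    xs, ys = np.sort(xs), np.sort(ys)
    return (1.0 - s) * xs + s * ys

mid = displacement_interpolation(x, y, 0.5)
print("mean of mu_{1/2}:", mid.mean())
```

For these Gaussian marginals the printed value should be close to (2 - 0)^2 + (1 - 0.5)^2 = 4.25, the exact quadratic transport cost between the two normal laws.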
And if these measures are absolutely continuous with respect to the Lebesgue measure here, and I mean the flat space, then this density function at each time, if it exists again, will verify this equation, which is the continuity equation. OK. So this much for the standard Moroshka-Tarovich problem, which is, of course, well, except that you deal with probability measures, but it's started deterministic control problems. Now, oh, OK. Again, also, I add if I solve the Hamilton-Jackoby equation for psi, and I have this continuity equation, then this problem can be solved via velocity, which is gradient upside. So this is the equation for the velocity, and this will be the equation for the upside potential function of psi. OK. So if you want to turn this problem into a stochastic problem, OK, this is one way to view it. But if you want to think about now the underlying pass that I was talking about not being deterministic pass, but being stochastic pass, so you have here a sort of random fluctuation, like even by some wrong in motion. This is the standard perturbation we put. I have put here just in the two or three slides some epsilon just to stress that this can be a perturbation, and that normally epsilon goes to 0. Everything goes to the monstrosity problem that I was talking before. So you recall the notion of the entropy of a measure with respect to another measure, which is this quantity here. So let's choose B to be the law of the Wiener processor, this Brownian motion. OK. And so Q, the law of the diffusion, which is driven by the Brownian motion, plus some drift. Now, it turns out, and this is a central thing in your theory, that the entropy of the law of this function with respect to the law of the without drift, so of the Brownian motion, is the entropy of the initial conditions, the initial marginals, plus 1 half of the, so the L2 norm of the drift here. And this is, OK, I'll come back to this a little bit later, but this is a consequence of the well-known theorem in psychotic analysis, which is the synopsis. So what is the sort of generalization of the most counter-efficient problem for stochastic path? It's called, in most, in many frameworks, in which the Schrodinger problem comes from Schrodinger and in relation to quantum mechanics. But anyway, it's consisting, minimizing the entropy function, I was talking about, subject to given marginals q0 and q1. If y, so remember that y, that y was the drift, so if epsilon is not there, this would be the derivative of the path, simply. If y is a function of the process, the density, again, if everything is solved and exists and then we are in the right regularity assumptions, then the density will verify the continuity equation per term by this term, which is given in terms of Laplacian, as the Laplacian generates the random motion. So we have the generalization. And if you have v equals to gradient of 5 solving, now the Hamilton-Jacques-Bellman equation, which has this extra Laplacian term here, then you have the analog, so you have here again an extra Laplacian, which gives the per-derivals. So you see that if you put brutally epsilon equals to 0 here, everything goes back to the most kind of each problem. Now, if the question whether you have, and in what sense you have limits of these solutions of this problem when epsilon goes to 0 is a very delicate problem that I'm not going to talk about, but some remarks. 
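As an aside, the static, discretised counterpart of this entropic problem can be solved by Sinkhorn iterations, which also illustrates the epsilon-to-zero limit just mentioned; the grid, the marginals and the epsilon values below are my own toy choices, and the static formulation (entropic regularisation of the coupling) is only the standard finite-dimensional analogue of the path-space problem of the talk.

```python
import numpy as np

def sinkhorn(mu0, mu1, C, eps, n_iter=2000):
    """Static entropic / Schrodinger problem on a finite grid:
    minimize <pi, C> + eps * KL(pi | mu0 x mu1) over couplings pi."""
    K = np.exp(-C / eps)              # Gibbs kernel
    u = np.ones_like(mu0)
    v = np.ones_like(mu1)
    for _ in range(n_iter):
        u = mu0 / (K @ v)
        v = mu1 / (K.T @ u)
    return u[:, None] * K * v[None, :]   # approximate optimal coupling

# Toy marginals on a 1-D grid.
grid = np.linspace(-3, 3, 200)
mu0 = np.exp(-0.5 * (grid + 1.0) ** 2); mu0 /= mu0.sum()
mu1 = np.exp(-0.5 * (grid - 1.0) ** 2 / 0.25); mu1 /= mu1.sum()
C = (grid[:, None] - grid[None, :]) ** 2          # quadratic cost

for eps in (1.0, 0.3, 0.1):
    pi = sinkhorn(mu0, mu1, C, eps)
    print(f"eps={eps:4.1f}  transport cost <pi,C> = {np.sum(pi * C):.4f}")
# As eps decreases the entropic coupling concentrates and the cost
# approaches the Monge-Kantorovich value, illustrating the eps -> 0 limit.
```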
OK, so first that you can, by making a change, a logarithm change of variable, you can transform the Hamilton-Jacques-Bellman into a heat equation. And the other is that, again, when epsilon goes to 0, formally solving the problem converges to the optimal transporter problem. But this is a constant formula level if you want to make it rigorous. You have some works of Christian Lerner, for example, that work on this subject. OK, so you have these two problems, Mochan Korovic and the generalization, which is called Schrodinger problem. Now, what does it have to do with geometric mechanics or with stochastic geometric mechanics? OK, so now I will place myself into the context of league groups. And somehow I'm going to make the Schrodinger problem, but in the context of league groups. OK, so let's do framework. So you take g to be a league group. I'm taking here a right invariant metric, but you can take a left invariant and then, well, some changes signs and some changes of some operators. But anyway, take a right invariant metric. Now, we'll be able to have a g-b to connection, e to identity, and the algebra is called Karol G. Now you take normal basis of these three algebra, special one. It's not that you cannot take a more general one, but I want it to be a clean result as easy as possible. So I'm going to make these assumptions on this vector field of these three algebra. So right invariant and such that covariant derivative of h i with respect to h i is equal to 0 for n of this, and an ordinal basis. I'm supposing here that everything is fine in dimensional analysis because everything will work fine. For my point of view, the most interesting results is R when g is infinite dimensional, notably when g is the defiomophase group of some n group. I haven't studied anything, but I guess so. So one of the examples is the defiomophase group where morally any group will do. But if you want to do the analysis, initially to mention, you have to go case by case because some things are not well defined, some things are different, so it has to be case by case. I guess loop with an interesting case that you can deal with. I can tell you. Yeah, right. But in some sense, we were also motivated by two weeks and defiomophase groups who came out of this equation. We think like this at the beginning, so this was another group. But a priori, every group, the machinery in finite dimension essentially works, and then I'm going to state it here. And then if you want to adapt it to each instant marginal case, you have to work each case. But there is no fundamental reason for it not to work. There can be technical details. Kind of different. More of the, of course, things will be measured very much alike. OK, so what does the Brownian motion on some G-league group looks like? So the Brownian motion, which generated a given by the plus delta B. Well, it can be written like this. So G with a little 0 upstairs will be the Brownian motion. And the stochastic differential equation that it obeys, it can be given. So in terms of these bases, because I show here an orthonormal basis. So it's the tangent map with the right translation here. Again, it could be left of the sum in this direction of this orthonormal basis. And because of this condition here, it's the same to use here this little circle means Trotanovich's integral. OK, there must be some definition of some integral. There is Trotanovich and there is it. And as I'm making this assumption here, it's the same. So the contraction will be same. 
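As a toy numerical illustration of this construction, here is a sketch of right-invariant Brownian motion on SO(3); the choice of group is mine, not the one in the talk, the scheme is only a first-order geometric Euler step g_{k+1} = exp(dB . H) g_k, and for the bi-invariant metric on SO(3) the condition that the covariant derivative of each basis field along itself vanishes does hold, so the Ito versus Stratonovich issue mentioned above does not bite here.

```python
import numpy as np
from scipy.linalg import expm

# A basis of so(3) (my normalisation choice).
H = np.array([
    [[0, 0, 0], [0, 0, -1], [0, 1, 0]],
    [[0, 0, 1], [0, 0, 0], [-1, 0, 0]],
    [[0, -1, 0], [1, 0, 0], [0, 0, 0]],
], dtype=float)

def brownian_motion_so3(T=1.0, n_steps=1000, rng=None):
    """Right-invariant Brownian motion on SO(3):
    dg = (sum_i H_i o dB^i) g, discretised as g_{k+1} = expm(dB . H) g_k."""
    if rng is None:
        rng = np.random.default_rng(0)
    dt = T / n_steps
    g = np.eye(3)
    path = [g]
    for _ in range(n_steps):
        dB = rng.normal(scale=np.sqrt(dt), size=3)
        xi = np.tensordot(dB, H, axes=1)   # Lie algebra increment
        g = expm(xi) @ g                   # right-invariant update
        path.append(g)
    return np.array(path)

path = brownian_motion_so3()
# Sanity check: the path stays on the group (orthogonality preserved).
print(np.max(np.abs(path[-1] @ path[-1].T - np.eye(3))))
```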
Again, it doesn't have to be so, but I'm choosing the left most simple model to explain the relations that one can see between these problems and not going into what difficult frameworks. But it's not an essential thing here. OK, so this G with a little 0 G-league not will be the standard Brownian motion on G with this in the right invariant case. And then a general diffusion process that we call G is the Brownian motion plus some drift. Again, as we saw in the flat case. This drift is an extension is indeed, OK, I can just call it like this. But I'm going to denote it by the nabla G D T just because it's more suggestive in the sense that, well, if again, if there was not this term, this would be just the derivative in time. And here it's a sort of regularized derivative in time. So this object normally you define it like this. So you have G, you transport it back with the parallel transport on the group. Transport it to the origin, you call this xi. And so this is the usual parallel transport over the gas expanse. So it was defined by Eto, I mean, to a shown to exist by Eto one time ago. And you call and at the origin you make some derivative in time, which is more or less the usual derivative, but with a conditional expectation. So it kills the marginal part. And then you transport it back to where you were. And so this justifies in some sense why one looks at this is sort of derivative, the drift. But OK, this will coincide for this case with the TLE RG of U. OK. Now, G's Sano's theorem says the law Q of this process with a drift U. And again, I will need some assumption on U, of course. And I will with G only assumptions. But if I have some essentially out to assumption on U, the G's Sano's theorem says that the law of this process is absolutely continuous. The law on the pass space, right? On the continuous assumptions, all of these values on the legal. The law is absolutely continuous to the law of the Brownian motion without the drift. So G not if they have the same initial margin distribution. And the density dQ with respect to P is given by the exponential of essentially this drift integrated against the Brownian motion minus 1 half of the square norm of the drift. So this is the basic thing that makes the connection, that allows to make the connection between entropy and the pentadentry and its expression. Because the entropy is therefore essentially if you have the same initial distribution. It is therefore even because you remember the entropy has a logarithm, so it drops down the exponential. And then it has an integral, I mean the expectation. And the expectation kills this martingale part, this stochastic integral part. Therefore, you are left just with this. And the entropy coincides exactly. So the entropy of the measure Q with respect to P on the pass space coincides exactly with the kinetic energy after all. I mean, somehow the general line in the sense that the derivative in time is not the derivative in time, is regularized, is the drift. But coincides is a sort of kinetic energy. Therefore, you see that the Schrodinger problem, so minimizing the entropy, coincists in minimizing what is essentially the generalization of the Elton-Rohmer-Legloss. With this regularization, because you are working with non-differential group but everything looks sort of natural. Now I will need some. How much time do I have? Three. Two three? Oh, OK. Oh, sorry. Two three. Oh, OK. Because otherwise, I will get the three myself. No, no, no. I think it's two three. OK. Then it's OK. 
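Spelled out in the flat normalisation (my convention; the constants may differ from the slides), the Girsanov computation just described reads as follows.

```latex
% Girsanov density and the resulting entropy formula (schematic).
\[
  \frac{dQ}{dP}
  \;=\; \exp\!\Big( \int_0^T \langle u_t,\, dB_t\rangle
        \;-\; \tfrac12 \int_0^T |u_t|^2\, dt \Big).
\]
% Under Q one has B_t = \tilde B_t + \int_0^t u_s\,ds with \tilde B a
% Q-Brownian motion, so E_Q \int <u, dB> = E_Q \int |u|^2 dt, and hence
\[
  H(Q \mid P) \;=\; \mathbb{E}_Q\Big[\log\tfrac{dQ}{dP}\Big]
  \;=\; \tfrac12\, \mathbb{E}_Q \!\int_0^T |u_t|^2\, dt
  \quad \big(+\, H(q_0 \mid p_0)\ \text{if the initial laws differ}\big),
\]
% i.e. minimising the entropy is minimising the averaged kinetic energy of
% the drift, which is the regularised action discussed here.
```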
So some notations. You denote by alpha the invariant measure on G. P of x, the law of the Brownian motion, G, 0, starting on x. And the transition semi-group corresponding to the Brownian motion by P and P2 by starting with probability measuring. We assume that U and sigma are absolutely continuously respect to alpha. And we denote by m mu sigma, all the probability margins on G times G with probability measures or probability distribution such that the marginals at initial time and final time are the given mu and sigma probability measures. And also some technical thing that pi is absolutely continuous with respect to mu plus P1, which is this transition. Again, for notations of this slide, sorry, it's just sort of notations. If you have some probability distribution pi in this space, you write P pi dw, the integral of the transition kernel of the Brownian motion with respect to this marginal P pi, which is pi, which is the probability measure on the past page. Now, there is a very general theorem in entropy theory that says that if you have probability measure in this space, such that the entropy with respect to mu times P1, so P1 is the transition of the semi-group, is finite, then there is one that attains the influence, so it minimizes the entropy with respect to q with respect to P2. So this one solves exactly the problem that I was stating, the Schrodinger problem. Or if you want, it minimizes the section function, which is essentially given by the energy. Moreover, so q is P, remember, P pi is finite. This, moreover, q is P of pi, and pi attains the influence of the entropy of P with respect to this measure with respect to P1. And also, it is a law of a marginal process. So this is a very general theorem. But if you, the answer to the existence of a minimum. Again, in this assumption, it's not everything is finite. OK, so we have the problem, Schrodinger problem, defined in a general e-group. We have stated it in terms of entropy. We know how to minimize. We know there exists a minimum of this entropy. And now we want to know what kind of equations think so. And then, OK, so again, I have already had said this, but if you recall, if you recall this entropy is given by this, I call it the action functional energy. OK, so we have this stochastic Euler-Panquerier reduction theorem. G is a process of this time, a diffusion process of this kind, like before. And a process like this is a critical point of value, a, so in particular, a minimum of this point, if and only if the drift as a vector, so the drift is defined by the direction of the vector field, you satisfy these equations. So now, if you recall, now I mean this geometric mechanics part. If you recall geometric mechanics in the deterministic case, you have exactly this, this equation here, and you don't have the k. Now, what is the k? It's something else that we have here. k is, well, the expression is here, but it's essentially the R-Moch Laplace operator. So that's the reach. That's the reach, yeah. So the thing that appears here in the equation of motion for the velocity is the operator produced with the R-Moch. So just give you a hint of the proof, but doesn't make any sense. The point is that, at least if you, when you are in finite dimensions, everything goes smoothly. You have this existence result, and at the same time, you have that it solves this equation. Then you have to work in order to obtain what kind of regularity you have for you and so on in what kind of sense. 
For example, in infinite dimensions, things get complicated, but you have to go step by step. Do the same thing. So essentially, the proof goes like this. You have a critical point, so you have to define the variation. So you vary it. We are in the right invariant framework. So you vary it on the left with respect to exponential functions. So essentially, we are variating in the direction of v's. v's, any vector field, c1, such that in order to keep the final and initial end end points fixed, you consider v of 0 equals to v of t equals to 0. So you differentiate. And then you apply this differentiator. So you have these variations of the initial pass g. And by it or for where you compute the differential of these variations. So this is essentially eto-formal. So there is this stochastic part. This one is already, these two are already in the third to be this one. So you only have this. Yeah, I mentioned here that there's no part coming from contention terms because I'm using the assumption that allows to have eto integral equals to the tannobit integral. Then, OK, so then you derivate in epsilon all these objects. And you plug this differentiator into the action function. And you get this part as it is already in the case of finite dimensions. Nothing more. Sorry, I did a deterministic case. Nothing more. Plus a term which is, well, not very elegant term. But if you work out the details in case of righty variant, it turns out that this thing is, I mean, it's Riemann and Calcoules. It turns out this object is indeed the drama watch of greater depth than I was in action. And if you put, of course, I already said that. If you put all the HG theory we recovered, you have a Poincare reduction theorem. So this must be seen as a stochastic geometric counterpoint. OK, you also have a possibility of making of changing variables. Again, you have to define exactly what this means and it goes through the Hamilton-Jakob development equation here with Laplace-Batrony operator. And some extra remark, and then I'll go in here. If in the assumptions of Cesar's theorem, there is an extra assumption, which is that pi, this pi not that exists, and this makes the entropy finite. If it's absolute, then you have to respect for new Cesar of P1, P1 being the transition semi-group of the Brownian motion. Then actually, the pi that solves the minimizer is of this form. And this psi, so again, it has a problem structure. And you can go through a pair of equations from this mu and sigma, which are keeping at the end, to this psi and phi through these equations. And this is called, OK, I don't know what this is called, but this is a problem that was solved in finite dimension, at least by Brownian also a long time ago. And it's kind of comfortable because instead of giving probability measures, you give initial and final functions. So sometimes it's easier to deal with. So just to end up with some reference, OK, so the geometric result, which I mentioned here, extensions to include effective quantities and going to infinite dimensions and so on is here. It's in archive. But this is extensions again in the stochastic geometric mechanics part. Concerning the optimal transport problems, Schrodinger problem, there are general things here. I mean, the general statements and properties of the Schrodinger problem, you can see this here. And here, this is the extension, which is also quite recent. 
But in the direction of the optimal transport problem, that in order to cover some equations of fluids, like the Navier-Solvstein problem. OK, so I think I finished here. Thank you for your attention. Thank you. Thank you. Thank you. Any questions? So do I get a thread that you get the best stock situation in this way? Yes. So could you explain? There was a paper by Hugo Gomez, I think at some point in the book. Sorry, may I precise in what sense I obtain Navier-Solvstein equation? OK, so we obtain Navier-Solvstein in the sense of the theorem of the stochastic color Poincare theorem. So if you have a critical point, then it solves Navier-Solvstein equation, a representation in this sense. Concerning existence, we don't obtain it because we are infinite. As I mentioned, there's a few problems. We don't obtain, exactly, we obtain some approximations by using the Schrodinger problem, but with imposing this at each time, this effect at the density with the conservativeness. And then we have some sort of, this is studied here. I mean, the real existence of the, this is studied here. And well, there are some restrictions. OK, now please go ahead. I was just wondering, there was a paper by Hugo Gomez at some point about the real process. I know, it has nothing to do with it, and it's not the same variational principle. It is a stochastic principle. It is a stochastic principle. It is also in the form of representation. Again, he doesn't show that this gives the solution of nothing though, as I don't hear in this stochastic method approach. But the action functions are completely different. I mean, variational principle is different. It's a follow up on the same question. There are many approaches, many different approaches to the model. No, it's in the model. For instance, there is this model by Italian form. Yeah, but then this is not for Navistok, say, or not for, this is not for PDE. This is for a stochastic PDE. That we can also do that here. But that's something else. Then I don't see the relation with transport problem, by the way. Yes, this is my question. No, because there the Lagrangian itself is randomized. So there are randomness at two levels in variational approach. And again, what you do here. There are the paths that are random, right, diffusion process. And the Lagrangian, because Lagrangian here, it is essentially the three, so the square. So the Lagrangian itself is computed on stochastic paths, but it's not stochastic form. Now, if you perturb stochastic Lagrangian to, then you have stochastic PDE. You don't have Navistok, for instance. You have stochastic Navistok. So that's different. And then I didn't speak about this in particular, because I don't see the relation with optimal transport. So I want it to be in a framework where there things could be viewed from one side or the other. So that was the most simple. And especially if you are in finite dimension, because then NaviCy, more or less, can be really constructed and result in problem. It's a simple problem. Did you apply this for a G taking SO2 or SO3, like having a [?]. We have examples. Yeah, we have examples. Those would be. But again, what those have been for the optimal transport, like the rigid body, optimal stochastic rigid body, or the rigid body in transport. We can do this. We can do this. I can give you formulas and so on. Now, the point is that I don't understand very well what does this mean in terms of rigid body. I wish Dario was here, because then he would make a speech about this. 
Physically, I don't understand very well what this means, but we can do that. That sounds perfectly OK. It's a finite dimensional example. No problem. If you have infinite dimensional G by finite dimensional noise, can you then say something about the analysis? Infinite dimensional G, but a finite dimensional noise. So finite, about the time. Yeah, yeah. So it's a cylindrical noise, right? So only a number of directions are perturbed, it's like me. Yeah. Yeah, it's the same problem. I mean, the same problems are already there. You can have only one direction perturbed. And how is it that the analysis is complicated? In order to prove the existence of the solution, yes. In order to prove the counterpart of the stochastic point-career result, it's not that complicated. You have to work out and see that things don't explode. OK, it's not that complicated. But the existence. Because the existence, either you go through the entropy approach, or you go through the PDE approach. I don't know. Maybe there is a third way. I don't see. The PDE approach, you finally end up with the same problems that PDE people have. The entropy approach, you end up with others. But that does not depend on the number of directions that you randomize. Perhaps. So I want to ask a different question. But maybe you comment on what you just said. Perhaps in some cases, the PDE approach becomes a linear approach. On the lines that you have, the sense that if you ask, don't work, don't do the reduction. Just work in the group. Is there a chance that some of these problems, in some cases, the interventions may become statistical means? And then, or something like that, something that would be. You mean at the Lagrangian level? Because at your level? Yes. Yes. Yes. The Lagrangian level. The group level. I mean, whatever the fact. Because the group level. Yeah. So now I'm coming back to you. What I said earlier, it seems to me that indeed, probably, trying out first, before even considering the few morphos groups, trying out some of the techniques here of what the analysis looks like in the case of the group group, some interesting problems. Yeah. Thank you. Because I didn't think about studying the group group, but you need to study it. Even then, if things can be worked out in the group in the Lagrangian cycle, the maybe even the path space is all right. Then I see some kind of connection to my account. Yeah. That's the obvious option. Yes. Yeah. And then the techniques will be new. The techniques new tools will be available. So I guess that I don't see existence in any of the terms there much easier to obtain. This is after all the Bama group. Yeah. Yeah. So that's very good. I would suggest that we shift further questions to the coffee break. We have 20 minutes time. And I guess there's something urgent that you would. Well, I just wanted to add to what there is a nice equation which is one of the other equations with the Laplacian that is exactly on loop groups. So we have one of the simplest examples on loop groups. The oil punker, he's talking about this. The Laplacian equation is a new thing. And probably you can recover it. This is a good question. OK. I'll talk to you. Let's give the big thanks. So we'll. And we'll start again in 20 minutes. So let me see if we can all talk to each other. Yeah.
We formulate the so-called Schrödinger problem of optimal transport on Lie groups and derive the corresponding Euler-Poincaré equations.
10.5446/59227 (DOI)
Thank the organizers for giving me the opportunity to be in dance and also to share a bit of mathematics with you. So I changed my title because in the first title I didn't write interpolations and I think maybe this part of the talk could be a little more interesting to people in the audience. So let's go. This is John Quirk with Ivan Gentil, Luigia Ripani, Luigia was a PhD student of Ivan and myself. Now she's in Novine with Tristan Georgiou and I also thank Jovelin Conforti for many stimulating conversations. And the aim is to build interpolations, meaning trajectories from a probability measure to another one, okay, and related to a dissipation mechanism which is given by a gradient flow. So the first part of the talk will be about gradient flows in P over R n. P over R n is probability measures on R n. You can play the same thing on a manifold and I just decided not to introduce the difficulties. Then the baby problem, interpolations in R n, typically geodesics, okay, in R n, interpolations in the probability space. Alright, so thank you, Nabila, for recalling about quadratocoptimal transport. So this is mainly an invitation. W2 square of alpha and beta, W2 is the best of diametric between alpha and beta, is the infimum of the, you see, the average of the square distance of the whole pi where pi is a coupling of alpha and beta, and beta being two probability measures. So an interpolation will be with time s between 0 and 1, and omega is the space of all paths between 0 and 1. Mu will be, I will call it the flow of probability measures, it's not a flow in the sense of oddies, simply because to give it a name, which is a path from 0, 1, 2, the probability space. And if you define this action on the flow as being the infimum over all the expectations of a p, p is a path measure, you see, it's a path measure, probability measure on omega on the path space, and you require that the marginal at time s is precisely u s, you take the infimum of this average stochastic action, then you get this result, taking the infimum, this is a geodesic formula. You can think of this as being the square of a real distance, and you have the action, usual genetic action. Okay. Displacement interpolations are the solutions to this problem, so you can see them mainly as constants with geodesic, and this is true in a metric sense, but this is not a geodesic on a remanin manifold, because p of r n with the structure of w2 is not really a remanin manifold, it looks like. And there are many difficulties behind these slugs, like, so there are words by Ambrosio, GD and Savare to get rid of all the regularity problems that you have on these displacement interpolations. Okay. Well, Bedamou-Brenier formula, Jean-David is here, so this alpha of the formula, is, well, you can see it again as a remanin equality, this is the, it looks like the square of the remanin distance, and what you get is the infimum over the square of some velocity, which is defined this way, so you take a velocity field such that the nu s, this is a flow which solves this equation, and the square of the velocity field of nu is precisely v, you take this infimum there, over all the velocity fields, and it gives you this square of a kind of remanin norm here. 
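For reference, the dynamical Benamou-Brenier formulation alluded to here reads as follows in the usual normalisation, which may differ from the slides by a constant factor.

```latex
\[
  W_2^2(\mu_0,\mu_1)
  \;=\; \inf\Big\{ \int_0^1\!\!\int_{\mathbb{R}^n} |v_s(x)|^2\,\nu_s(dx)\,ds
  \;:\; \partial_s \nu_s + \nabla\!\cdot(\nu_s v_s) = 0,\;
        \nu_0 = \mu_0,\ \nu_1 = \mu_1 \Big\},
\]
% the minimising flow (nu_s) being the displacement interpolation, a
% constant-speed geodesic for W_2, with optimal velocities given by
% (limits of) gradients, v_s = \nabla\psi_s.
```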
Okay, so this is the Benamou-Brenier formula, and what Felix Otto did, and these are the same kind of ideas that Boris presented to us this morning, is to take this formula seriously and to see that this nu dot is the best velocity field in some sense, the minimizer of this, and this can be seen as a tangent vector on P(R^n) for this structure, okay. This is completely heuristic, and giving a rigorous meaning to this takes one big book, two big books: one by Cédric Villani, and also the one by Ambrosio, Gigli and Savaré, okay; and this minimizer must be a gradient, well, I'm cheating a little bit, okay, in the closure, in the L^2 closure, of a space of gradient velocity fields, okay. So, as I said, these displacement interpolations are not regular, and we are in search of regular approximations, just to do some computations on them, and this will be performed by epsilon-entropic interpolations; I'm going to talk about this in a moment. So, this was a little introduction; now I switch to gradient flows. There is a beautiful result by Jordan, Kinderlehrer and Otto, telling us that the solution of the Fokker-Planck equation, for instance, if U' is 0, then you have the heat equation, okay, can be seen as a Wasserstein gradient flow, with respect to this Otto Riemannian structure, of some function which is an entropy. So m_t, you see, is a probability measure, you have a flow of probability measures, U is a potential, U' is the gradient of U, okay; under this assumption, where you have strict convexity, kappa non-negative, then m_t is a semigroup as a function of m_0, and also you have convergence, as t tends to infinity, to some equilibrium measure; so you put the right additive constant in U so that m_infinity is a probability measure, okay. So this Jordan-Kinderlehrer-Otto theorem tells us that m_t is the W_2 gradient flow of this function F, which is the relative entropy with respect to the equilibrium measure, and the relative entropy is given by this formula, we have seen this formula in Ana Bela's talk, okay, and the meaning of this is simply this, okay. So we have to give some meaning to this gradient, but this is quite natural by mimicking what happens in a Riemannian setting, okay. So this is a heuristic proposition, and they gave a rigorous meaning to all of this, okay, in their paper, which was in 1998. So what I'm going to present very quickly is that this m_t, the solution of this Fokker-Planck equation, is also the gradient flow of the same function F, but with respect to another structure, which is not a Riemannian structure, but which is generated by some large deviation costs. So large deviations are part of probability theory, and I will try to explain them very quickly. And this will give rise to regular interpolations, and as you can guess, when epsilon tends to zero, everything goes back to the displacement interpolations, okay, the non-regular interpolations. So, just as a warm-up now, let's switch to the simplest setting, where we consider what a gradient flow in R^n is. So we all know what it is, even if we didn't encounter it before. So, as a notation, omega is a path, and now I put infinity because for a gradient flow the time goes to infinity, and the interpolation is only on unit time, okay; but for the gradient flow you let the time tend to infinity, to reach some equilibrium, okay. And F is some function, which is differentiable.
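The discrete scheme behind the Jordan-Kinderlehrer-Otto result just mentioned can be written schematically as follows; the factor 1/(2 tau) is the usual normalisation and may be shifted by the conventions of the Fokker-Planck equation on the slides.

```latex
% Minimising-movement (JKO) scheme for F = relative entropy with respect
% to the equilibrium measure m_inf.
\[
  m^\tau_{k+1} \;\in\; \operatorname*{arg\,min}_{m \in \mathcal P_2(\mathbb R^n)}
  \Big\{ F(m) + \tfrac{1}{2\tau}\, W_2^2\big(m, m^\tau_k\big) \Big\},
  \qquad
  m^\tau_{\lfloor t/\tau\rfloor} \;\xrightarrow[\tau\to 0]{}\; m_t ,
\]
% where (m_t) solves the Fokker-Planck equation: this is the precise sense
% in which the flow is the W_2 gradient flow of the entropy.
```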
I call it the free energy by analogy with what happened in the set of probability measures, and this is the gradient flow equation with a minus. Okay, everything is all right if F is sufficiently convex, okay, and then you have a semicolon property, and this cap, K here, gives you a contraction property, so this is simple, absolutely basic undergraduate calculus here. So, you have this contraction, X and Y are the initial conditions, and everybody gets to the same place, and this place is the admin of F, which is the equilibrium state, so you have an exponentially fast convergence to the equilibrium state, okay. And if you differentiate F of omega t along the solution of the gradient flow, so what you get is this, simple calculus, you've got something which is always non-negative, so F of omega is a Lyapunov function for this dynamical system, these are big words for various things, but the interest here is to introduce this function, the square number of F prime as I, and I is the free energy dissipation, because you've got this. And in a moment, back to the probability, to the state space of probability measures, I will be efficient in formation, okay, this is the reason for I. Now, if you slow down your gradient flow, what you get is the calculus, it's simply that now you switch F to epsilon F, okay, and it's direct, and usually when you read a little bit about gradient flow and what physicists tell us, is that you can get a gradient flow as being an over-damped Hamiltonian system, so you have a very strong viscosity, Hamiltonian system plus a viscosity minus lambda times the velocity, lambda is positive, lambda tends to infinity, you re-normalize, and you kill something, and now what you get at the end is a gradient flow. So gradient flow is, you don't think of acceleration of a gradient flow, because you don't have any more the idea of getting a Newton equation. You've killed something. What we did is, okay, we did it, and what we obtained is an easy computation, clearly, and we've got a Newton equation. So the gradient flow is the solution of this Newton equation, where the potential is minus half the free energy dissipation. So let me draw this, suppose that F, you have a strictly complex function, F is your x star is your minus F prime square, this is here, so that's zero. This is the potential. So if you start from here with zero velocity, then you form this way, or you form this way, you go to infinity. But if you start from here with exactly the velocity which is given by, if you are here at x, and if the velocity is precisely minus F prime of x, then you will go here with an infinite time, very quickly at the beginning, exponentially quickly, and then this is, and you stop here at this unstable equilibrium. Okay, this is a strange way of looking at a solution of this gradient flow equation. But now we have a force field, and we are going to be in operations as Hamiltonian solving the Hamilton minimization principle in this force field. Okay, so what we know now is that along the gradient flow, the force field is the gradient of the free energy dissipation. So let's do it now. Let's build interpolations and relate it to this free energy dissipation force field. So this is the equation of the gradient flow, and this is, we have this system. If and only if, and this is completely basic, omega solves this minimization problem. And so the solution of this infimum is simply zero because put omega dot equals minus F prime, then you get zero. Omega dot equals minus F prime. 
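The computation behind the Newton-equation remark is short enough to spell out; the normalisation I = |grad F|^2 follows the talk, and everything else is elementary calculus.

```latex
\[
  \dot\omega = -\nabla F(\omega)
  \;\Longrightarrow\;
  \ddot\omega = -\nabla^2 F(\omega)\,\dot\omega
              = \nabla^2 F(\omega)\,\nabla F(\omega)
              = \tfrac12\,\nabla\big(|\nabla F|^2\big)(\omega)
              = \tfrac12\,\nabla \mathcal I(\omega),
\]
% with I = |\nabla F|^2 the free energy dissipation: the gradient flow moves
% in the force field with potential V = -I/2.  This is also why expanding
% (1/2)|\dot\omega + \nabla F(\omega)|^2 and discarding the null Lagrangian
% (d/dt) F(\omega) leaves the Lagrangian
% (1/2)|\dot\omega|^2 + (1/2)|\nabla F(\omega)|^2.
```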
So the minimizer is trivially this, and the infimum is zero. Okay, this is minus, minus, this plus. Okay, see, there is nothing there. Okay, but it serves as a definition of gradient flow, which is called a minimizing movement. It was introduced by D. George. Okay, now take a small step at Zion. Consider all the paths, all the interpolation between X and Y, omega X, Y. Take the same action function here, only with a big effort. There is an upside here. Okay, so this is for a small time, and this is, look at this as the, a non-minimization principle. Expand this, you will get the cross product. Integrate it, you will get this. Well, the cross product is a null Lagrangian, okay? You have a differential. F prime times omega dot, you integrate, and then you get this. And then you're left with the square of omega dot and the square of F prime. They do not behave exactly the same way because here you have a speed, so you have something which is in one of our rape sign. And here you don't have the speed, so it's order is a sign. Okay? Because of small time interval. So now what we have here is a Lagrangian. We define the action with respect to this Lagrangian usually, and what we call an epsilon interpolation is related to F is precisely the solution of this problem. And the value of the problem, I call it the cost epsilon F of epsilon Y. Okay? So this is a cost. I rewrite this, and this is the definition of the interpolation. Alright, so you see that this cost is close to the square distance plus some perturbation. And the square distance give rise to the usual geodesics. Okay? And so what we get is that this interpolation converts to usual geodesics. So this is a small perturbation of a usual geodesic. Of course in Rn you don't gain any regularity because this geodesic is already regular. Okay? But this will be interesting in infinite dimension on the probability space. So the connection between interpolation and dissipation is the epsilon interpolation, epsilon F interpolation are exactly as the gradient flow. They live in a force field which is the gradient of the free energy dissipation, but this time of the epsilon slow down gradient flow because of the epsilon. Okay? So very simple analysis. The contraction inequality works this way. And so you see that as the square distance essentially. And you get exactly the analog of this. So what it tells you, this is easy to work with, but the way we have built this interpolation is precisely done to have this easy case. Easy calculation. We propose a population of geodesics which is in the right direction to have these easy computations. Because you have F prime and epsilon F and so on, you do very simple analysis. So okay, don't be afraid of this. Look at this. Saying that function F is such that this S is greater than K times identity. That F is K complex. And it's equivalent that saying for any geodesic gamma. So a geodesic is a constant speed geodesic. This is a line in RL. You've got this inequality. It tells you that the function F, along any geodesic, is above, well, the chord is above the function, the curve. Plus this term, which tells you that you can go very deep if K is large. So now this is the analog of this with a function here. And when epsilon tends to zero, then you get the identity and you get back to this function. Alright, so we have natural perturbation of this K-contactivity inequality along any interpolation. Thank you, Giovanni, because you worked on this in another context. Epsilon modified basic context inequality. 
So the basic context inequality is this, saying that F prime prime is greater than K. You can say at X star, which is the interval here, simply is this. Alright, so you can translate it this way with the cost epsilon F here. And it gives you a simple formula. I don't want to say more about this. Let's go back to the interesting setting where we have this on the state of probability action. So the Foucault-Planck equation is this equation. So if Q is zero, then you have the hit equation. Otherwise, you have something with a trace field, which is given by half the gradient of Q. What we are going to see is that it's connected to large deviations. We need the stochastic representation of this flow empty. To do this, we consider a stochastic process, which is Rn valued. I call it Zt. And you have this velocity field plus the venous process. So one of the two is just cosmetic because I want to have half laplacian to have this form. However, I should have written square root of two times dWt. It's awkward. So the solution of this stochastic differential equation is a Markov process with Markov generator, which is written here. And I call Rm zero this solution. M zero stands for the initial measure. And when, so mt is the time marginal, the t marginal of Rm zero. If you start from the equilibrium, m infinity, which is exponential minus u times the back measure, then you stay at equilibrium, of course. But even better. If you look at any interval of time and your reverse time at the level of the past measure, then you get the same thing. So it's completely invariant. If you read the movie backward, you don't see any difference. Statistically, you see the same thing. All right? This is reversibility, and this will be useful. Now, large deviations from a lower of large numbers. So I have to introduce this law of large numbers. So you have a large particle system of identically independent trajectories. The one, the whole copies of z. That hand is the empirical measure. Delta is the direct measure. This is the empirical measure. So it's a random probability measure on the past phase. And that bar n is the flow of time marginal of that hat. So this is a trajectory in a random trajectory in p of r n. So you've got this. And there is a very general result in large deviation theory, which is called Sanoff's theorem, which tells you as soon as you have an IAD sequence, then you have Sanoff's theorem. It tells you that the empirical measure, the probability that the empirical measure is close to some, something, belongs to some subset, is essentially exponential minus n times something which is positive, which is the entremum of the relative entropy of p with respect to r for all the p's in this dot. And the relative entropy is this formula. And by the contraction principle, so you can do a continuous transformation of large deviation systems, what you obtain is that the probability that that bar n is close to mu, knowing that it was starting at m0 is of the order exponential minus n, this rate function, which is here. In particular, well, the minimum value of this function is 0, and it's precisely achieved at mt. This is because, well, the entropy is 0 if and only if p is equal to the reference measure, and if you take the time marginal, then you get mt. So the law of large number tells you that your particle systems, that bar starting from m0, will be returned almost surely to the solution of the Schrocker-Planck equation. 
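To see the law of large numbers that this rate function quantifies, here is a small particle simulation; the quadratic potential (giving an Ornstein-Uhlenbeck process) is my own toy choice so that the Fokker-Planck solution m_t is an explicit Gaussian.

```python
import numpy as np

rng = np.random.default_rng(3)

# N i.i.d. copies of the diffusion dZ = -(1/2) U'(Z) dt + dW with
# U(x) = x^2, whose empirical measure at time t concentrates on the
# Fokker-Planck solution m_t (the zero of the rate function).
N, dt, t_final, z0 = 200_000, 1e-3, 1.0, 2.0
Z = np.full(N, z0)
for _ in range(int(t_final / dt)):
    Z += -Z * dt + np.sqrt(dt) * rng.normal(size=N)

# For this linear drift m_t is Gaussian with explicit mean and variance.
exact_mean = z0 * np.exp(-t_final)
exact_var = 0.5 * (1.0 - np.exp(-2.0 * t_final))
print("empirical vs exact mean    :", Z.mean(), exact_mean)
print("empirical vs exact variance:", Z.var(), exact_var)
# Sanov's theorem and the contraction principle quantify how unlikely it is
# for this empirical measure to deviate from m_t: probabilities of order
# exp(-N * J), with J the rate function of the slides.
```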
But of course, n is finite, and then you can deviate from this ergodic behavior, from this limit, and it's very unlikely that you can deviate, because you have an exponential minus something positive here. And precisely this function will be the cost, we were going to build the cost function with this rate function j. Okay? So recall, what we did in our n was based on this formula, and we are going to do the same thing if instead of the action function based on this, we have this entropy here. This is jota, don't be afraid of this, this means that if mu0 is not equal to m0, you put plus infinity, otherwise you put 0. So this is an autonomic constraint in some sense. So, and what we obtained from this is that the function, the karygraphic f, which is the other of this capital F here, is this function, and what we obtained also is that the large deviation cost function should be built on this formula. So expanding the square here is not so easy when you have this entropy. So what you have to do is to consider time reversal and do a stochastic analysis based on Nelson's stochastic velocities. So I don't give any detail about this. This is the interesting part from doing mathematics, but I don't want to get into details, but this is really the same. What we obtained at the end is a natural cost, which is based on this formula. And this formula you see is, well, I start from alpha here, I start from beta here, and what we use, I should have written it, oh, sorry. I didn't. What we use is, if you have this entropy, if star means the time reversal, then you don't lose any information taking the one-one mapping here. So you have the same one-one mapping here and here. And what we have, I told you that r is reversible, so r star is r here. So we use this invariance with respect to time reversal to get this formula. Again, now we consider a small time step, small interval, and then what we see is we obtain this large deviation in cost. So this is built on a Schrodinger problem. So the Schrodinger problem is, well, it doesn't appear clearly here. But this is connected to minimizing some relative entropy on the flow of probability measures starting from alpha and ending at beta. But I told you that this should be connected to a free-nurvegy dissipation, but we don't see it here. So by the way, where is it? And also, now we have something which is really interesting, so I go back to the Benjamin-Bronier formula. And this epsilon-modified version is given by this. And this has been proven by CGPC, CGG is George Stiffen. So a little before us, so, and this is an interesting, very interesting paper by George, Chen, and Pavon that you can get. So this one is great also. So we have this formula. And what we obtain here, you see here you have the velocities, and this term only depends on the position. So we are back on Lagrangian, you have an action function, as usual, and we can think of a Newton equation in this setting. So, and what is interesting also is that i is the Fisher information, and this is the gradient of the logarithm of the density of alpha with respect to the equilibrium measure. And great now, you see, we have to minimize this term plus this term. This is the usual one, with epsilon equals zero, then you get the displacement interpolation. With epsilon positive, you add this term, and you see there is a gradient here. So this is a regular, an effect regular, so it's a regular, okay, because of the gradient. And, okay, so. The new satisfies the continuity equation, or the full-cut equation? 
Yes, you see, this is the continuity equation here. And I don't write it in this form, but because of the definition, nu dot is the term that appears in the continuity equation. Okay, yes. So this, I already said this. And you should compare with the finite dimensional baby system here, where I is equal to the squared norm of f prime, and it is also true at the level of Otto calculus that the Fisher information has this shape. And you have Gamma-convergence of this minimization problem, as epsilon tends to zero, to the usual displacement interpolation minimization problem. Now, to conclude, very quickly. So, yes, because we get exactly the same formulas as in finite dimension, it goes very quickly. So here you have this: there is a semi-group, and now you have this cost instead of W2 squared. This formula was proved by von Renesse and Sturm; it is the contraction property of the semi-group in the Wasserstein metric, and you get this epsilon analogue here. This formula here was proved by von Renesse and Sturm in a seminal paper, and we get this analogue, replacing it with the function theta. This formula is Talagrand's inequality, and we get this analogue. So, as a conclusion, taking large deviations of the empirical measure of diffusive particles gives rise to a large deviation rate function, and applying a slowing-down procedure, you get a free energy and a large deviation cost. And what you have to do at the end is to identify the energy dissipation, which is not given for free. The epsilon-interpolations are well-suited approximations of displacement interpolation; they allow for proving tight perturbations of transport inequalities involving the free energy. They inherit some regularity from the dissipative mechanism, if any, of course, of the gradient flow. And you can read this. The paper by Giovanni Conforti is very beautiful, and part of my talk was based on it. Thank you. APPLAUSE We have time for some questions. The question: is there a stochastic characterization of these interpolants? We were talking of curves of measures, so is there a Markov process behind it, if you wish? Yes, yes, yes. Yes, they are connected to the Schrodinger problem. So I didn't have enough time to focus on this. Yes, really. So you have to minimize the relative entropy with respect to the path measure R, with the condition that the initial and final marginals are fixed; this gives you a unique solution, if it exists. And then you take the time marginals of this minimizing solution. You put some epsilon somewhere because you slow down R, you re-normalize, and this is the guy. Yes. All right, let's thank the speaker again... APPLAUSE
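For reference, here is one common way to write the epsilon-modified Benamou-Brenier formula discussed in this talk. This is an editorial sketch in the spirit of the Chen-Georgiou-Pavon paper and the related work of the speaker and coauthors; the normalisation constants, and possible additive terms depending only on the endpoint measures, may differ from the speaker's slides.

```latex
% Entropic (Schrodinger) cost between alpha and beta in Benamou-Brenier
% form; I(.) is the Fisher information relative to the equilibrium
% measure m. Constants depend on conventions, and additive terms that
% only involve alpha and beta are omitted.
\[
  \mathcal{A}_\varepsilon(\alpha,\beta)
  \;=\;
  \inf_{(\mu_t, v_t)}
  \int_0^1 \Big( \tfrac12 \int |v_t|^2 \, d\mu_t
               \;+\; \tfrac{\varepsilon^2}{8}\, I(\mu_t) \Big)\, dt,
  \qquad
  I(\mu) = \int \Big|\nabla \log \tfrac{d\mu}{dm}\Big|^2 d\mu,
\]
\[
  \text{subject to}\quad
  \partial_t \mu_t + \nabla\!\cdot(\mu_t v_t) = 0,
  \qquad \mu_0 = \alpha,\ \mu_1 = \beta.
\]
% As epsilon tends to zero this Gamma-converges to the classical
% Benamou-Brenier action, i.e. to displacement interpolation.
```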
In several situations, the empirical measure of a large number of random particles evolving in a heat bath is an approximation of the solution of a dissipative PDE. The evaluation of the probabilities of large deviations of this empirical measure suggests a way of defining a natural ``large deviation cost'' for these fluctuations, very much in the spirit of optimal transport. Some standard Wasserstein gradient flow evolutions are revisited in this perspective, both in terms of heuristic results and a few rigorous ones. This talk gathers several joint works with Julio Backhoff, Giovanni Conforti, Ivan Gentil, Luigia Ripani and Johannes Zimmer.
10.5446/59228 (DOI)
is to study these kind of measures. So I consider Qt of f equal to the expectation of f of xt, exponential minus the value of s x s s. So where xt is a Markov process on some state space s. So x at g is s u s g, the historical process. So I want to simulate this kind of integral, which often appears. So to build a kernel, let me renormalize them. So that means to build a kernel k of t, which would be reversible with respect to Q. And then to prove something like new kt at the power n minus Qt smaller than something like, well, there will be c t over n. I will explain what is n at the power n. So n will be the number of particles of some system of particles. So we're going to do that. So imagine in this state space, you have g which is small here, g which is large here. And g is very small here, very large here. If you sample with the law of x, then you have to remove almost all the trajectories. So you have to sample in another way. So if you have a trajectory set like that, then you will create some trajectory set something like z bar. So the kernel kt will give you some z bar which should stay with which full list stay in the domain where v is small. OK, so let me start. So this is a joint world with Pierre Del Morale. And in fact, he studied with the Korn and Patras the discrete tank case, which is very different in fact. So I will speak about many body Feynman-Kelbliger's and particle Gibbs sample, which will be this one. Some perturbation analysis which will be this one. And then in the end, I will speak more generally of the stability of non-linear diffusions in my first on propagation of chaos. So the aim, as I said, is to estimate this kind of integral, xt is a continuous time mark of process, the vt time-dependent function, zt is this. But in fact, we are interested in the historical process, which is written here. So first remark, let me introduce the process with non-linear generator, this one. So the lt is a generator for fxt. So vt is my function here, the state space here. So what is x bar? So it moves like xt. And at rate vt, it jumps into its own law. And that is distribution of x bar t. So it is a non-linear division because you need the law of x bar t to construct the other party. OK, we will see. We need this. Ideally, we would use x bar t, but since we need this law, it's not possible to simulate. So we defend the n-particle system which approximates it. So it evolves with the same generators as xt. But at rate vt, it jumps on its empirical law. So here, for instance, if you take any of the coordinates of this particle system, it has this generator. But here you put the empirical law of all the other. OK, it is a particle system. OK. Here is a picture of I attack n equal to 3. So if we jump, even if it is continuous here, maybe psi 1 jumps, but I don't represent jumps of psi 1. I just jump at rate vt. So at rate vt, you jump from the first to the second particular particle, and then you split again, and you continue independently. OK, you are independent until you jump to the other one. And this is the exciting. And then now I will represent the historical process. So the historical process jumps to the past of the trajectory. So the picture is a little bit different. Here is the processes of the historical process. You jump to the past. So you get the historical, for instance, the historical process psi 1 t at time s. It is here. OK. OK, here we are. OK, first I begin with an elementary result, but elementary with a very helpful. So remember, I want to compute this. 
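As a concrete illustration of the interacting jump particle system just described, here is a minimal simulation sketch for the special case where the underlying Markov process is a one-dimensional Brownian motion discretized by an Euler scheme. The potential V, the synchronous treatment of the jumps within a time step, and all variable names are illustrative assumptions of mine, not the speaker's code.

```python
import numpy as np

# Toy mean-field particle approximation of a continuous-time
# Feynman-Kac model  Q_t(f) = E[ f(X_t) exp(-int_0^t V(X_s) ds) ],
# with X a 1d Brownian motion (illustrative sketch only).

def V(x):
    return x ** 2            # example potential (my assumption)

def simulate(N=1000, T=1.0, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    xi = np.zeros(N)         # particle positions
    log_norm = 0.0           # accumulates  -int_0^t eta^N_s(V) ds
    for _ in range(int(T / dt)):
        # free Markov motion: Euler step of Brownian motion
        xi += np.sqrt(dt) * rng.standard_normal(N)
        # selection: each particle jumps at rate V(xi_i) onto the
        # current empirical measure, i.e. onto a uniformly chosen particle
        rates = V(xi)                      # assumes rates * dt stays small
        jump = rng.random(N) < rates * dt
        targets = rng.integers(0, N, size=N)
        xi = np.where(jump, xi[targets], xi)
        # running estimate of the normalizing constant
        log_norm += -np.mean(rates) * dt
    return xi, log_norm

xi, log_norm = simulate()
f_vals = xi ** 2                           # test function f(x) = x^2
print("estimate of Q_T(f):", np.exp(log_norm) * f_vals.mean())
```

The quantity exp(log_norm) times the empirical mean of f is the standard estimator of the unnormalized Feynman-Kac integral in this literature (unbiased in continuous time); the Euler discretization of course introduces a time-step bias.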
This is very useful in life to complete this strategy, I think. So in fact, what is very interesting is exactly the law of x bar t. And surprisingly, it is also equal to 1 over zt. So here you take one of the trajectories. One of the particles. And here you put the right. The right. OK. This is the proof. In fact, you define these functions. OK, it is easy to see that the eta of f is just defined by the value at 1. At time 0, they are all equal. In fact, if you, for instance, I take some n. If I compute, if I differentiate this, then this is the evolution by empty. This is the jumps. And I differentiate this. OK. And there is a constellation. I think this one comes up with this. And I get this. Same equation for all n. And in fact, it's not true for n equal to infinity. So in fact, same equation for all n means that they are all equal. And this proofs this. OK. So coronary is the same thing with historical processes, with some middle work to prove this. And the mark of the jumps, but we already saw it on the pictures, the historical process, jumps at right this far on the project past of the others. OK, so the Q of f we want to calculate, OK? Because of this corollary, we compute it with this. OK. OK. Let me define this course. Oops. Xt is a sample from one of the particles. Let me, OK. I want to compute this, but I need, in fact, I need the conditional law of this knowing all the system of particles. So for this, I need another system of particles, which I will call zeta. So zeta, the first particle is the initial xt. OK. And those other particles, they evolve independently with generator LT. They jump to one to another with this red. And the z jump to, oops. And there is an additional jump at right 2 over nt on the first one. So the picture is, so they're almost the same as the sitey, but you have a frozen trajectory, which is the red one. And the other trajectory, the rest of the jumps are a little bit different. OK. So you are much more likely to reach the red one. OK. And in fact, this new system of particles, so the result is that this new system of particles will give you the joint law of one of them chosen by random and all of them, or the frozen one and all of them. OK. And the rest are not the same. OK. So this is a computation. But in fact, it will be very useful because we get, so qt is what we need. Qt is the law of the xt with this red. The qzeta is a new system of particles. And in fact, we have qzeta. From this, we get this. And so from this, I define different kernels. So first, I have one frozen trajectory. And this, from it, I build the other particles. This is my first kernel. The second kernel, when you have n particles, you choose one by random. Let me define the law, so qt times these kernels. So it's a law on z1 on n particles. And there is some forward transition, some backward transition, some integrated transition, k of t. And k of t, this will be this one. OK. And the result is that we have this result on another kernel, which comes from the previous coloring. And the qt is the other thing we expect to k of t, which means that I start from, so I have one particle, one system of particles. I keep the first one. I construct the system of n minus others. Then I choose one by random. And I get the same law. And in the other direction, I choose one by random from this one. And then I keep this one on the n minus others. I'm back to the same law. OK. Let me just, so this is a picture for the kernel k. So I have one trajectory. 
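To keep track of where this is heading, here is a schematic restatement of the two identities behind the construction; the precise definitions of the kernels, and the exact form of the convergence estimate, are on the speaker's slides and in the paper, so only the shape is indicated here.

```latex
% Normalized vs unnormalized Feynman-Kac measures:
%   eta_t = law of the nonlinear (McKean) process \bar X_t.
\[
  \eta_t(f) \;=\; \frac{Q_t(f)}{Q_t(1)},
  \qquad
  Q_t(1) \;=\; \exp\Big(-\int_0^t \eta_s(V_s)\, ds\Big)
  \quad \text{(exponential product formula).}
\]
% Many-body duality / particle Gibbs: freezing one path, regenerating
% the N-1 others, and selecting one path uniformly at random defines a
% kernel K_t on path space which leaves Q_t invariant (and is in fact
% Q_t-reversible); schematically,
\[
  Q_t\, K_t \;=\; Q_t,
  \qquad
  \big\| \nu K_t^{\,n} - Q_t \big\|_{\mathrm{tv}}
  \;\lesssim\; \Big(\frac{c(t)}{N}\Big)^{\!n}
  \quad \text{for } N \text{ large enough (schematic form).}
\]
```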
I need n minus 1 trajectories for properties. This is the kernel m. And then I choose one by random. This is the kernel k. OK. OK. I just keep the proof. And OK. Now let me speak about the stability. So for all functions f with oscillations by one, I have these stability results. So this is a law of the non-linear process. This is the law of the average of the system of particles. OK. And for this, the similar proof gives the wanted stability. So for the regular characteristic conditions, for instance, if you're going to compact manifold, electric diffusion will be just the value of it. Then the regularity conditions are satisfied. Satisfied. OK. Let me give just a very rapid sketch of the proof. But for that picture, so I have my same s. Some domain where this v is small. This is large. If v can even be different until you can move. So you construct your system of particles from the first one. OK. Then it's a system of historical processing. And then you choose one by random. And you are more likely in a domain where v is small. OK. The idea of proof for this. We have another non-linear diffusion x bar. So you start it from eta at time s. OK. And if you start it from n of x i 0, which is eta 0, on time 0 to time t, you get eta t. And in the contrary, if you have this evident quality, so you investigate the interpolation. And you compute. Just you do it in to calculus, which takes some time. And you prove that the drift is bounded by c of t of n. And you get this kind of result. OK. For the remaining time, I would like to skip to another subject related to stability. And so now I consider a manifold, a Brumian manifold mg. And I consider a Brumian motion with a non-linear drift. So the drift. Oh, yeah. OK. It's right in here. 5 t. So you are the martingale part. And you are the drift with eta t, the law of yt. OK. And I assume that this drift is as this form. OK. Let me define this matrix, which is the. OK. You will see later why I consider it with the derivative of the drift. OK. So the result is that under conditions, so ct is this operator here. Ricci is the Ricci curvature on the polymagnifold. This is a matrix on the polymagnifold. So under these conditions, then the Vassarstein distance between the law at time t started from mu 0 and the law at time t started from mu 1 has got this estimate. It has a exponential growth with weight minus lambda 1 t. Right, lambda 1, sorry. So for instance, if B is from the potential of the manifold and the interaction potential, where x is the distance from point x, then h1 is the second derivative of u, hn of u, plus hn of f, which is f composed with distance rho, plus 1 ralph Ricci, which is larger than lambda 1. So h1 is exactly this condition. So let me give an indication on how one can prove this kind of result. I will use the infinitesimal parallel coupling. So I have two initial measures, mu 0 and mu 1. So at time 0, I take y 0 and y 1 with this law. And I consider a family of random variables, which realize the optimal distance between the ascension sense between mu 0 and mu 1. Then I take an independent copy of everybody, all of x epsilon 0, where x is an independent copy of y. So from this construction, well, now I take x 0 t, y 0 t, a boon in motion in the product manifold. And since it's a linear process, the drift of x, it's expectation on the y variable of this, on the drift drift of y, it's expectation on the x variable of this. 
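The contraction result mentioned just above can be recorded schematically as follows; the precise condition combining the Ricci curvature and the derivative of the drift is the one written on the slides.

```latex
% Nonlinear (McKean-Vlasov type) diffusion on a Riemannian manifold M:
% Y_t is a Brownian motion on M with a drift b(Y_t, eta_t) depending on
% its own law eta_t. Under a condition of the form
%   Ricci + (contribution of the derivative of the drift) >= lambda_1 > 0,
% the flow of laws contracts exponentially in Wasserstein distance:
\[
  W\big(\eta_t^{\mu_0},\, \eta_t^{\mu_1}\big)
  \;\le\; e^{-\lambda_1 t}\, W(\mu_0, \mu_1),
\]
% where \eta_t^{\mu} is the law at time t of the solution started from
% the initial law mu. The proof sketched in the talk couples the two
% solutions along minimizing geodesics via the infinitesimal parallel
% coupling.
```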
I take two independent process so as to express the drift with expectations instead of expressing it in terms of laws. I take two independent. OK. Now I define this process with epsilon fix. So let this be a solution to the equation. So this is a Corian derivative of nt. This is a derivative in epsilon. And I ask it to do so. So in fact, I want this to be a boon in motion with drift, the same as here, but with the starting point, which are here. I want to make an interpolation from epsilon 0 to epsilon 2 to 1. It turns out that if I solve this equation, and the important fact is that the Corian derivative in time of this has a finite variation, then I get a boon in motion with this drift. So in fact, it is a SDV in an infinite dimension because I need to consult it for all epsilon in fact. So I get a pass with a control split answer. So it's a continuous property of pi by the dimensional. How do you sense if you say that this is a sarcastic differential of the next dimension? OK. So let me set the term t to x epsilon t, y epsilon t, y epsilon t. OK. So z times 0 is like that. z epsilon t is here. And so the, yeah, in fact, so this should be the derivative in epsilon here. Because I have a z 0 0 here. And here I have a geodesic z epsilon 1. And this I know. And in fact, I am building a curve here. And building a curve here with a speed z. And for constructing this speed, for constructing the speed here, I need all of this. OK. So it's not a SD in a finite dimension. OK. And it turns out that I get one n motion. And since co-variant derivative in time may measure the difference with parallel translation. In fact, the more or less the speed, the length of this vector here will be the right exponential of this operator here. And this is why you have this here. You have the exponential of this, which controls everything. Because you have an operator for constructing this speed here, which is the right exponential of this. OK. So how much time do we have? 1 minute. 1 minute. OK. Let me finish. Just the last situation. We related to nonlinear processes. So now I take n, 1 n motions in the connect manifold, in fact, which are independent. And again, with drift, here I take the law of this, and the law of the position. OK. And I would like to compare it with the same one n motion. But instead of taking the law, I take the law of the empirical law. And now what I will do is the x i i t will be parallel with zeta i i t. So it's different to this point of structure here. I have the zeta i i t, the x i i t. And parallel copying, I take the minima geodesic point by point. I move the martingale part from the first one to the second one. And I keep the drift. And the result is that the friction curvature of m is larger than some kappa. If it is the same as before, then we have the propagation of k-hole result. So if I take two particles, then they are distant to one of the square root of m. Times this factor here. And in this case, h2 is the... OK. Questions? Can you say something about the relationship of this or its discrete time analog to particle methods and machine learning, such as the particle MCMC? The first one. Oh, no, I don't know enough. OK, interesting. Related to the... Oh, in discrete time... OK, we'll talk with you. Yeah, OK. Yeah, I can ask you a really basic question, because I'm also not so familiar with the topic, but you started out with a lowercase f, and then it became a path-definite functional unifiligraphic f, which I got this correctly. 
So in the end, the quantitative result is the same: you get the same structure, the same rate, and so on? I'm trying to find the comparison to the more standard setting of the same theorem. So, from the lowercase f to the calligraphic F: calligraphic F means that instead of a function of the current point, I take a function of all the past of the trajectory. Yeah. Yeah. And then for the convergence result, you get the rate; this is what we had on the left side of the blackboard. I would write it down, but now it's erased. It was like c times t divided by the capital N. Yeah, it is the same. Is this what we would expect from the classical setting, the same rate as for a lowercase f? Yeah, yeah, yeah, yeah. Oh, that's right. Let's go. All right, well, thank you very much. Let's thank the speaker. Thank you very much. Sorry, but I think we can start with the next talk... Okay.
"Continuous time Feynman-Kac measures on path spaces are central in applied probability, partial differential equation theory, as well as in quantum physics. I will present a new duality formula between normalized Feynman-Kac distribution and their mean field particle interpretations. Among others, this formula will allow to design a reversible particle Gibbs-Glauber sampler for continuous time Feynman-Kac integration on path spaces. This result extends the particle Gibbs samplers introduced by Andrieu-Doucet-Holenstein in the context of discrete generation models to continuous time Feynman-Kac models and their interacting jump particle interpretations. I will also provide new propagation of chaos estimates for continuous time genealogical tree based particle models with respect to the time horizon and the size of the systems. These results allow to obtain sharp quantitative estimates of the convergence rate to equilibrium of particle Gibbs-Glauber samplers."
10.5446/59230 (DOI)
That's the setting. And then, of course, you all know that in the case that X is a length space, the resulting Wasserstein space will also be a length space. That means there will be shortest paths, which correspond to the so-called displacement interpolation. So instead of just doing the linear interpolation between two measures, we're actually moving mass in the base space. And then we can associate a momentum field with this, or a velocity field. And it turns out you can actually rewrite the whole transport problem in terms of these fields, the mass and the momentum field, satisfying the continuity equation in a weak sense. And then you minimize this action, which is essentially the kinetic energy integrated in time. Yeah, and if you set p equal to 2, this looks a lot like a Riemannian structure. I'm just doing this because I briefly want to talk about unbalanced optimal transport, because I don't think everyone's familiar with this. So this is the motivation. Let's see. We have a problem like that. We have two bunches of mass here in the beginning and two different batches of mass in the end. We want to interpolate. And if they're not equally split up, then there will be no choice but to transport a bit of mass from one to the other one. And if this is just a tiny measurement error, and this is very far apart, then this is a very unnatural behavior. You want to get rid of this. Well, things actually maybe have to grow or shrink. If we want to model this, we want a behavior like that. Well, one way to do this is starting from this Benamou-Brenier formula, just modified to make it unbalanced. So we add a source term, theta, here, which appears here in the continuity equation. And then we have to modify our action to penalize the theta as well. So we keep the kinetic energy and we add something which contains theta. And the idea is not that new. And in fact, a lot of different penalties for theta have been tried, more like TV-type things for theta, L1-type, or combined L1 and L2 things. And the thing I want to focus on today is this particular term, which we call the Wasserstein-Fisher-Rao distance. And other people came up with it as well. For instance, Liero, Mielke and Savare called it the Hellinger-Kantorovich distance. And what you can see here is, essentially, we have here the optimal transport part. And here we add a term which looks like the Riemannian tensor of the Fisher-Rao distance. So this is the Riemannian tensor of Wasserstein. This is the Riemannian tensor of Fisher-Rao. We just combine them. It gives a bit more options to handle solutions here. OK. Now, the good news is this new object indeed does yield a geodesic distance on non-negative measures. And just to see a bit more what it does, what structure it has, we looked at geodesics between Dirac measures. So we start with a Dirac at x0 with mass m0, and we go to a Dirac at x1 with mass m1. And in the standard Wasserstein case, what would happen is, if both m's are equal, then the geodesic is a Dirac moving at constant speed from the initial to the final location. Now, in the Wasserstein-Fisher-Rao case, if the two Diracs are sufficiently close, closer than pi in this case, then the geodesic will also be a traveling Dirac, but it will reduce its mass a bit in the middle to save on transport cost. So in the beginning, it will reduce the mass, then transport is cheaper, and then you have to grow the mass back.
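For readers who have not seen it, the dynamic formulation just described is usually written as below. This is a sketch; the relative scaling between the transport and growth terms (here a parameter kappa) varies between papers and may differ from the talk's convention.

```latex
% Wasserstein-Fisher-Rao / Hellinger-Kantorovich distance, dynamic form:
% interpolate with a mass measure rho_t, a velocity field v_t and a
% growth rate theta_t.
\[
  \mathrm{WFR}(\mu_0,\mu_1)^2
  \;=\;
  \inf_{(\rho_t, v_t, \theta_t)}
  \int_0^1 \!\!\int_X \Big( |v_t(x)|^2
        + \tfrac{\kappa^2}{4}\,\theta_t(x)^2 \Big)\, d\rho_t(x)\, dt,
\]
\[
  \text{subject to}\quad
  \partial_t \rho_t + \nabla\!\cdot(\rho_t v_t) = \rho_t\,\theta_t,
  \qquad \rho_0 = \mu_0,\ \rho_1 = \mu_1,
\]
% i.e. the Benamou-Brenier kinetic energy plus a Fisher-Rao-type
% penalty on the source term; kappa acts as a length scale.
```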
And finally, when the masses are too far apart, it turns out that transport is no longer economical, and you do everything via the Fischer-Rauke term. So the first Dirac is simply being teleported, if you want, to a second Dirac. And now, this is already a nice result, I think. Diracs are staying Diracs. They're not being destroyed or smeared out. But the most, I think, most pleasant result is that, in fact, general geodesics can be decomposed into geodesics between Diracs. So we kind of understand very nicely what this distance is doing. And here's a small numerical example. So we have a density that we interpolate into another density. If you just take the L2 or TV or whatever interpolation, you just get this fading, not very satisfying. If you do Rathenstein 2, you get, well, you get movement of mass, right? No more fading. But here, for instance, you have this chunk of mass that's traveling from the whole image from this square to form this Q down there. And this may be unpleasant. This may be an artifact that you want to get rid of, and in the unbalanced case, you get it. Locally, you get nice interpolations, but globally, there is no interaction between them. So I think this has its merits in many applications. OK. Now, another way to tackle the unbalanced transport question is to start from the Kantorovitch formulation. So we started from Benjamin Bernier. You could also start from Kantorovitch, and you relaxed the marginal constraints. So now you're optimizing over arbitrary, not negative measures. You pay the transport price. And here, you penalize the deviation of the first marginal from mu and the second marginal from mu by some marginal discrepancy function, which usually has some form, some integral of some small f. And there's more choices, but today we only need this one, where f is super linear, so it's going to be infinite if it's not absolutely continuous with respect to the marginal. And then you can try to solve this. You can go to the dual problem. It looks very similar than before. We have the same feasible set and the same constraints. But now, it's no longer just the overlap of alpha with mu, but now there's this minus f star minus. And this is, in general, an increasing concave function. So this is still a convex optimization problem, concave maximization problem. All right. And you can recover the balanced case very easily by setting f to be the indicator function of 1. Then here, rho has no choice but to be equal to mu. Otherwise, this will be infinity. And then if you do this conjugation, you will get that this function is indeed the identity. So these terms simply disappear when you have the original dual problem. And now, the most elegant or beautiful result of all this that actually there's a bridge between those two approaches. You can recover this, but then, for our handling of the distance by setting this marginal is pregnancy to the kubat-Liber divergence. And the cost factor of this particular cost function. You don't need to worry about the details. This is just some function. But this is a radial cost. This is an increasing function of the distance. And it will be infinity after, oh, I didn't adapt the normalizations after pi half. So this is a slightly different trade between transport and mass change. So there will be no transport further than this. Beyond the system, everything has to be handled by kubat-Liber. OK. Then the final thing that we need, the final ingredient, before we can put everything together, is semi-discreet transport. 
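Written out with generic notation, the soft-marginal (Kantorovich-type) formulation and its dual that the talk refers to look roughly like this; signs and scalings follow one common convention and may differ slightly from the slides.

```latex
% Unbalanced Kantorovich problem: relax both marginal constraints with
% an entropy-type discrepancy  F(rho | mu) = int f(d rho / d mu) d mu.
\[
  C(\mu,\nu)
  \;=\;
  \inf_{\pi \ge 0}
  \int_{X\times Y} c(x,y)\, d\pi(x,y)
  \;+\; F(\pi_1 \,|\, \mu)
  \;+\; F(\pi_2 \,|\, \nu).
\]
% Dual problem: same constraint set as balanced transport, but the
% linear terms are replaced by concave ones built from the conjugate f*:
\[
  \sup_{\alpha(x) + \beta(y) \,\le\, c(x,y)}\;
  \int_X -f^*(-\alpha)\, d\mu
  \;+\;
  \int_Y -f^*(-\beta)\, d\nu.
\]
% For f = indicator of {1} the penalties force pi_1 = mu, pi_2 = nu and
% -f^*(-s) = s, so the classical Kantorovich duality is recovered.
```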
So now mu will be an absolute continuous measure. And mu will be a discrete measure of some Dirac's with mass mi at locations xi. Then we can almost immediately tell that the form of the optimal coupling has to be something like this. So we chop up mu into cells ci. And then everything in the cell ci is being transported to xi. This has to be the form of the optimal coupling. And these cells ci that are the so-called generalized like air cells. So for a weight vector w, we just check where is it. This thing is maximal. And all the points x, where this is maximal, they are in cell ci. So this is related to the C transport. Of course, you will know this is optimal transport. And in our case, since the cost may be infinite, there may be a residual set where we cannot go with finite cost. And this will be treated separately occasionally. OK. If you haven't seen those guys, just think of this as the generalization of the Voronoi cells. If you just pick c to the distance and you set this weight vector to 0, this is just the Voronoi cells. But you see, if I turn up certain w, I can make this smaller. And so I will increase the corresponding cell. You can do a bit of competition between the different cells. And then starting from this, you can go back to the original transport problem. And you will get that in the semi-discrete case, you can solve optimal transport with this tessellation formulation, where this is essentially, you can get from the dual to this very quickly. So our dual measure nu, sorry, only lives on these points xi. So we only need the dual variable beta at these points. So this will be the dual variable beta only at those locations. So this is the original dual term. And then we just pick the optimal alpha for a fixed beta. We make alpha as large as possible. It has to be smaller than all the dual constraints. And this is the dual constraint that will be active on this particular cell. So this is a very simple transformation. And you will get to this particular formulation. And in fact, this is also the base point for very efficient numerical methods. That contact, for instance, is working on. You want to get that by Bruno Levy? OK, I think now we have all the ingredients assembled. Now we can combine them. So the first good news is this dual tessellation formulation survives. Now the optimal coupling is a slightly more general form. We still chop up some measure and transport everything in the corresponding cell to the corresponding point. But now this measure that we chop up is no longer necessarily equal to mu, but it can be something else due to the marginal discrepancy. Then if you look at the dual formulation, you can do the same trick. We introduce this new effective dual variable w. And then we just pick the corresponding alpha as large as possible, which we result in this guy. And instead of just being the overlap between the two, we have this minus f star minus appearing both in the alpha and in the beta term. And then there's the residual term, things that cannot be reached. We have no choice but to remove this mass. We have to pay f zero for this. So the balanced case, this would be a problem, because then this is infinity. But in the unbalanced case, this might still be OK. So this is the dual tessellation formulation. It survives almost unchanged. And now in this case, again, we recover the balanced case. It's very easy. But now we can also do a primal version of this. So we take these kinds of couplings, and we don't know the rho yet, and we don't know the w yet. 
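Before moving on, here is a small numerical sketch of the tessellation idea in the balanced case with quadratic cost: approximate mu by samples, assign each sample to its generalized Laguerre cell, and run a crude ascent on the semi-discrete dual. This is my own illustrative code, not the efficient implementations referred to in the talk.

```python
import numpy as np

# Toy semi-discrete setup (balanced case, quadratic cost): mu is
# approximated by uniform samples on [0,1]^2, nu = sum_i m_i delta_{x_i}.
rng = np.random.default_rng(1)
samples = rng.random((20000, 2))                  # quadrature points for mu
mass = np.full(len(samples), 1.0 / len(samples))  # their weights

M = 10
sites = rng.random((M, 2))                        # locations x_i
m = np.full(M, 1.0 / M)                           # prescribed masses m_i
w = np.zeros(M)                                   # dual weights

def laguerre_cells(w):
    # cell index of each sample: argmin_i ( |x - x_i|^2 - w_i )
    d2 = ((samples[:, None, :] - sites[None, :, :]) ** 2).sum(-1)
    return np.argmin(d2 - w[None, :], axis=1), d2

def dual_value(w):
    # semi-discrete Kantorovich dual:
    #   Phi(w) = int min_i( |x - x_i|^2 - w_i ) dmu(x) + sum_i w_i m_i
    idx, d2 = laguerre_cells(w)
    c_min = d2[np.arange(len(samples)), idx] - w[idx]
    return (c_min * mass).sum() + (w * m).sum()

# The gradient of Phi in w_i is m_i - mu(cell_i); crude fixed-step ascent.
for _ in range(300):
    idx, _ = laguerre_cells(w)
    cell_mass = np.bincount(idx, weights=mass, minlength=M)
    w += 0.5 * (m - cell_mass)

idx, _ = laguerre_cells(w)
print("dual value:", dual_value(w))
print("cell masses:", np.round(np.bincount(idx, weights=mass, minlength=M), 3))
```

At the maximizer of the dual, the mass of each Laguerre cell equals the prescribed target mass m_i, which is exactly the marginal condition of the balanced problem.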
We can just plug it into the primal one and optimize over those two guys. We optimize over omega s. And w, and we're probably going to optimize over rho. And there cannot be any mass in the residual. And then you get this is the transport term for every cell to transport to the cross-point. This is the first marginal discrepancy. This is the second marginal discrepancy. It's very natural. And this doesn't really make sense in the balanced case, because it would only be finite when it's optimal. Because this is only finite when the second marginal constraint is satisfied. This is only finite when the first marginal constraint is satisfied, and then you know they're already optimal. So it doesn't really make sense in the balanced case, in the unbalanced case. I think it's natural to write this thing down. OK, let's go further. Let's try a few things. Now we have choices to make. We have cost function, and we have this marginal discrepancy to choose. So we can get very different behavior. So let's start with the standard Wasserstein 2 case to practice. We just take the square distance, and we take this f to be the indicator function of 1, which means that calligraphic f can only be the indicator function of mu. So the marginal is actually fixed. Then these cell CI will be actually the weighted Laguerre cells with piecewise straight lines. And there will be no residual, and the marginal rho has no choice but to be equal to mu. So this is the back measure that I'm currently just using as the first marginal. And now in the Wasserstein 2 case, it's just chopped up into these cells, and every cell is transported onto this point. So now let's start unbalanced. Now we change f to be the calligraphic letter divergence. Now the first marginal is free. And what happens is the cells are still weighted like air cells. They're slightly different, but the shape is the same, essentially. And now the first marginal is free, and you see that it's a bit as higher density when you close the points, because then it's cheaper to transport. And the density is decreased if you're far from the point, because you don't want to transport as much. So that's very natural. And in this particular case, you will notice that the support of this new free measure rho is now still the whole space, because this calligraphic letter divergence is essentially minus infinite slope at 0. So even if you're very far, and it's very expensive to transport, it's still cheaper to transport a very little bit to gain a bit of this infinite slope. So this measure still has infinite support. Full support. Now we go to this Wasserstein Fisher-Rauw case. So we keep the calligraphic letter divergence, and we change the cost to this particular one, which is infinite after the pi half. Now what we get is, for the first time, we have a non-empty residual set. This is marked here in white. These points, we cannot transport them to any of the red points with the finite cost. We have no choice but to send rho to 0 here. Otherwise, it would be infinite. But everywhere else, for the same argument as before, everywhere else rho has support. And finally, we tried quadratic regularization. We just take, again, the standard Euclidean distance. And now we penalize in a quadratic fashion the deviation from the original marginal. Then the residual set is now a non-empty because the cost is finite. So everything here is not white. But still, the measure rho does not have full support. 
Because here, it's a finite cost to transport to there, but it's even cheaper to just neglect the mass completely. So we have qualitatively very different behavior here. Just the standard balance case. There's not enough degrees of freedom. Now here, it's almost the same just the measure rho is varying a bit. Now here, we have only bounded support. And that's due to the infinite transport cost. And here, we have bounded support. But that is now due to the finite throwing mass away cost. We have four qualitatively very different behaviors. That's a nice suit to play with. OK. And we also see that there's kind of a length scale intrinsically involved in all of this. The length scale between those, the distance of these points and how far we can transport. So now let's play with this length scale. And for this, we look now at the scale of cost. See epsilon, we just put an epsilon here. So if my epsilon is very small, it will look like our space is very large. And transport will become very expensive. And for simplicity, we just assume that f of 1 is equal to 0. That means if possible, we prefer to balance the masses. And we pay for not balancing them. OK. Now this is the problem that we try to solve. If we have epsilon go to infinity, then we can see here that this is becoming very small. And transport is becoming very cheap. We're making the space very small. So we don't really need to change the map. We can transport everything with very little cost. And it will go to 0, essentially. If we let epsilon go to 0, then transport will become prohibitively expensive. And we have to do everything via the marginal constraints. And here is what we can see. For large epsilon, in this case, large means 1. This is the back measure. It's transported onto those four points. And you don't see anything. This is constant. Now if you make epsilon smaller and smaller, transport is becoming more expensive. And what you can see here in the cells, the standard, they're no longer the generalized like air cells, but now they're becoming curved. And now you can see here, you can see the residual popping up. So now the unbalanced effects show up if you make epsilon smaller and smaller. And also you can see here in the free marginal row, in the beginning it's essentially equal to the back measure. And you can start to see this emphasis near the points where you can actually transport to. So this is very natural. And here's the same example with a bit more points. And for visibility, I've tracked the cell up here. So this is always the cell corresponding to the same point. If epsilon is very small, we're in the unbalanced regime, then mass only transports to the small ball around this point. And now if you turn up the length, if you turn down the length scale and make transport cheaper, then you recover this more classical Wastegs-Dan-Truh behavior. So now the cell doesn't even contain the point anymore. So the point is still here, but its corresponding cell has now moved to ball. And down here you can see the emerging this structure of these small balls. I think it's very nice to play with these features. And I guess it has an application somewhere. We'll figure it out eventually. OK. So now we have the length scale. So we understood now the semi-district number less transport problem to some extent. Now we think of further, let's look at the so-called quantization problem. So what is it? 
Our goal is now to approximate some absolutely, Lebesgue-Azul continuous measure mu by MD-RAC masses in the optimal transport sense. So we're trying to minimize the following object. We're trying to minimize the transport distance from mu to nu, where nu is now a superposition of MD-RAC measures. And we're optimizing the locations and the masses. This has applications I've been told in optimal location planning. For instance, if mu are your customers and they live spread out over the countryside and you want to build discrete stores, whether you build them, you try to minimize this essentially. Or apparently in physics, it's related to the pattern formation in crystals. We'll get to that. Or if you just want to discretize something for the sake of numerics. So it has its merits to this problem, so let's just look at it for now. Well, one way to tackle this is to first get rid of the minimization of the masses. So we're doing this first. And we will arrive at a problem of this form, where we just minimize over the positions. And we have to guess what this JM is now. Well, if we're free to choose the mass that we want, what we can do is for every mass of mu, we always pick the closest point. There's no need to go to another point, which always pick the nearest neighbor. And that means that the optimal coupling will be mu chopped up into the Voronoi diagrams now. So every particle of mu is going to the closest point. And then we put all of this to xi. And then we just choose the masses Mi to be exactly that kind of mass that's arriving. Then we're good to go. This is very easy. And then this JM becomes this very simple formula. So we're integrating over each cell the transport from that cell to its generating point. Very simple. Very easy to solve essentially. Now it's not very simple to solve because it's highly non-convex and x, but apart from that, I think it's decently treatable. OK, now let's do the same thing in the unbalanced case again. We have some practice by now. All right, so all I'm doing is adding and writing a few things. We changed optimal transport to unbalanced here. Again, I am assuming that we prefer to balance the mass if possible. And now we'll stop optimizing over the m again. As before, we know that whatever we're very much of mu, we will chop it up into the Voronoi cells and transport everything into the generating point. And we will choose the mi to be equal to the mass that arrives, that is just as before. And then all that is left is this row. We have to find it. So this j will be the infomization over this row of. Now here we have the transport term. Everything in cell vi is transported to xi. Now here we have the marginal discrepancy term on the first marginal. We have to pay for the deviation between row and mu. And finally, that would be the second marginal term, but we've eliminated that by setting m to be this mass. And then due to this assumption, this guy disappears. So we're only stuck with this guy. We have to infomize over row. And due to this guy here, we can infomize over rows that are dominated by mu. And we can essentially optimize over the density. And if you look at this long enough, we just pull out a minus. Then this is a supremum. There's a minus. Now here's a minus. And then this is essentially a central shawntry conjugation pointwise. And we end up, again, with our favorite function minus f star of minus something. And again, compared to before, it's almost the same thing. We just have to add this guy. Very simple. Very nice. 
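Schematically, the reduction just described gives the following energies; this is an editorial restatement with generic notation.

```latex
% Quantization: approximate mu by  nu = sum_i m_i delta_{x_i}.
% After optimizing the masses, every bit of mass is assigned to the
% nearest site, i.e. to the Voronoi cells Vor_i(x_1,...,x_M).
% Balanced (classical) case:
\[
  J_M(x_1,\dots,x_M)
  \;=\;
  \sum_{i=1}^{M} \int_{\mathrm{Vor}_i} c(x, x_i)\, d\mu(x).
\]
% Unbalanced case: the first marginal rho = s mu is free, and the
% pointwise optimization  inf_{s >= 0} [ c s + f(s) ] = -f^*(-c)
% turns the cell integrand into -f^*(-c):
\[
  J_M(x_1,\dots,x_M)
  \;=\;
  \sum_{i=1}^{M} \int_{\mathrm{Vor}_i} -f^*\!\big(-c(x, x_i)\big)\, d\mu(x).
\]
% For f = KL, i.e. f(s) = s log s - s + 1, one gets -f^*(-c) = 1 - e^{-c},
% so very distant mass is effectively discarded at bounded cost.
```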
And the optimal free marginal, we end up by doing this optimization. We know what it is. All right. So the unbalanced quantization problem seems to be also like everything continues to live on. The unbalanced thing hardly destroys any structure. OK. Now, a numerical method just to get some pictures. In the balanced case, you would use what is called Lloyd's algorithm, or you could use it. And the idea behind this is essentially an alternative optimization. First, for given points x1 to xm, you compute the optimal coupling, which essentially only means you need to find this for annoy cells. And then for the fixed coupling, you optimize the locations. So for every point xi, you try to put it such that the transport from the cell to that point will be keep the cell fixed. It's now minimal. So the new xi will be the generalized weighted center of mass of this old cell. And you just iterate this, and there's a bunch of literature on this, which will usually converge to a critical point. Not necessarily to a global optimizer, but to a critical point of the function. And again, all we have to do in the unbalanced cases, we have to replace this guy by rho. And we have to add our minus f star of minus. And everything else remains. This is still a weighted center of mass just for a different weight function. So even some of the convergence proofs here still apply. We didn't even have to worry about this. So we can apply Lloyd's algorithm with very minimal changes. And now here we have some examples some numerical ones. So again, we start with Wasserstein 2 just to get an idea of what we're expecting. Here we have a slightly homogeneous density. So it's a bit lower at the boundaries, and it's higher in the center. And if you quantize this, then you get a roughly equal distribution of points. Every chunk here is covered by one of these points. Now in the unbalanced setting, you see a quality to see very different behavior. In particular, all the points and all of these three models, they're concentrating around the center. So the length scale here is chosen such that these balls cannot longer cover the full space. So they have to compromise some. They cannot approximate all the mass. They have to compromise. What do they do? Of course, they go there with the masses highest. It's a very natural phenomenon, and you can see it's pre-universal in all three models. When it turns out, this may actually be related to things that we see in real life. This is the population density of Germany. And we just run a quantization method on this just to have a nice picture for the paper. And we got this. And then Benedict realized, well, if I look up the distribution of Ikea stores in Germany, this seems strangely similar. So in Berlin, there's a bunch. There's a few in Hamburg, in Bavaria. This is actually only once in Nürnberg and Munich. And there's a lot of them in multi-invest files. I don't know what the English name of that is. A few in Baden-Württemberg, but virtually nobody in Brandenburg on making book 4.4.1 can go to Ikea. Because Ikea simply says, OK, I can cover everything. I have to make a compromise. And that compromise is making book 4.4.1. So I think that the unbalanced potential problem actually may be able to describe things that we see in real life. So it's not an entirely academic problem. OK. Now, the final part of this talk, and I'm usually a lot faster than I intended to. But this gives you more time for the final part. This is the most interesting one. 
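Here is a toy version of the unbalanced Lloyd iteration for the quadratic cost with a Kullback-Leibler marginal penalty, where the cell update becomes a center of mass weighted by the optimal local density exp(-c). The density, the scale eps and the discretization are illustrative choices of mine, not the speaker's implementation.

```python
import numpy as np

# Toy unbalanced Lloyd iteration (illustrative sketch).
# mu is approximated by weighted samples; cost c(x, x_i) = |x - x_i|^2 / eps,
# marginal penalty f = KL, so the optimal density of the free marginal is
# s*(x) = exp(-c(x, x_i)), which also acts as the weight in the
# weighted-center-of-mass update.
rng = np.random.default_rng(0)
pts = rng.random((50000, 2))                       # samples of mu
dens = np.exp(-8 * ((pts - 0.5) ** 2).sum(1))      # example density (assumption)
dens /= dens.sum()

M, eps = 30, 0.02
sites = rng.random((M, 2))                         # initial site locations

for it in range(100):
    d2 = ((pts[:, None, :] - sites[None, :, :]) ** 2).sum(-1)
    idx = np.argmin(d2, axis=1)                    # nearest-site (Voronoi) cells
    wgt = dens * np.exp(-d2[np.arange(len(pts)), idx] / eps)   # KL weight s*(x)
    for i in range(M):
        sel = idx == i
        if wgt[sel].sum() > 1e-12:
            sites[i] = (pts[sel] * wgt[sel][:, None]).sum(0) / wgt[sel].sum()
        # cells whose weighted mass vanishes keep their site unchanged

print(sites)
```

With eps small relative to the spacing of the sites, some regions of positive density end up essentially uncovered, which is the neglected-region phenomenon discussed in the talk.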
So now we look at the quantization problem. Again, we try to minimize for these locations. But now we're going to look at the limit of very, very many points, like asymptotically infinity many points. So first of all, for technical reasons, we have to assume that x is now a complex polygon in R2. Otherwise, we couldn't do the math. And it has to have a most exciting, but that's not really important. It's important for technical reasons, but not for what I'm going to talk about. And then the other thing is that I'm introducing again this length scale epsilon here. Because if I let m go to infinity, then there will be points almost everywhere. And transport will always be very cheap. And I'm not seeing a lot. So I have to rescale accordingly. And I'm doing this where this epsilon. And it's actually very easy to see that if you're interested now in the limit of m going to infinity, the minimization of this guy with a sequence of epsilon m's that also goes to 0, then what you end up essentially is this guy. And I'm trying to explain this now. So first of all, here we have the asymptotic rescaled point density. So here this epsilon is just for rescaling. We have the number of points over the volume. So this is the asymptotic point density. And this function b is the energy density that you would get with a corresponding regular hexagonal tiling. So if you put over all of the area just regular triangular points in the corresponding veranoicelles of an hexagonal tiling, then you can assign an energy density to that. And this is this energy density. And what you get is actually area times this energy density. So this tells us that asymptotic and the optimal solution will be a regular triangular grid on our problem. This is not just for the back measure. Then everything falls essentially from this guy because this guy gives us a lower bound than this one. And then we just have to construct a candidate for the upper bound and show that it converges. It's not that hard. Now, in the under balance case, again, almost everything survives. We have to add this minus f star of minus, which only affects this guy here, but the theorem still remains applicable and we get the same result. I think for technical reasons, we assumed we were lazy. We assumed that f of 0 is somewhere between 0 and infinity. This not 0 because then otherwise it would be trivial. We don't have to transport it all. And not infinity because then we still have to do good answer to check. Is there a point covering this region? Otherwise, we have to move a point there. It was very annoying. You don't really learn a lot from this. We just said, nah, it's fine. And we're good. So you can extend the proof easily. It's just a lot of nitty-gritty work. OK, now what we get is this energy density B is now a non-negative. That's not surprising. It's decreasing. That's also not surprising. If you add more points, we can decrease the transport cost. It's convex. I did not expect that, but OK. And it continues. And if you let the point density go to 0, then the energy density will go to the cost that it takes to remove the mass completely by the under balance effect. And if you let the point density go to infinity, then it tends to 0 because you don't have to transport anymore. So this is all very natural. And this is the Lebesgue case. And now we can go to a slightly more challenging case with a varying density. Now x is still a polygon with the most excites. It's a convex polygon. 
Mu is now Lebesgue absolutely continuous with the Lipschitz continuous density m. And again, now we have three different regimes. If we let m go to infinity, if the corresponding rescale density goes to infinity, then we are getting the pure transport perfume. We don't care about the under-balanced effects. We just do transport. The cost goes to 0. Because there's points everywhere. In the other case where the scale point density goes to 0, we can't transport anymore because the points are now they seem very far away. So we have to do everything via the under-balanced effect. So it tends to go into removing all the mass with this f of 0. That's also very natural. The interesting part is now what happens in between. If this tends to a finite rescale density. And what we get is this guy, so limit of m to infinity of the minimize of this guy with m and epsilon m tends to this minimization problem. There we now integrate over the domain this point density function times the area. And d of x is now the spatially varying point density. So what you can see here is it will locally look like a regular hexagonal grid, but with varying length scales. And the length scale will depend on where we are. This is this function d of x. Well, what is this function d of x? It's something l1. It's positive. And if we integrate all the point densities, it has to come up to be this p. So we just have to choose this d such that this constraint is satisfied. And then we have to choose it in such a way that this is optimal. So locally, we have a hexagonal grid, which has to distribute our points in such a way that the cost is optimal. Well, and then you can, at least formally, you can see where this is going. If we are under the back case, then d will probably be a constant in space. Then we can just solve this guy. And then d will be p over x. And we're back to the original result from the previous slide. Now, if this is not constant, then we have to minimize over d, substitute this constraint. And what we will get is there is a dependency of x. Some function of the local mass density. And there's a lambda, which is essentially just the Lagrangian multiply of this guy. So there's a non-local, spatially-varied point density. And in the standard bussage-denture case, this is just the square root of the mass density, of the mass density, which just tells us if there's a lot of mass, I have to put more points there. Because even transporting to this point is more expensive. Like in points with math, forget about it. Yeah, I mean, you get the idea. When there's more mass, you have to put more points there. I think that makes sense. The interesting part is that in the unbalanced case, this d may become 0, even in areas where there's actually finite mass. This doesn't happen in the optimal transport case. There will always be a few points. But in the unbalanced case, d may drop to 0, even if there is mass lying there. And you can actually see that numerically very nicely. So we start with a, we keep m fixed. And we have a very small length scale. So it covers the whole space easily. So everything is covered. And now we've reduced epsilon. And at some point, it starts to lose total coverage. And it concentrates in the middle. So here you can see there's finite mass. But we do not cover this with points. We had this earlier. But now if we let m go to infinity and decrease m, it's epsilon in such a way that this remains constant, you can see very nicely that the area that is covered roughly stays the same. 
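The limiting problem described here can be summarized as follows; this is a paraphrase, and the exact definition of the limiting energy density is in the speaker's paper.

```latex
% Crystallization limit: after rescaling, the minimal quantization cost
% converges to a local problem for the rescaled point density d(x):
\[
  \min_{\,d \ge 0,\ \int_X d(x)\,dx = P}\;
  \int_X \bar B\big(d(x),\, m(x)\big)\, dx,
\]
% where m is the mass density of mu, P the limiting rescaled number of
% points, and \bar B the energy density of a locally regular hexagonal
% arrangement. In the balanced Wasserstein-2 case in two dimensions the
% optimizer is the classical quantization density
\[
  d_{\mathrm{opt}}(x) \;\propto\; m(x)^{1/2},
\]
% while in the unbalanced case d_opt can vanish on regions where m > 0:
% these are the uncovered regions seen in the numerical examples.
```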
And it tends to be a beautiful hexagonal group. So this is kind of a numerical confirmation, if you will, of this result here. We have no point density here. We have quite a constant point density in the other areas. And we can compute this asymptotic point density by looking at this guy. It's not that hard to do this. And then what we get is for this input density, which is the same as before, just a different color scale, if we now decrease the number of points that is available, so this rescale point density, if we just decrease it, it starts to chop off regions. And he focuses more and more on the highest density regions. And the same thing happens in this population with Germany density example. You can see in the beginning that IKEA everywhere. But if we decrease the number of stores available, then eventually only the people in the rural area and in Berlin will be able to go to IKEA and everyone else. I have to get to this for each other elsewhere, I don't know. So this is very interesting phenomenon. It's pretty intuitive to, of course, the form improves a bit harder. But if you look at this, it's pretty intuitive to see where this is coming from. And I'm very happy that David came to visit us and that we stumbled into this problem. And I think that is more or less it. Summary, everything survives. It has a relation formulation. But you have this very interesting interplay of the length scale between the mass change and the transport. In the quantization problem, Lloyd's algorithm survives. But now we have this new feature of these neglected regions. And in crystallization, still the classical result survives. We have a model triangular grits. And we have this non-trivial local point density thing. Just by the way, I'm moving to Munich next month. I'm seeking to hire a PhD student in the numerical transport. So if you have a candidate that you do not want to keep for yourself, I'm happy. If you let him know of this. All right, thank you very much. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. OK, time for questions. Any questions for Bernard? You changed the underlying distance, the occulting distance. Yes? Do you consider something unethic topic? Basically, with an unethic topic, wave shape underneath, do you get potentially at the end also other microscopic patterns? I think we talked to David about this. I'm really not an expert on crystallization. My impression is that as long as it's nothing too fancy, I think yes. We will just be, if it's ellipsoid, for instance. So our easiest case would be an infinity one. And then, who is an expert? I would expect that you get a different grid, maybe, with an infinity might be just squares. I'm really just guessing here. Or, yeah. And what are the squares? Diamonds? Oh, it's magic. Do it. So because of the UK's times. My impression is that whatever survives in the balance case might survive in the unbalanced case. So if the optimal tasselation in infinity is diamonds, then I guess it will remain diamonds. That would be my intuition. But we didn't check. Yes? So in the balanced case, when the cell, the some cell disappear, then the action degenerates. Then the action of the function degenerates. And then the constant of the problems. So what happens in your case? Oh, that's an interesting question. So what about the linear case? Yes, because did you code the Newton's method? No, no. We used the quasi-Newton method at some point. But we didn't bother evaluating the Hessian. 
What I think is there some kind of magic compensation from the. So for instance, if you use faster than the Scharal, it will be very tedious because then the lines will no longer be straight lines. I think if you use the Gaussian and Euler-Konatorvich thingy, then you still have straight, you still have standard like air cells. So then you still have straight lines which you could evaluate. And the Hessian, I don't think the Hessian has a meaning in that case. We would just, I think, we would be a king. I mean, there will still be a derivative. It's not completely degenerate because there will be the F0 term, I guess. But I don't think it will be C2. Sure? Maybe it depends on how F behaves at 0. So no, I'm absolutely not sure. Because in the case of the sphere, the case of the sphere with this log is essentially a feature of cost. Then it's equivalent to compute this cells and standard of the matchments for cells. So for different costs, which has been implemented by Compte-Merigou and is able to make by next second order of optimization. But I understand that to do this numerically, at least, you need to be able to handle the interface very elegantly. And this is. Yes, but the interface in the case of the sphere is kind of easy. For this cost function? For the Vessar-Stefichon. OK, I was not aware of that. OK. You use the same function F for the source and the target. Is it essential? No. No. If you want to go to metric. I think you can. Yeah, because we came from the master-scientific raw case, but there's absolutely no reason in the analysis to do that. In complex analysis, there are other ways of combining energy functions or cost functions, particularly in the context case, there's this evolution. How would this relate to what you're doing? Well, it's kind of an internal convolution. This is a bit like internal convolution, right? This is exactly. Oh, OK. In a Romanian sense. I mean, you can always choose either go this direction or go this direction. Any composition on that? Otherwise, I think you can also interpret this as you have an interior transport. And then on the outside, you have additional terms that paralyze the. So you can write this as a nested optimization. That will also be an internal convolution. Not exactly an internal convolution, but because you don't have the minus, but apart from that, we'll look very similar. You have questions? If you will, please. Thank you.
"Semi-discrete optimal transport between a discrete source and a continuous target has intriguing geometric properties and applications in modelling and numerical methods. Unbalanced transport, which allows the comparison of measures with unequal mass, has recently been studied in great detail by various authors. In this talk we consider the combination of both concepts. The tessellation structure of semi-discrete transport survives and there is an interplay between the length scales of the discrete source and unbalanced transport which leads to qualitatively new regimes in the crystallization limit."
10.5446/59234 (DOI)
I think it's great workshop. I'm going to talk about the analysis of shape viability via deformation problems, and in particular how we can incorporate physical model in deformation frameworks. So first of all, in order to study shape viability within populations, for instance, so here examples of three populations, point of view that I think you all like is the morphometric point of view, where so this is a macroscopic approach, and shapes are studied via their geometry. So the idea is to study the geometry of these shapes and to compare the shapes studying their geometry. A nice way to do it that goes back to Darcy Thompson is to study the differences between two shapes by the way deformations can transform the first one into the other. So the idea is to set a space of deformations, and then if you want to compare two shapes, the idea is to search among this group of deformation, the set of deformations, which one transports the first shape as close as possible to the second one. A nice particular case is the LDM framework, where a space of vector fields is set, supposing that it has certain regularity. And then the group of deformations that is used is the one of the final point of flow equation for time varying vector fields in the space of vector fields that you set. So this framework has very nice property that you know. In particular, you can show that you can use the metric on the space of vector field in order to equip the shape space with the metric. And then you have existence of minimizing trajectories. You can do statistics on your shape space. And so there are many, many works that show how well it's working and on practical cases. So here is a slight example of the results that you can get. If you want to use the LDM framework to transform this small leaf into the big one, so I don't know how well you see the colors. So the blue into the black one, it's supposed to be black. If you have no points of correspondence, so here I use the variable frameworks for the curves, which mean that they are really treated as curve. There is no points of correspondence. You set a space of vector field. Here I took a Scaragoshan canal. And you try to find the best trajectory of vector fields, bringing the first one into the second one. So here is typically the kind of result that you obtain. And of course, it's a very good result. So you have a good matching. However, if you look in some specific areas, for instance here, you see that this part has not been really elongated. So here the matching is quite good. And if you have no other information on the shape that you want to transform, that's a very good result. But if you look more closely to the leaf that you want to transform, your goal is to transform, for instance, a growth of leaf, so you have the leaf. So you want to study how the small one grows into the second one. If you're a biologist, what you would look into is where would be the growth, for instance. And here what you can see that this part here is way longer than this one. So if you're a biologist, what you want to do is to design a nice biophysical model, for instance, an anesthecal model. And then you want to derive all the equations given this model and try to understand how the growth is happening given this model. So knowing that you would have a bigger growth here than here, because this is quite similar to this area, you will try to incorporate this in a biophysical model. 
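To make the flow construction above concrete, here is a minimal Python sketch of transporting shape points along the flow of a kernel-generated vector field; the Gaussian kernel, the static (rather than time-varying) field, and the forward-Euler stepping are illustrative simplifications of mine, not the actual LDDMM implementation behind the leaf example.

import numpy as np

def gaussian_field(x, centers, momenta, sigma=0.2):
    # v(x) = sum_j K(x, c_j) p_j with a scalar Gaussian kernel K
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    return K @ momenta

def flow(shape_pts, centers, momenta, steps=20):
    # Integrate dx/dt = v(x) with forward Euler over [0, 1].
    x, dt = shape_pts.copy(), 1.0 / steps
    for _ in range(steps):
        x = x + dt * gaussian_field(x, centers, momenta)
    return x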
It's not possible to directly incorporate such a biophysical model in the LDM frameworks or in the large deformation frameworks. And so what we have been working on is a way to develop an in-between frameworks where we keep the macroscopic and geometrical point of view of the diffeomer geometry. And simultaneously, we try to incorporate some knowledge about the biophysical model. So try to incorporate some priors about the way the biologists say that it should evolve. In order to do so, the goal is to incorporate structures in large deformations. So instead of saying, OK, we'll build any deformation that can be obtained by the flow equation given a set of vector space vector fields, we'll try to incorporate the structures in these vector fields, and we'll use the framework of deformation modules that we have developed. So I will first recall how we can use this framework of deformation modules in order to incorporate the structure in large deformations. And then I will show how we can define modules from a physical model. So if you have some idea about the biophysical model, how you can use this knowledge in order to incorporate it in the deformation modules and then in the large deformations. So in order to incorporate structures in large deformations, as large deformations are built as flows of vector fields, the idea is to incorporate the structure at the level of vector fields and then integrate trajectories of vector fields that satisfy this structure. So there have been many, several frameworks incorporating structure in large deformations this way. So the idea is to define local generators of vector fields and then say, OK, now we'll consider only trajectories of vector fields that are generated by these generators. So all these frameworks define generators of vector fields that are very adapted to a given situation, to some data that are to study. And so alone to understand, to build large deformations that are relevant, for instance, from a biological point of view. In our goal was to develop a generic model. So instead of defining particular generators, our goal was to define frameworks to define easily complex generator that comes either from explicit generators corresponding to a given situation because you know the type of vector fields that should be generated, or as I will show later, that can be defined implicitly from a biological model. So that will come at the end of the talk. The challenge here is to define complex generators such that when you integrate the flow equation, these generators evolve in a relevant manner. And so that you also ensure mathematical properties such as existence of the flow of the flow and existence of minimizing trajectories. So I will now explain more precisely what is a deformation using a deformation? Yeah, what's the deformation module? And how? OK, so a deformation module is a five-fold. It's defined by five elements. And I will explain how they interact with this diagram. So a deformation module is a structure that will be able to generate vector fields. As I said, the idea is to define nice generators of vector fields. So the first element that defines deformation module is the field generator, which is a function that we say, OK, these are the vector fields that I want to generate. The way that these vector fields will parameterize will be by two parameters. The first parameter is going to be geometrical variable that we call the geometrical descriptor. 
The idea is that this parameter will call for the geometry, for instance, the location of the generated vector field. Then a control variable. Oh, sorry. Then the space of geometrical descriptor will be generated by O, the rest of the talk. Then a second variable called the control variable will specify how you want to use the possible generated vector field. So I will denote by H the space of controls. So the field generator is a function that takes input one geometrical information, one control information, and returns a corresponding vector field. As I said, the geometrical descriptors will correspond to a geometrical information. So if the geometry is changing, for instance, there is a rotation happening simultaneously, you want the geometrical information to follow this change of geometry of the obtain space. So you need to be able to specify how vector fields can change, so vector fields and then by integration and different morphisms, how they can change the state of your geometry. And this will be given by an infinitesimal action of vector fields on geometrical descriptors. So note that the infinitesimal action takes an input any vector fields that are regular enough, not only these that are generated by this deformation module. And last, we have a cost function that specifies how much it costs to use a geometrical descriptor and a control. So these five elements define a deformation module. And I will now give a very simple example of deformation modules just as an illustration. So very simple one. OK, so suppose that you know that your image can be transformed only by a sum of two local transitions. And you know the localizing function of your translations. So it's a Gaussian kernel. You know the scale. How will your sum of two translations be parameterized? It will be parameterized by two points. So I don't know if you see them. Two blue points here. And two red vectors. And then here in green is the vector field. So that can be a prior you want to incorporate. You only want some of two local translations. How do we build the deformation modules, the deformation module corresponding to this prior? Well, what is the geometrical information here? The geometrical information is the location of the two translations. So the space of geometrical descriptors will be the space of two points. So R2 times R2. What is the control variable? Which means what is the variable saying how we can use the vector field that we allow? It's the vectors of the two translations. That says with direction we want to push. So the space of controls here will be the space of two vectors in R2. So R2 times R2. So the space of geometrical descriptors R2 times R2, sets of two points. Controls R2 times R2, two vectors. The fifth generator takes the input couples of two points, two vectors, returns the corresponding sum of two local translations. Then the infinitesimal action says how a generic vector field can act on our geometrical descriptor, which is couples of two points. So we just an easiest way to do that, is to just apply the vector field to the two points. And the cost, one possible choice, is just to take the square norm of the generative vector field. So that's one example of deformation module that generates sums of two local translations. So that's a very simple one. But that's an example of deformation module. And this is an example in green of generated vector field for this deformation module. 
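A hedged Python sketch of this sum-of-two-translations module: the geometrical descriptor q is given by the two centres, the control h by the two vectors, and one possible cost is the kernel norm of the generated field; the kernel scale and all function names are illustrative choices, not the talk's implementation.

import numpy as np

SIGMA = 0.3  # scale of the localizing Gaussian kernel (illustrative)

def field_generator(q, h):
    # zeta(q, h): x -> sum of two local translations centred at the rows of q
    def v(x):
        out = np.zeros_like(x, dtype=float)
        for center, vec in zip(q, h):
            w = np.exp(-((x - center) ** 2).sum(-1) / (2 * SIGMA ** 2))
            out += w[:, None] * vec
        return out
    return v

def infinitesimal_action(v, q):
    # A generic vector field acts on the descriptor by evaluation at the two points.
    return v(q)

def cost(q, h):
    # Squared kernel norm of the generated field: sum_ij K(q_i, q_j) <h_i, h_j>
    d2 = ((q[:, None, :] - q[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * SIGMA ** 2))
    return float(np.einsum('ij,id,jd->', K, h, h))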
If you change the geometrical descriptor and the control, you obtain another vector field that is generated by the same deformation module. Now if you have an additional prior saying that, in fact, the directions of the two translations, they cannot be chosen freely. In fact, we know that they have to be in opposite direction and parallel to the line between the two points. Then you can no longer say that the control variable is the two vectors. Because if you know the two points, you know that they should be parallel to these purple points, purple vector. So the deformation module that will incorporate this additional prior would be to, once again, say the geometrical descriptors is set of two points. But now you say, from these two points, I can build these two purple vectors in an automatic way. I can build this vector field. And then I have a scalar variable, which this way will be my control, to which I multiply this vector field. And if it's higher than 1, I'll have a very contractive field. If it's negative, I'll have the dilating field. So note that the vector fields here can be generated by the previous deformation module. The other way around isn't true. Because this deformation module is a more constraining deformation module, you incorporate a higher prior. OK. So with deformation modules, you can incorporate prior at the level of vector fields. Now, we briefly explain how we can use that to study shape variability. The first step is to build structured large deformations. Remember that large deformations are built by integrating trajectories of vector fields. So we need to specify, if we want to build modular large deformations, we need to integrate trajectories of vector fields that are generated by a field generator. So there will be parameterized by a trajectory of geometrical descriptor and a trajectory of control. So we need to specify what are the trajectories of geometrical descriptors and control that we want to consider. We'll consider the ones such that at each time, the speed of the geometrical descriptor, so the geometrical descriptor is Q and control is H, the speed of the geometrical descriptor is equal to the application on itself by the infinitesimal action of the vector field that is generated by this geometrical descriptor and the control at the same time. So we will only consider trajectory of geometrical descriptors and controls such that at each time, the speed of the geometrical descriptor is equal to the infinitesimal action of this field generator applied to this geometrical descriptor. So only where there is a feedback loop acting back on the geometrical descriptor. We also consider trajectories such that the energy defined at the integral of the cost is finite. And then among some regularity assumption, in particular, the UEC, which says that if you control the cost, then you control the norm of the vector field in a fixed space of vector field. You can show that you can integrate this trajectory of vector field. And besides, it's only defined by the initial value of geometrical descriptor and the trajectory of control. Then you can integrate this equation and you obtain finite energy control past on a geometrical descriptor and control. So here is a simple example of modular large deformation with deformation module contractivity that I presented earlier with constant control equal to 1. And the geometrical descriptor are the two points at time 0. And here is so it's contracting. 
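A minimal sketch of integrating the modular flow just described, reusing the toy field_generator and infinitesimal_action from the sketch above; forward Euler in time is again an illustrative choice rather than the scheme used in the talk.

def modular_flow(q0, controls, dt):
    # Integrate dq/dt = xi_{zeta(q, h_t)}(q); `controls` lists h_t per time step.
    q = q0.copy()
    trajectory = [q.copy()]
    for h in controls:
        v = field_generator(q, h)
        q = q + dt * infinitesimal_action(v, q)
        trajectory.append(q.copy())
    return trajectory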
And the geometrical descriptor, they do follow the evolution of the geometry of the ambient space. So now, if you want to use these structured large deformations to study shape variability, you need to study the relaxed matching problem. So the idea is that you have two shapes, F0 and F1, belonging to a shape space F. And you want to match F0 into F1 using a large deformation generated by one deformation module that you suppose no. So you set a deformation module and you want to use large deformations generated by this module, which means parameterized by one initial geometrical descriptor and one trajectory of control. To one geometrical descriptor and one trajectory of control, you can associate this trajectory of geometrical descriptor and control by integrating the previous equation. And then you can associate the quantity energy, which is a regularizing function of your trajectory. And a data attachment term that measures the difference between the target F1 and the transported source F0. Under some regularity assumption, we could show that minimizing trajectory of control exists. They are parameterized by an initial momenta that belongs to the contingent of geometrical descriptor and shape space. So it's the product of these two acquaintances. And you can derive the Hamiltonian equation. So you have the shooting equation. And given only the initial momenta, you can shoot and minimize these quantities so that you perform your matching. So here, in order to do all this, you only need to define the deformation module that you want to use. So once you have your deformation module that satisfies the UEC, you have the existence of a modular large deformation. And you can perform matching of other shapes. I didn't specify that. But in fact, you can combine deformation modules. So I will not go too much into detail. But the idea is that if you combine two modules, the vector fields just add. So you can combine a deformation module generating local transition and other generating local scaling. And you will have a global field, which is a sum of local transitions and local scaling. And you can do that as many as you can. And all the regularity properties are stable under combination. So it's quite easy to build all this. So now the question that we ask is, how can we define deformation modules? In the examples that I showed, the deformation modules are defined by an explicit field generator. So I said, OK, I know my localizing function. I know that I want a sum of two local translations. This is my field generator, to sum of the two local translations. So this is working for many examples. But in practice, if you have super that you study this shape, and you know that at time 0 it's like that. And then it becomes like this. And then the biogist says, OK, model what's happening. OK. There is no way you can directly define deformation module with an explicit field that satisfies that corresponds to this type of evolution. So the idea is to study. But on the other hand, if you are a pure biogist, what you would do is, as I said earlier, derive a really a biophysical model about the elasticity, for instance. So there is elongation, horizontal elongation, no stretching on the vertical, and try to derive all the equations. As I said, what we want to do is an in-between framework. So what our idea is to, from an observation, derive a step of a biophysical model. And then from this physical model, so in this case, for instance, any stretching properties. So what are the directions of stretching? 
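Before moving on to the stretching model, here is a hedged sketch of the relaxed matching functional described above, written in the same toy setting as the sketches before: the regularizing energy is the time-integrated cost and the data term is a plain squared distance on labelled points, standing in for the varifold attachment used in the talk; the actual method optimizes an initial momentum by geodesic shooting rather than a full control trajectory.

def matching_energy(q0, controls, source_pts, target_pts, dt):
    # J = sum_t cost(q_t, h_t) dt + || transported source - target ||^2
    q, x = q0.copy(), source_pts.copy()
    energy = 0.0
    for h in controls:
        v = field_generator(q, h)
        energy += cost(q, h) * dt
        x = x + dt * v(x)                        # transport the shape
        q = q + dt * infinitesimal_action(v, q)  # feedback on the descriptor
    return energy + ((x - target_pts) ** 2).sum()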
What can we say on the stretching properties? How can we define deformation modules given the fact that we want to observe this infinitesimal behavior from an elastic point of view? So we first explain how we can model the stretching properties of the material in a way that can be used to define deformation modules. And then I will explain how we can indeed define the deformation module corresponding to the property that we want to see. I must say that this is really, really ongoing work with Alain Trouvé and Benjamin Chalier. So please ask questions if it's not clear enough. So how to model stretching properties? In order to do the stretching properties from an infinitesimal way, so if you say that you want to go from one state to another, infinitesimally close to the first one, and you want to say that this evolution can be modeled by the action of a vector field, the way to incorporate a stretching properties on this vector field is to use a strain tensor because the local change of the ratio of distance will be modeled by this quantity, which is the symmetrization of the differential of the vector field. So the idea is to use this strain, the infinitesimal strain tensor, and to derive the physical model from it. So for each point of the shape, we'll try to put a constraint on the strain tensor so that we build only a vector field that satisfies the properties that we want to see from a stretching pointer. So how can we study the infinitesimal strain tensor? We can separate it in two variables. The first one will be, so this is a diagonalization, and we are only dimension 2 here. R is a rotation matrix, and so this rotation matrix will code for the principal directions of your strain of the diagonalization, while alpha and theta will correspond to the eigenvalues. So how much you can stretch in the two principal directions? So the idea here is that R will correspond, so for each point of the shape, R will correspond to a local frame saying what is the local frame, what is the local axis, the autonormal basis attached to this point. And so of course, if the shape is rotating, we want this frame to rotate also. So it will be part of the geometrical descriptor when we define our deformation module. Why alpha and theta will be the way to really model the stretching properties. OK, given that the principal directions are these ones, what is allowed? Should it be stretching this direction, this one, bigger than which one? So the idea is R to pose the physical model via these coefficients. So for each point xi on the shape, we'll have two coefficients, alpha and theta, so to the two direction of your internal basis. And you will say that the value of these two coefficients will only depend on two things. One is ci, what I will call ci, which is a matrix that will depend only on the nature of the point. So to each point, we will attach, in a sense, a nature of the point. And that will say if one direction is without stretching, for instance, so we'll have 0. And then h will be a control variable. And you can say, OK, this is the behavior of, so OK. So if h is only in dimension 1, you will have ci, is two coefficients. And then h will say, OK, I know that beta should be twice alpha. But now I want everything multiplied by 10, because there is a big, big stretch. Or I want everything divided by 10, because there is a very, very small stretch. If you have h of dimension 2, you will allow two types of different behavior. 
And you will play between these two behaviors by choosing the control variable. So this is the idea of how we'll build now the deformation module. So I will now give the exact definition of the deformation module, given that we know the matrix ci. So the whole modelization comes here from the matrix ci. And then we will define the deformation module corresponding to these stretching properties. The first thing to define is the space of geometrical descriptors. So what are the geometrical variables? It will be the points of the shapes attached to the local frame, r. So we need to say, how can they move under the action of a vector field? So it will be the action of the vector field on points and rotation matrices. Then the space of controls. In the examples that I will show, the space of control will be of dimension 1. That's a choice only for these numerical simulations. We can do that with higher dimensions. But it's easier to understand with only dimension 1. So ci, for each point, will have only two coefficients. And I suppose that you know the ci, so the two coefficients, attach to each point. We need to define what is the field generator. As I said, it's not going to be explicitly, because you cannot directly define front ci. So given a geometrical descriptor, which is a collection of your points and rotation matrices, and given a scalar control, you want to define the field generator as the best vector field that will have the strength tensor equal to what you want. So what you want being given by the ci matrix is that can be rotated under the action of your rotation that will say, OK, now it has moved, and multiplied by the control. And you want to have this constraint on every point. So the field generator will be the first one. It will be defined as a minimization of the functional that has two terms. The first term will be concentrating the strength tensor at each point. So you want to find the best v, so that at each point, you minimize the distance between the strength tensor and the values that you want, where ci is a diagonal matrix with the two factors, like that, that you have set before. So this is alpha i and beta i are given by a modellization. And you multiply by a control here. The control is the same for everyone. So you choose a control, which will say given the modellization that you have, and given the local frame at each point, how much do you want to use this? Very much for a high control, very few for a small control. So you compare your strength tensor at each point to what you want, and you regularize by the norm of the vector field. So the field generator would be the one, the vector field, in v, you fix the space v, minimizing this quantity. In fact, you can show that you have explicit formula from this, and it's not so hard to compute. So this totally defines your field generator. The last thing that you need to define is the cost and the choice that we have made, just the square norm of the vector field, so that all the regularity properties that we need are satisfiable. So now, given the matrices, ci, for all your points, you can automatically, secretly define your deformation module. Your deformation module will generate vector fields, subsets, strength tensor at each point, is very close to what you want, which could not have been possible if you had directly defined your vector field. And then you have the whole deformation module that is defined. So you can combine deformation modules. This sub-deformation modules are also the one. 
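In formulas, the implicit field generator just described can be written as follows, assuming V is a reproducing-kernel space of vector fields and writing epsilon(v) for the infinitesimal strain tensor; the trade-off weight lambda in front of the regularizer is an illustrative normalization, not a quantity from the talk.

\[
\zeta\big((x_i, R_i)_i,\ h\big) \;=\; \operatorname*{arg\,min}_{v \in V}\ \sum_i \Big\| \varepsilon(v)(x_i) \;-\; h\, R_i C_i R_i^{\mathsf T} \Big\|^2 \;+\; \lambda\, \| v \|_V^2,
\qquad
\varepsilon(v) = \tfrac{1}{2}\big(\mathrm{d}v + \mathrm{d}v^{\mathsf T}\big),
\qquad
C_i = \begin{pmatrix} \alpha_i & 0 \\ 0 & \beta_i \end{pmatrix}.
\]

Here h is the scalar control, and the cost of the module is then taken to be the squared norm of the minimizing field, as in the talk.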
You can build modular deformation, and you can perform shape matching. You have the geodesic equation with the geodesic shooting that are parameterized by initial momentum. You have all the machinery of deformation modules that work. And in order to do that, you only need to say how many points you want, and what are the values of ci, so what is the nature of each point for an elastic pointer. So we now give a few examples of geodesics that can be generated by such a deformation module. So here's the deformation module, all the points that has 25 points. And all the blue points are these xi. What you need to decide is what are times 0, also, the matrices are i. So first, I'll take its rotation, no rotation, so it's just identical matrices. And you need to say, what are the coefficients, alpha, i, beta, i, so the ci? And here I say, it's this. So what does this mean? This means that on the local frame, the horizontal stretching needs to be equal to 0. And you let free the vertical stretching. So you have a local frame, and you force the strength tensor to be horizontally equal to 0. Then if you shoot with this initial momentum, so I just put the green is just to see what's happening. It's just a visualization purpose. So all the initial momentum, every point and rotation have an attached initial momentum. They're all equal to 0, except for this one that are the red vectors. And now I shoot. I have the geodesic equation, and I shoot with this initial parameter. And here is the trajectory that I obtain. It's elongating. And it's elongating. It's not like a local scaling. There is no stretching on the horizontal axis. And the stretching is uniform here. It's not from a Gaussian localization, something like that. Because it's respecting the constraint that the strength tensor should be vertical here. Now if we do the same, but we change the rotation matrix, we say, OK, the frame attached to each point, it's not the classical canonical frame. It's a rotation of pi over 4. So you rotate. But you have the same CI. So for each point in its local frame, there is no stretching on the horizontal, but the horizontal is now pi over 4. And there is stretching in the vertical, which is now pi over 4. So I shoot with the same initial momentum. So the only thing that I change is the initial rotation frame for each point. And here is the behavior that I obtain. Of course, because I said there is no stretching there. So you do whatever you can to push this way. OK, I can go this way. So that's the answer. Given your biophysical model, that's the answer. Of the deformation module. If we go back with a rotation that is equal to 0, so matrices are identity. OK, five minutes. Sorry. Identity. And you say, so the contract is already equal to 1. You say, OK, I can have stretching in both directions. But if one is an elongation, the other one is going to be a contraction. Because for example, you know it's divergence 3. You shoot with the same, once again, the same initial momentum. And here is what you obtain. You have a stretching, but it's getting thinner and thinner. Because if it wants to go up, it has to get thinner. That's in the model. That's in the elastic model. And last, the funniest example. In all of the previous one, I showed matrices CI that were independent on the point. They are all the same for all the points. But of course, you can make something different depending on the points. So here, I said that the rotations are 0. So the Rooker frame is vertical and horizontal. 
I said that there is no horizontal stretching. But there is a vertical stretching that is allowed. But it has to be bigger when the axis is bigger. So on the right, it's going to be equal to 0. Because it's a maximum value for the x-axis. But it's going to be super high on the left. And now, I shoot with the same initial length. And here is what I obtain. Why? Because it has to remain from the horizontal line. It needs to stay of the same length. One important thing here is that as the matrices are geometrical descriptors, they fill the change of frame. So locally, as on the left, it has to grow more. That on the right, you will have infinitesimally a local rotation. So the local frame, our eye, will follow this evolution. And what is forbidden to stretch is now this direction that has been rotated by the global flow. So the horizontal line 0 is in the local frame attached to each point. So at time 0, we will have no stretching there. But at time 1, you will have no stretching in that direction. Because locally, what's forbidden for this point is this direction. And what's favored is this direction. So now, I will briefly show a little study of growth that is a super ongoing work. So this data comes from this article. They studied the growth of this leaf that becomes this after a certain time. So it's not the same scale, 38, 100. So this one is like three times bigger. And they followed this dot. So they put black dots. And then it became like that. How can you model the growth? The idea is to use the deformation module that we have built previously. So we need to say, what is the matrix C i attached to each point? The first question is, will there be a privileged direction? So which means, do square becomes rectangle? Losange? I say, losange. So here, you can see that a square, basically square here, stays almost a square here. So we say that it's going to be isotrope. So no favored direction. However, you can see that here, the growth is way bigger than here. So you need to define a profile for your coefficients. So the two coefficients of your matrix C i are going to be the same, but the higher ones are going to be smaller than the lower one. Because here, you have a way bigger growth than here. You can see by following the dots. So the idea is that this is the contour of the first shape. And we put almost randomly our point x i. We say that the first model, that the matrix are identity. So rotation equals to 0. And we need to define the profile of C. So here is the profile. It's going to be super small. So C here, and here is the coordinate. OK, a y coordinate. So it's going to be small for up high points, and big for the low down, sorry, points that are on the bottom. So this profile has been made by Alan. And it's like a polynomial one. And now if you want to use this deformation module, given this elastic practice in order to match the small one into the big one. So the big one is the contour of the second leaf. We will do a matching, optimizing the initial momentum with the deformation module. And we need to add a translation in order to position the initial leaf. This is really easy with our framework. We just combine the deformation module that generates a translation with the large scale. And here is what we obtain. So it's working quite well with very basic parameters. So I will skip next part. OK. And just one point, if you want to have a better matching, you can add a new layer of deformation module. So we have implicit modules centered at the group points. 
We have one large translation materialized by the red, big red points. And you can add some small transition by just adding a new deformation module. That's really easy with the framework. You just combine a new deformation module. So you can have as many layers as you want. And you perform a matching that is now a combination of the very constrained one with elastic properties with the almost unparameterized one with some local translation. And you can see that the matching is getting better. This was without the small translations. And this is with local translations. So you can see that here and here. So here we still have small prime, but it's getting way better and easy to improve. OK. So I think I'll stop here. And OK. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. OK. So we have time for questions too. All right. I'm not sure. So are you doing inference and actually optimizing over your controls? What do you call the controls, even? No. Your controls. So the control variable h. OK. Just go back there. OK. Just go back there. Are you doing any inference in your last leaf example? Are you optimizing the matching? What we are optimizing, it's the control variable h, of course, because that's what the Geodesic shooting does. But the coefficients ci are set. So there are some small try, I guess I don't try this, the profile, but quite easily you can find something that's working, showing you can see that it should be bigger in the bottom, smaller in the top. Oh, so you're not estimating that? No. No, no. But in the same way, like biologists, when they run some equations, they... But you've got dots. The whole idea of putting dots there is to infer the profile, right? Yes, yes. That could be... That's the whole idea of putting those dots in so that you can... Yes, yes. Otherwise, you don't have the conclusion. I mean, others you don't have to estimate the biological model. Yeah, these dots. Yes, yes, yes, exactly. Right now, we are just using... I mean, that's a very, very new application. Right now, we're just using them to visually... Estimate the matrix CI, but yes, it could be a good idea to use them to... To really derive better... I mean, yeah, to infer it. So you have a question? So I think... Okay, go on. Yeah, it's... So how do you modify your product if you really want to address the dots? You would have some landmarks that show the term. So if you wanted to estimate the CI, so you shouldn't have done that. You would have constraint essentially. You would have retrieved the right CI. CI didn't have the dots. When we do the matching, I don't have the dots. Yes, but if you have, as you're really afraid of the seeds now, do you think you would retrieve the same shape, or the same function as... Exactly this one? The one you want, that gets smaller dots at the top and the arrow at the top. So you mean that if we know that we want to... You know, given the dots, that you want to get that kind of shape for seed. So the question is whether it's unique or... What would be your expectation if you had to estimate seed in addition to the edges? On the boundary, I'm not sure you can do that because that's... That's right, I mean LDDM matches the boundary. So you cannot retrieve a CI from the boundary, I would say. I think this is really important. This is really some prior knowledge you have to have. You can have many, many different solutions. 
And if it's growing from the base, it could be a different thing, and there is no way to tell, I think; this is something you have to say, something you have to include. So, on that question, I think we need to stop the discussion now. Maybe we have to... Sorry.
I will present how shape registration via constrained deformations can help understanding the variability within a population of shapes.
10.5446/59197 (DOI)
to be invited to speak in this workshop. I'm very honored that you managed to fix this before mine. Thank you. You're welcome. And I'm also very happy that you put me after Fabio. He explained a few things about Betti tables that I was using. So before you do variations, maybe you know that you have to play the original theme. So this is variations on the minimal resolution conjecture. So let's recap: look at Lorenzini, in maybe 93 or earlier, who had a conjecture. Let's say 1993. Regarding what should be the minimal free resolution of a general set of points in Pn. And so for a general set of s points in Pn, we should have a minimal free resolution with no consecutive cancellations. So if you have a minimal free resolution of some module, you will have a relation to the Hilbert series: the sum of h_i z^i is equal to 1 over (1 minus z) to the number of variables, times the sum of (-1)^i beta_{i,j} z^j. So this is something you get from the additivity of the Hilbert function. So this is one way to see this. And another way, which is of course equivalent, is to multiply both sides by (1 minus z) to the number of variables. And you will see that you get an expression for the alternating sums of the Betti numbers in terms of the Hilbert function. So what Lorenzini proposed was that you compute this thing and then you have a guess for what should be the Betti table of this general set of points. So let's try an example. So let's take 10 points in P6. So the Hilbert function is 1, 7, 10, 10, 10. So for the Hilbert series you multiply by z, z squared, z cubed, and so forth. And then you want to multiply by (1 minus z) to the 7 here. And you can see that one of the factors here will cancel and you get something simpler. You get (1 minus z) to the 6 times 1 plus 6z plus... 4? Did you do the computation already? Three, maybe. And then you can actually compute this. I won't actually do it, but you can see it starts like 1, of course. It's not so surprising. But then the next one is minus 18 z squared. And then it continues like this. And the last term will be 3 z to the 8. And then you can write down the expected Betti table by taking these numbers. And you see that you have nothing here, following that rule, because you didn't have a z term. And then you have 18 coming from this column here. And then you have 52, 60, 24. And it goes down like this. So this is the expected Betti table. And then you can try it out. Ask Macaulay2 to do this. And I did, this morning. And I forgot to say that you should do this over a finite field if you want finite time. Excuse me. I was very worried that my machine would actually die from the fan. It was working very hard. But if you do it over a finite field, it will actually finish in like half a second. And you will see that this is actually what you get. So then you might try to add another point. And you take 11 points. And if you want to look at just the Betti table like this, what you're going to do is that you add a Koszul complex on the last row. And then you see that you want to cancel as much as possible from that. So you have a 1 here. But that would cancel with the 18. So you get 17. So then you expect to have 17, 46. These numbers are really important, actually. Let's look at the back of the grid: 25, 18, 4. And then you can ask Macaulay2 again to do this. And maybe you know already, this is not what you get. Instead, you will get something different in this position and this position.
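The arithmetic just described is easy to reproduce; here is a small Python/sympy sketch (the choice of sympy is mine, the talk's actual resolution was computed in a computer algebra system) that multiplies the Hilbert series by the appropriate power of (1 - z) and prints the coefficients whose signs give the expected Betti numbers.

from sympy import symbols, cancel, Poly

z = symbols('z')

def expected_numerator(hilbert_function, num_vars):
    # K(z) = (1 - z)^num_vars * Hilbert series; the Hilbert function is given
    # by its initial values followed by a constant tail (the number of points).
    *initial, constant = hilbert_function
    series = sum(h * z**i for i, h in enumerate(initial))
    series += constant * z**len(initial) / (1 - z)
    return Poly(cancel((1 - z)**num_vars * series), z).all_coeffs()[::-1]

# 10 general points in P^6: Hilbert function 1, 7, 10, 10, 10, ...
print(expected_numerator([1, 7, 10], 7))
# [1, 0, -18, 52, -60, 24, 10, -12, 3]: the coefficient of z^j is the
# alternating sum of Betti numbers in degree j, e.g. 18 quadric generators.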
So this was something that the French Friar observed in a few years after we saw this conjecture. So this was probably first observation. This is not true, in a sense. But that was computational. And a few years later, I suppose that has proved that we have infinite number of conflict levels. So this conjecture is starting from this as the first thing. And they use something called B, transport and de-obdarity. And this gives an abstraction for this table to occur. That proves that at least you cannot obtain this. It doesn't tell you which one you will get, but you will not get this one. You get at least the five here, at least one here. But nevertheless, there are many cases where this conjecture was proven, like in P3, P4, and for a large number of points. So here's how this and Simpson proves the minimum versus conjecture for S really big, given that 8n for large enough. But it's really hard to get the actual bomb for what should S be explicit. So that was maybe the original theme. And the first variation is now the Musatta. The Musatta gave a variation. And maybe someone knows if 98 was the thing. And that was saying that if you have general points on your divisible variety in piano, we should have a similar thing, except that we cannot expect that the table to be the same as for general points, because you will soon later see that the ideal of the variety will be part of your ideal. So certainly, the citities given by the variety will be seen there. But the idea is that all the rest of the resolution will just be the same, like on the bottom lines of the other debited tables, there will be some two lines. And you will have no consecutive oscillation. And this has also been proved in a number of cases, like small degree surfaces in P3. But it has also been disproved by Fakas and Musatta. And Fakas disproved for curves of high degree. So there are a lot of cases where it's proven, but there are also cases where it's been disproved. So that was the first variation. So then you can have some other kind of projections like this. You can look at any kind of hermitage space where you have a general element. And you can ask what is the vegetative budget general element of this thing. So for example, something we saw earlier here for Gorenstein and for us, at least what we call compressed. There is a similar guess for the projection of what this should be. But even if you start with the in Codimension 3, you look at the books on Heisenberg's structure theorem. For example, if your Hilbert function is 136631, this is one of our favorite Hilbert functions, by the way. So then the vegetable will look like this. You'll have a 4, 1, and 1, 4, and 1 like this. So this is the vegetable you get. And there is no way you can cancel these things because of the structure theorem saying that you have an odd number of generates that you have this skew symmetric matrix in the middle. So here you see that there's another kind of obstruction. We saw one kind of obstruction was given by this Gale transform. So one way of looking at this is that you have this minimal resolution projection in general that expects everything to be as general as possible until you observe an obstruction and then your expectation changes. You no longer expect this to happen because you know what, you won't. But then you hope for the next best to happen that you actually get a 4, a 1 only, and not a 3 or something. That could also happen. So that's one thing. 
And another similar thing is just to look at, for example, symmetric, what I call symmetric, for instance, algebras. And they are symmetric in the sense that the Hilbert function is symmetric. But if you take the ideal, the denominator of the symmetric polynomial, you know that the algebras are spin by a single dual generator in terms of my equilibrium system. If you take this to the symmetric polynomial, like a symmetric with the expected action of a symmetric group on the variables, then you still have a nice primitive space of form. So if you think to be, for example, if you look at the case we saw earlier today, another Hilbert function, you didn't see the Hilbert function. It said g equals 7 on the board. And then you saw something like this. Was that what you saw? So this was the better table that occurred earlier today. So if you take a general polynomial of degree 3 here in five variables, we know from the previous talk that this is the better table you get. Because there is sufficient to find one example where this happens. The question is, can you find a symmetric polynomial where this happens? Then you have to do another example with a symmetric polynomial. And then you can try the number 2. And you will see it will not happen. So the question is, there has to be a microscope structure. Since we don't get the expected thing, we have to find the abstractions. So what is the abstraction for this table here? Well, that's coming from representation theory. So instead of writing this, this was the computation that we were supposed to do in order to get these numbers. If you just do this thing, this is what you see. But now you should think of all these things as being representations of the symmetric group. And then this is no longer just 1 minus z to the 5. But this is coming from the Cossill complex. You should think of that. You take your martinian algebra here, you transfer it to the Cossill complex, and then you compute the homonous. That's the way you compute many numbers in one way. So then this is no longer this thing. But this should be something like you have this symmetric representation here times nothing minus z times the next one will be this plus another representation, which is this. So this is a one-dimensional, plus a four-dimensional representation. And then you continue starting this. And then this one you can see, since you plug in a symmetric polynomial here, you know what is the representation there. That's the trivial representation. So the next factor will be something where you have this trivial representation. This is the way I write the trivial representation. I'm going to do it that way, but I don't. And then you have the similar thing here. And then you have the trivial representation again here. And then you make this computation. And when you do the computation, of course, what you do is take the tensor product of the representations and expand this. And then you will see what will be the, as you were looking at now, to be 1, 2, 3, 4 here. If you look at the coefficient of z to the 4 in here, you will see that you get two negative terms, representations corresponding to this partition, and two positive terms. And there. And this one has dimension 4. This has dimension 5, dimension 4, and dimension 5. So if you just compute the numbers, of course, they cancel. But they're different representations. So you're forced to have actually nine dimensional space here, nine dimensional space here. 
This is the least you can get, because you cannot cancel those as representations. So that's one other way of looking at this. So the corresponding minimal resolution conjecture for symmetric polynomials like this would then be: OK, we expect nothing more than we see when we do this computation. Nothing more, nothing else strange happening. So MRC for symmetric forms would be that there are no more syzygies than we expect. Now, when we've done this computation... Is that a theorem? That's the conjecture. The general conjecture would be that, OK, and in practice, you'll see a lot of cases where this is true. I have one other amusing case where, I mean, if you take the generic 3 by 3 matrix in nine variables, you take the determinant of that. That's a cubic. And you compute the Betti table, and then you get something like this happening. And everything that you will see... that number is actually 100, 100 more than you expect. But still, that 100 you can see from the representation theory of GL3 times GL3 acting on this matrix on both sides. So you can see that this is actually a thing that is like a tensor product of two 10-dimensional irreducible representations. So there's no way they can cancel, because they're not the same on this side. Presumably, in small characteristics, you expect even more. Oh, right. Yeah, probably. That's true. Yes, thanks. So there are lots more variations to make on this. But now I'll move to the variation that is joint work with Christine Berkesch here. And Daniel Erman, who probably wished to be here, but couldn't make it. So this is saying that, OK, we know a lot about Betti tables up to scaling. So this is MRC up to scaling. So then look at this table, for example. I mean, this corresponds to this Hilbert function. And I say, OK, I'd like to multiply this by 2. So I take 2. I get 6, 12, 12, 6, 2. And then I can actually find a module, which is symmetric in a nice way, where you get 8, 8 here, 0. So it's possible to cancel these things by allowing to multiply by 2. So this is possible. So this is actually an example of a pure Betti table, where you have only one nonzero entry in each column. Possible means that it's possible to realize it, not just numerically? Yes, so if I take a generic 2 by 2, in a sense, symmetric matrix, and I think of this in terms of the inverse system, then I will get this resolution. I did this this morning, and it finished in time for the talk. So it is possible to do that. So the question that actually Christine and Dan started thinking about before asking me to join the team was the following. Can you always do this? Cancellations. And at first, you think that, OK, we have this wonderful theorem by Eisenbud and Schreyer telling you that everything we know about Betti tables up to scaling can be answered in terms of these pure things. And it's just numerical. After that, it's just numerical. And this is almost like saying the rest is linear algebra. Just proving that the matrices have full rank or something. That's just linear algebra. So it turns out this is not so easy, even though it's just numerical. So there are a couple of phenomena that I'll describe; the first one is jumping. So you might think that you start with something you know exists. So you start with a Betti table that you certainly know exists, just by taking two Betti tables and adding them together. So this is a Koszul complex in the first row, and this is another Koszul complex.
Shift it a little bit in a higher degree. So you add these two, and you see you want to cancel these things. OK. And then you see that there's sort of a problem here. That was right. I didn't get it. I didn't get it. Even how much you multiply this by a number that will still be a problem, except if the number is 0. So what turns out to be possible is to do this cancellation here, but at the price of moving this one over here. So the two, the one over here, had to jump to make this happen. So this is the jumping. So that's the first surprise that you have to jump when you do this. And the second annoying thing is that you don't have uniqueness. So no uniqueness. So if you start with a number, but you check that I have the right number. So if you start with 3964 as the little function, then you will get some value tables that look like this. So this is the value table. And these are possible for all x and y's where x plus y is 3. So whichever number is possible or not negative numbers that you plug in here, those are possible. So if you do this to scale, you will see that you get a very large number of possibilities. This is going to get intable, but it's possible to do. So there's no uniqueness. So then we can look at this from another point of view. So you start by looking at the small cases, the first cases you can understand in terms of regularity. So regularity is just the number of rows, essentially, in this diagram, except if you had to add one. So if you have regularity 0, there's only one possible thing. That's the causal complex. So that's nothing. But the first interesting case, this regularity argument was 1 where you have something. And that means that if you restrict yourself to martinian modules, that means that the Hilbert function just correspond has two entries. So h0 and h1. So you can easily write down the code of possible Hilbert functions here, because you know that both of them have to be non-negative. So you know that this is the code of Hilbert functions. And then you might look at what are the possible meditators that are consistent with having no consecutive calculations like this. So then you realize that, OK, they have to look like this. That means start up with non-zero numbers, and then they go 0. And then you can go down to the second row, and then it starts getting non-zero. It starts getting non-zero. So this is the kind of meditators that are consistent with this minimum resolution projection. And then if you start seeing this, then you see that, OK, you can move this place over to the second line. And that will actually correspond to dividing this code into subcodes like this. Where in each subcode, this means that you're in this. The first one will be where you jump here, and the last one will be where you jump in the last. So it might also be wise to look at this instead of looking at the cone to take the hyperplane where the sum is 1, and look at what is twice there. And then it's just an interval. And this interval is actually divided into subintervals of the same length here. And the points where you go between 1 and the next is like i of n and 1 minus i over n. So this doesn't look like a hyperfunction to you, but if you scale it, it will look like one. So these are the jumping points. And that means that these ones are the ones that correspond to pure meditations. So you go between these cells when you hit pure. 
And actually, those things are given by, if you remember this polynomial that we wrote out in the beginning, so multiplying 1 minus z to the n by the series that gives you a polynomial in this case. And the coefficients of that polynomial will be the points or actually the equations for these hyperfets. So in general, the coefficients of this p of z, which is 1 minus z to the n times the series, will give the hyperplanes. OK, that was easy case. So then we move to regularity 2 for its more complicated. So maybe before doing that, I will say that the diagram that looks like this, where you just go across like this and turn, and I don't know, what would you call such a diagram? You would call it the snake. Right? And we did so until we thought of writing down this. I really thought that we couldn't have sectional snake diagrams or snake tables. So we call them semi-pure. They could be maybe natural or something, but it's a similar thing. So we call them semi-pure. So in regularity 2, it's too hard for me to draw this diagram in 3D. So I will do the intersection with this. And then you all know that you get this nice, maybe I should draw it more, since you have to see. I have this white thing here. And this corner corresponds to the hyperfunction 1, 0, 0. This corresponds to 0, 1, 0, and this to 0, 0, 1. And then we know that between this corner and this corner, we're in this region, where you go between 1, 0, 0, 1. So here, we'll actually have a number of pure diagrams, or pure ready tables, and in a similar way, you can go between this one and that one. And then you look at these hyperplanes, or these coefficients, vanishes. And then you will see that they will actually connect, if I'll draw them blue, they will connect these things. And this is the picture we'll see. And the intersection of these things, if you think about it, that means that two of your coefficients vanishes in that polynomial. But that exactly corresponds to pure ready table. So the intersections of these are the pure ready tables. And now, what you can see is that this region here is sort of triangulated. And this is the region where we have semi-pure diagrams. And this region was for a while called sort of mysterious region, or the non-snake region, or something like that. But that's the rest of it. So how do we explain that region? So we know that this is semi-pure region, but this region. So it actually turns out that this one corresponds to diagrams that look like a square. This one corresponds to diagrams that look like this, like a full first line, and an empty middle line, and then a full last line. So these are the ones that. So what we can say in general for any regularity is that this semi-pure region, so this is proposition maybe, the semi-pure region is given by the number of sine changes in this polynomial, P of z, or actually, P of minus z. So you can count the number. Sine changes in P of minus z, that will tell you exactly where you have to jump between one line and the next at the sine changes of that polynomial. And that is if and only if. So you know that you have to jump to the next line if you have the same sine. And if you have the right number of sine changes, then you know that you are in this region. So that's pretty good. And then we had a feeling that these are the natural variables that we actually meet, the semi-pure ones are the ones that we meet in reality. Sometimes you have some, but you don't get very far. 
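A small sketch of the sign-change count from the proposition, for an Artinian module where the Hilbert series is already a polynomial; what the "right number" of sign changes is for a given regularity is left implicit here, as it is in the talk, and the example Hilbert functions below are arbitrary illustrative inputs.

from sympy import symbols, expand, Poly

z = symbols('z')

def sign_changes_in_K_of_minus_z(hilbert_function, num_vars):
    # K(z) = (1 - z)^num_vars * sum_i h_i z^i; count sign changes of K(-z).
    K = expand((1 - z)**num_vars * sum(h * z**i for i, h in enumerate(hilbert_function)))
    coeffs = [c for c in Poly(expand(K.subs(z, -z)), z).all_coeffs() if c != 0]
    return sum(1 for a, b in zip(coeffs, coeffs[1:]) if a * b < 0)

print(sign_changes_in_K_of_minus_z([3, 4], 5))      # a regularity-1 example
print(sign_changes_in_K_of_minus_z([1, 3, 6], 3))   # a regularity-2 example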
So maybe you can say that, OK, if you take a module at random, you will get there to the semi-pure region. So what is the probability of ending up in the semi-pure region? So unfortunately, it's not very big. So it turns out that the semi-pure region is small. So one way to look at this is to measure it. The region does depend on the number of variables. But if you forget about the number of variables and just think that number of variables is very, very large, so you let n go to infinity. And then you look at how big is this region. So it turns out, in this case, you will get a curve which is slightly above this. So that's a limiting curve. So you get the region with a nice area, but it's not the whole thing. So in this case, it would be 1 third. Fine. But the theorem that we have, according to the probability to be in the semi-pure region, is what? So it's going to be 1 over the product from i goes from 1 to r minus 1 of 2i plus 1 to i. So if you plug in the first one, r equals 3, that would be 1 over 3. That's pretty good. So r equals 4 gives 1 over 30. And r equals 5 gives 1 over 1,050. r equals 6 gives you 1 over 132,000 something. OK, so it's going to be incredibly hard to get through that region if you throw darts. Yeah? So you're throwing darts by picking in the compact region the red function. Yes. So that's OK. Throw darts here and see what's the probability to end up in this region where we actually understand things perfectly well. So in this case, in a regular d2, we also understand the other part. And in a regular d3, we also understand what's happening. But there are more than two parts there. So it's a little more complicated. So our theorem is maybe that MRC up to staving holds at least for regularity up to 3. So in regular d3, what do we have? So we have three regions. The first is the semi-pure region. And the second one is where you have diagrams that go like two such things with appropriate things in the middle so that you don't get translations. So both of these regions are actually given by the information about the number of sign changes and that. So you just look at the convex hull of the pure bedit tables given by such a sign pattern. That will give you the vertices of the convex region, which corresponds to something like this. So it's more complicated than 3. But then there's the third part. And that corresponds to things where your module actually decomposes as the direct sum of two things, two regularity one things. So that means that you have something like this on the top and something at the bottom like this and something clearly in between which is giving you the distance between these things. But then you allow everything to be possible to fill here and fill here. But that's in a way that is compatible with that thing. So here we have these three regions. So the conjecture would be that we have in regularity R, we will have at least R regions. Getting more complicated. Complexity seems to increase by the regularity. So do you cancel things with alternating signs or they have to be a neighboring homological degree? The regularity 3 could have something in the bottom row and something in the top row that would be the width. Yeah, sure. This is the kind of jumping thing that we allow things to jump so they can cancel from this part to that part in a sense. But if you have two ones that were part of you could you be able to cancel them? We're not interested in that. So the minimum resolution conjecture is only about canceling the ones that are consecutive. 
We want to cancel those and we're happy once we've canceled all those. But we need to cancel all those. So one other way we went was to look at, OK, we know that it's incredibly hard to get to this semi-pure region. But for some reason we had a feeling that this is what we actually see. So one theory saying that this is actually what's happening, that is that for a release of embeddings of Pn of high degree we get to the where we land in the semi-pure region. So what I mean, so if I fix n and I take d at least large enough then sooner or later I will be sure to be there. Probably I will be there already from the beginning. But our proof only works for going there eventually. And this is actually related to work by Brenti and Belker about, let me say, embeddings and how they behave. So maybe I should take one example. So if n equals 3, so what is the, when you increase the embedding, the degree of embedding, what is the eventual Hilbert function that we will see? It turns out that that will be 0, 1, 4, 1 when you code it. So now d goes into n, which means that the Kettle and the number of others goes into n. And some of you might recognize this sequence. Maybe it's too short. But these are only the area numbers. And this is the first example. And then you can actually compute these things and see that you have the right kind of sign changes to land in this semi-pure region. So, but you're not saying that the resolution of the verin has been changed. No, no. But if you have a good question, yes. The resolution is not so important. But it's possible to actually numerically cancel all the things. You know that there are lots of things that don't cancel. But that's, a lot of that has to do with the multiple, I mean, the multiple grading. If you actually took into account the multiple grading, there were only fewer things that you were not expecting to. Yeah, that's a good question. Yeah, so if you peel off all the cancellations, they look like that. Yeah, I think I will stop here. So I have some pictures on my computer that actually were very helpful in pursuing this. Yeah, seeing this, but also like the 3D kind things to see how this looks. But I don't think I will trust to start the project, so you can ask me to show you. I'll show you later. OK. APPLAUSE Any questions for us? When you're talking about probability of being in this thing, how do you recall? Well, it's just, I mean, it's a strange way. It's just the area of this region compared to the whole area. So that's my probability of measuring. So. No, let's thank the committee. APPLAUSE
In ongoing joint work with Christine Berkesch and Daniel Erman we study the minimal resolution conjecture up to scaling. For Hilbert functions corresponding to modules of low regularity there always exist corresponding Betti tables with no consecutive cancellations up to scaling. For Hilbert functions of many naturally occurring modules, like coordinate rings of Veronese varieties, the Betti table can be semi-pure, even though the region of Hilbert functions corresponding to such tables is a tiny part of the cone of Hilbert functions.
10.5446/59198 (DOI)
I'm really excited to be here. This is going to be a really fun week. Lots of excellent talks already and more to come. So today I wanted to tell you about a story I've been thinking about for a long time and tell you a full answer to a question that we've wanted to know for the entire time we've been thinking about this. So I'm going to start by just defining the A-higurgy metric systems quickly for you and then give you my oldest favorite example and the goal of the talk will be to kind of explain why what happens in the example happens. So this is joint work with Roberto Brera and Laura Matisiewicz. Roberto was Lara's student who graduated last year, I guess a year and a half ago now. So I'm going to start with the matrix A. So this will be an integer matrix that's d by n and full rank and I'm going to denote the columns of the matrix by ai. And so our hyper geometric system has two pieces of input data. One is the matrix A and the other one is a complex vector beta that has as many entries as the number of rows of A. And we're going to work in the bioalgebra for c to the n, the number of rows here. And I want to put a multigrading on my bioalgebra. I'm going to get the sign right here. So I'm going to use the i-th column of A to give the degree for the variable and then the opposite degree for its corresponding partial derivative. And now I want to give you a left ideal in the bioalgebra that then corresponds to these different operators to give you a system of piece. That's what we're looking at. Okay, so the A hyper geometric system is the following. So we'll call it HA of beta. This is going to be the left ideal generated by these two things. So IA is the usual Toric ideal. I'm going to think about this ideal in the partial derivatives. So it's generated by rhinomials that correspond to column relations on the matrix A. So this goes inside of this polynomial ring. And then this other part is called the Euler operators. So I'm going to get one for each row of my matrix A. And I just sum along that row and use the entries I see as coefficients on the product xj partial j. And then to make the sequence E minus beta, I just pair each of my Euler operators one for each row with each of my entries of the vector beta. Okay, so now what am I looking for? Well, when you have a system like this, a solution to this system will say as a germ of a holomorphic function at a non-segular point in Cn. And it's going to be a function then that's annihilated by the differential operators here. So solution V lives here and P applied to V is zero for all of these. All right. Okay, so what's happening here? Well, the E minus betas are if you think about V in terms of a sort of power series or proso series expansion, the Euler operators are asking that every minomial that shows up in the power series has multi-degree beta with respect to the gradient we have here. And then the binomials here kind of tell you how to get from one minomial to the next at forces, conditions on the coefficients. So the word hyper geometric is coming from the kind of recurrences you get on the coefficients that come from these binomials. If you think about a geometric series, we expect successive like the ratios of our successive coefficients to be a fixed rational number. And here we're looking at ratios of coefficients in different directions being fixed rational functions that correspond to these binomials. So that's the word hyper geometric. Okay, and these show up all over the place. 
I think maybe one of my favorite places is looking at Picard-Fuchs equations on certain varieties. And these are very nice for things like classification decayed surfaces. You can find these kinds of, once you know you have Picard-Fuchs equations that are of a certain a hyper geometric type tells you a lot about the surface. So I guess another place, a Kuncklap showed that generating functions for intersection numbers of curves on, or generating functions for the intersection numbers and certain modulized faces of curves, get all the edges in, are also solutions to these types of systems. All right, so maybe just to give, I'm going to be looking at IA a lot here. So let's write SA for the function by IA. And then just know that this is another, this is a presentation for the semi-group ring over the columns of A. Okay, so what we're going to see is that actually the behavior, sort of the homological behavior of this ring dictates a lot about the behavior of this system. All right, so let's look at an example to see how this can start. We'll take some, we want to see two. All right, so the torque ideal in this case, well notice if we sum the outer two columns, we get the vector two four and that's the same as summing the inner two columns. So that gives us this binomial. And then if you play similar games, you get a few more. Okay, so here's a minimal jet rating set for this ideal. And of course, then the Euler operators, just write two out. It's the only one I'll do. So I'm just going to sum across the first row. That's how the three should be to the third, the second one. Um, the second, yes, let me see. This is, this should be a two way. No, that should be a plus minus minus. Yes, thank you. Better. Okay. Okay, so let's see. All right, so our system now we're looking for functions that are annihilated, say by these six differential operators. Okay. And so the way we want to think about this game is to fix the matrix A, fix or multigrading and then think about B as giving us some torus action homogeneity. And so we're looking for now for that fixed torus action, what can happen as we vary our torus weight. Okay. And so it's one more definition here. So the rank of h a of beta, our solutions will always be in the vector space. So this will just be the dimension of this vector space. Okay. So what we see here is that the rank of our system is four when beta is not the vector one, two, the guy we left out. And when we hit one, two, our solutions base grows. All right. So one fact I should point out here is that in general, we know that rank is upper semi continuous. So at least we knew this, we should expect something higher here, if not the same. But the thing that's always bothered us that we don't understand was how do we see that we get extra functions there, right? How should we think about that? In particular, if we look at the solution space here, what you see is the span over C of sort of three generic looking power series. And then two Laurent-Minomial functions. That looks like this. Okay. All right. And so if you think about it, you could think maybe I'm standing in my C2 and I kind of wiggle around in beta. And then when I crash into one, two, three solutions come from things that I saw before. And then I get these two kind of new guys. And why? Right? Like what happened? Where did those come from? And so something that you see here is that if you actually compute the local homology, the semi-group ring, those Minonials show up. Okay. 
So if we look at local homology support at the maximal ideal, this guy, this is multigraded now. And I guess I need the negative graded here. So we get zero if alpha is not minus the vector one two. And we get this, a single co-cycle. One alpha equals minus one two. Okay. So this has been the mystery that in many examples, this being the smallest example you could choose, you can go right down co-cycles and local homology and then go over, pick your favorite way to expand solutions of your system. And in every instance it seemed that taking the local homology gave you ways to write down these sort of extra solutions. And so I, to kind of tell you what we can do in code mention two, I need to back up and tell you a little bit more about the story for rank. So let me back up first there and we'll come back to this example in a little bit. Okay. So let's take some z to the d graded module over our polynomial in the partials. Okay. And then pick some homogenous element. And then when you give you an action of the Euler-Rafferator on this Euler-Rafferator on this guy. So I'm going to have him act on y. Oh, I should back up. Not on y. I'm going to look at one tensor y here. Let's have it act on tensor y. And it's going to act on the left, but I need to keep track of the multigrading. If you have my sine right minus the i-th degree of y. So this is now something that lives in the biologram. And I'll just tensor that as y. Why am I going to choose that? Well, the way that these Euler-Rafferators work and since the biologram isn't quite commutative, if I apply my non-commutative, like what the commutators do here, I can rewrite this. If n is, say, the polynomial ring, I would, well, if I think about things correctly, I could fake that this guy actually lives on the right side, but then this would fall away when I commute past. So I'm using a left action to fake a right action just as multiplication by e minus beta. Okay, and when you do this, it turns out that these Euler-Rafferators pairwise commute under this action. And so you can actually then, that entitles you to go and make a Coussela complex in the sequence of Euler-Rafferators on modules of the board with d-points around. So this is what we'll call Euler-Coussela homology. So I guess the complex will be taking, I'll denote it like this, where tensoring with d is always understood, and then the homology part, which is h. Okay, so now theorem that goes back to 2005. So now we're going to say that as our Miller-Uli Bauteur is, for a fixed a, we have the following. Okay, so the rank of our system is constant with respect to variation of beta. If and only if, when we look at these Coussela homology modules, we get vanishing above degree zero when we stick the seven-group ring in for argument. And the surprise here, the really nice consequence was that this happens exactly when the seven-group ring is called the collet. So what happens here? Well, they related this higher vanishing of these Euler-Coussela homology modules to the non-top vanishing of the local collet modules. So when you only have top local collet, then you're going to collet. And so the appearance of those modules then linked you to the appearance of higher Euler-Coussela homology, which then they were able to show forced some ring jumping parameter to appear. Okay, so in particular, they were able to characterize which beta's give you ring jumps in terms of the multigraded support of certain of the non-top local homology modules. Okay, I should leave this. 
Okay, so the key technical tool in this theorem was a spectral sequence that shouldn't be a surprise, maybe seeing the pieces we have here. So the idea is to take now a free resolution of SA, and then look at the associated double complex, where you plug that free resolution into the Euler-Coussela homology or Euler-Coussela complex object. Okay, so then you chase through the associated spectral sequences and eventually arrive at something that spits out this result. Okay, and so in particular then what really happens is they don't look at local homology modules, but this is actually done in terms of X modules then. And so the sort of downside to this argument is that at the end of the day you have a spectral sequence that's relating Euler-Coussela homology of X modules of the semi-airframe, and that spectral sequence converges to your hypergeometric system. Okay, well if you want to take solutions that's a contra variant functor, and so everything kind of goes the wrong way. If you want to now take solutions of these X modules, or these Euler-Coussela homology modules of the X modules, and try to use that to construct solutions of your hypergeometric system, things just don't quite match up. And so for many years all of us tried at various times to get maps to go the right way and just didn't seem to do it. And so that's why we said let's stop and look at CodaMention 2 where we understand things a little bit better. And so I'll tell you a little bit about that story in just the next couple minutes, and I think that will convince you that actually in general I think we actually do understand this pretty well, but it's because of the spectral sequence entanglement that things get kind of hard to write down, but the the schematics of what happens in CodaMention 2 I think do explain the general case pretty clearly. Okay, so the thing we need is some work by Pima and Stern-Bels. This is from 98, and so now we're in CodaMention 2. This is n minus d, and let's set r to be the polynomial ring and the partials, and then we want to be in the case where we have ring jumps. So let's assume that Ia is not Cohen-McAuley with b plus 3 at least four generators, minimal generators, then a minimal free resolution for Sa, not the Toric ideal has the form r to the b plus 3, r to the b plus 2, and r to the b. Okay, and the nice thing about this theorem is not that we we know exactly what this looks, not that we just know these buddy numbers, but they actually take these minimal generators and group them into fours and use this quadrangle construction to really explicitly write down all the cis-Sigis combinatorially. So if you take the matrix A and you look at its gale dual, you can find certain structures in that gale dual and really write everything down. So the key here was that we realized we actually knew exactly what this looked like. So we could go and figure out exactly what the x's looked like that showed up here, and in particular the spectral sequence that shows up in this MMW proof degenerates, and there's only one non-zero differential on the second page. So that allows us to actually write down a nice isomorphism of our a-hypergeometric system in terms of the Euler-Casool homology of the x-pockets. So here's what we get. So there's a little extra here that I'm going to not talk about. So there's some extra little details in the background, but in particular this is isomorphic then to the following. So we get... let's see here. Epsilon A is just the sum of the columns of A. 
Okay, so I have this guy with the E plus beta plus another sum hand. So it's the same with an x2. Still an E plus beta. And then modulo, the image of this one differential from that E2 page, and that's an H2 of an x3 mapping to an H0, this H0 of the x2 that we see in the numerator. All right, so what happens? Well, for most betas, these x modules will have no Euler-Casool homology at all. And namely, these homology modules will only show up if beta is in this risky closure of the multigraded support of the module. So most of the time these don't show up and you just get this guy. And this x2 is essentially, it acts like the saturation of the semi-grid brain. So this kind of picks up the Kohn-McAulay case. Okay, so most of the time this x3 doesn't come into play, but when you run into a range of being beta, for instance, then these modules show up. And so what happens is you get kind of your generic solutions coming from this piece, and then you lose a few and you gain a few. Okay, so what's actually happening over here is not that three generic things stick around, and then you pick up two more, but actually one of these guys, depending on how you choose to expand your solutions, one of these is actually also coming from something generic, right? But some other generic guy dies. So you actually get a contribution still of four here. Sorry, wait. Yeah, no, sorry, you do. Okay, you get four here, one dies, you get your generic ones, and then these two show up. Yes, it should be a one-two-one. But how do we actually write down an isomorphism between the solution spaces? Well, I told you that that local homology, co-cycle, looked exactly like those two solutions. And so what happens is when you actually write down these x-models explicitly, you can go and solve them, but there's some great twists around, and you're in the wrong parameters. And the way that you fix the multigrading is by going back through local duality, and you need a combinatorial version of local duality that's entirely explicit. And by doing that, you get the kinds of co-cycle generators that we saw in this example, in the 0, 1, 3, 4 example. And those co-cycle generators tell you exactly how to translate from the solutions you're seeing here for the x-modules, and they're the right thing to multiply by to get the solutions of the a-hypertrometric system that you actually need. So at the end of the day, you're taking these modules that we know explicit presentations for via P. Vesternfeld's, turn it through a combinatorial local duality machine, take those monomials, multiply through to get back to the right beta, right to the right parameter, and then you're able to write down all of your solutions explicitly. And in this case, you can see exactly what's appearing and disappearing. And so in general, if you thought we had more kinds of pieces here, this is still kind of explaining how things are connecting and interacting. So all right, I will stop there. Thank you. Thank you. Are there any questions for Christine? So the number that you get for the generic case to depend on the matrix, I mean, how do you get the answers? Yes, so in general, the generic right answer is the volume of the matrix, also known as the degree of the torque ideal. Yeah, so that was Delphan's kind of thought, and he proved that in the productive case. When the special just depends on the size of the variable. So the value of the other case, the five. The five? So the five is much more, that was my thesis. 
Getting those numbers is another computation running through a different specular sequence. So if there's no more questions, let's thank Christine again.
We construct an explicit local duality map for codimension 2 toric ideals, thanks in part to the explicit free resolutions of Peeva--Sturmfels for such ideals. We then combine this with our work on the parametric behavior of the series solutions of an A-hypergeometric system to explain how local cohomology causes rank jumps.
10.5446/59199 (DOI)
I joined work with Linh Chuan Ma and Alexander Ode's Stephanie. I guess I should also thank the neighboring institution for hiring so many great young people. And I have benefited a lot from working with them. So, I should also talk about nearby institution. I also advertised KUMUNU 2018, which is annual community algebra conference. This year will be organized on October 13th and 14th in Lawrence. Last year we had a candlelight dinner and an open bar. Many people told me that is the best algebra conference they've been to. Anyhow, a lot of people will come. So, I wanted to talk about, since I'm literally eating to your dinner time, I want to tell you some stories right away. So, what are these objects? This is a new class of rings that include Kohen-McCauley rings, Stanley Reissner rings, the so-called Dubois Singularity Singularity Zero, and FPU Singularity Incarretera Ristate P. So, they include a lot of good, you know, I also say that life is worth living, Kohen-McCauley rings, and a lot of you live very comfortably in the Stanley Reissner ring. So, this seems to be a good class of rings. Of course, you know, for this class to be interesting, we should also be able to say something, otherwise you can just define any ring to be a combo. So, we should be able to say something interesting about it. And since the theme of this conference is about CG's, I will state one of the consequence, one of the results, and then after this you can go to dinner if you want to. So, that's simple I, you could add some more I is logically full, whatever it means, I'm going to be five later. So, if S is regular local, then the projective dimension of R over S is less than or equal to the number of generator I. So, it satisfies a very strong individual banal sort of stillment, as a very strong stillment question, right? And also, if S is just a polynomial ring and I is gridded, then it also, we also have the regularity of I is less than or equal to the number of generator I times the, so here D of I is just the maximum degree of a generator, of a minimal generator of I. So, again, it shows you that this class of rings are not every ring, they satisfy very, very strong, you know, rather nice property homologically. On the other hand, they include also all of these classes in the variety itself. So, it seems to be a right side, that's what I'm trying to convince you. Okay, so that's the main thing I want to talk about today, but let me actually start with a motivation question, which actually comes from a paper by David Michemustata and stillman. And so, let me just remind you that, okay, so S is a commutative material ring and I and S is an ideal. And so, the local homology of S is, the ideal homology of S is supported in I is by definition and following. So, okay, so the direct limit of X module and here I sub E is any nested, you know, system that go final with, go final with, with the power. So, for example, you can take, you can take any system. So, for example, IE equal to I to the E or IE equal to I to the symbolic power or IE to the, equal to E to the Frobenius power in characteristic Cp and so on. So, this is a direct limit. Local homology module contains a lot of build, you know, each useful information. On the other hand, a big drawback of this module is, I'd say, really finally generated. And so, that motivates the following question, which I, as I said before, is due to, was first raised by, in a paper by David Michemustata and stillman. So, when is, when is this, direct limit actually a union? 
Okay. Well, the obvious reason why we want this to be true, namely that any question about this module can be now reduced to a question about IE, which is a finally generated module. Right? Of course, this is just mean. It's not hard to see, it's just mean that each of these map is injected. Okay? So, in order for this to be true, you need to, and so it motivates the next question, which is when is this map injected? So, when is this map? And in fact, one of our first observation is that this class of idea, which I haven't defined yet, will sort of answer this question precisely. So, let me, let me now focus on this question. So, here's a quick observation, namely that if you have a map, so if you have any idea in our system, then the map from here to here factors through this. So, for this map to be injected, you need this to be injected for yourself. And if you, let's say that you leave in a, if S is a complete local ring for simplicity, then by local duality, say S is a complete local, I mean the question here is, this is a local question, you can, you know, the kernel here is a finally generated module. So, to be injective just mean that local is, so, so, so you're going to work completely lovely if you want to. So, this is just mean that the J log of homology, so let's say SM here, of S mod IE is so J1 to J i and here J is just equal to N times i and the dimension of S. So, this is a suggestion, this is equivalent to that. Okay, so this motivates the following definition, that the main definition of the top, so let me introduce, so this is some time called thickening of S mod IE. So, let me just, let me, so to give an intrinsic definition without refer to some embedding and you need to make a formal definition of thickening, so let R and be local. A thickening of R is a suggestion from T to R such that the, such that the induced map on the repo, sorry, this reduce, on the reduce part of T and R is a nice form of esmophism. Well, example is T equal to Kx mod x square and R is Kx mod x. So, you know, this is a picture that you've probably seen in many beginning course into, you teach a beginning course in, in, in, under regulatory right, is a, a fattening of this, this point, a double point, right, so it's a thickening. Okay, so now I give the definition, which is motivated by this observation. So, so definition, so Rm, again I already gave the local definition point, is called homological full beef for any thickening. Well, obviously you have the same skin structure, so your thickening will be a local ring. Then the map, sorry, is injective for pi, so sorry, it's ejective. So, this is the reason why we call it's homologically full, so for all thickening, you don't lose any, it hit all the local modi element in R. And so that, the motivation question and the definition. I should remark that if you have an integrated, so remark, you can define the grid, because in fact, the equivalent to the, so R is standard grid, you can also take this as definition, then R is homologically full, even though if Rm is, so that's the definition. So let me, so the first result we can prove is that this relationship is actually a lot tighter than we just saw, so in fact the, this definition is almost the answer to the question by David. So long after a remark, a definition or a statement? Okay, we have a definition, if you want. 
So, okay, so a little bit of story here, so, so you know, you can, this, okay, so this is a lazy way to define it, right, so, so, so, formally you should define it as like any grid of thickening or that, but it will say it. So, the G point is defined, that's why I called it remark, I guess. Okay, anyway, so I will, so what is this? Oh, right, yeah, so, so the first theorem, say what I just said, so, okay, so we, we don't have the full strength, but let's say i is S mod i and here S is an unremifed regular local ring. If you don't, well, if you know what it is, you know what it is, if you don't know it, probably doesn't affect your life. This one, right, so, so then, then, then i is co homologically full in our sense, is actually equivalent to this map, injective of i, so, so in fact, it is the same definition. However, they are, you know, so, so we found that it's, it's when working with this, so then you can sort of take one of this as a definition. On the end, this is intrinsic definition and also when you work with this concept, this is useful to think in both ways. We won't see some examples of that if I can get to some proof, but yeah, so let's, yeah, so, so let's look at the examples. Right, so co homicoly, why is that? So co homicoly, well, if you have a, if you have a map like this, right, and you have some kernel here, then just as long as I see what a little homo should tell you that this is a co homicoly. So, hi, okay, hi, hi, and so below the dimension, it is this zero, so it's nothing, but above the dimension is, you know, this is vanish, right, so it's this, sorry, above the dimension, this vanish, so this is, or it shrinks away to look at co homicoly ring, it turns out to be an useful way. Right, what about Stanley Ryan's ring? Well, this is actually, this is true because, because actually Mirscher proved that this is an injective, an injection for square free monomial ideal. So, if you believe this then, you know, although I mean, directly it's not obvious to see that this this is property is false, but, and there are things like FQ and reward singularity, so the reward singularity property was proved by, in a paper by, by Lindscher and, and Carl Schroed and FQ is followed from the work by Lindscher and Fa. Okay, so, so, so this, although they didn't explicitly say it, but it's the same thing, they did prove those things. All right, oh, so, also I want to give some small dimension example, so basic, so basic property here is that if R is co homologically full, then R is unmixed. And so if, so if dimension of R equal to 1, then it's equivalent to the R's primary quality and if dimension of R is equal to 2, then, then R is homologically full, is equivalent to the following. So, let's say, okay, okay, so I will need to assume that I will ask for I when, when S is a complete rate, okay, so maybe, and this guy, this is a little bit of subtlety because when you complete, you lose some irrelusibility thing, so, so it needs to be a little bit careful of I as I, and this is, it's this probability, okay, so, so I is equal to, so basically all the connected component of the puncture spectrum of R are quite equivalent, okay, so, S mod I actually, component, connected component, component of a spectrum R, and each of these has to be equivalent. 
So, this fact use, the fact that the combative dimension of something that whose puncture spectrum is connected at most at minus 2, so it's actually a non-trivial statement, but anyhow, so this is some example, so you can see that obviously not everything is converging full if you need further evidence than you can classify in, in small dimensional case. Okay, so, right, so there are many properties that we can prove about this, this class, so before talking about budget dimension regularity, let me just mention that you can prove some sort of weak or a intervention property for this rings, you can compute Lubesnik numbers and all that, so, so basically a lot of things are true for this sort of statement that's true for this class is true for more, but let me just prove the connected components from the point, or want to be, does this pass from connected components? Yeah, so, yeah, so we prove that it's pass from connected components, so this full if and only pitch connected component of the puncture is full, and so, yeah, so if you have a Gwen Macaulay each component don't create some, so that's another way to get a new example. Right, okay, so let me just remind you of what I, so yeah, so for the last five minutes maybe I will sketch the statement, so once you, once you have this is not so hard, so one is like a positive dimension, so again i equal to s more i, and r is from degree four, this less equal to a number of generator of i, and also the regularity of i is less than equal to this number of generator i times this degree of the maximum generator, maximum degree of generator plus the dimension of r, yeah, so this is a two of the property that's the best most relevant to the theme of this workshop, so how do you prove the, how do you prove the first, so this is really quite easy, the first one, so let's see what do I want to say, so I just want to say is that, oh yeah, so x i of s more i, s inject into h i i s, right, and so this means immediately that if this is non-zero, this is non-zero, and so that means that the positive dimension of s more i is less than equal to the homological dimension of the ideal i in s, and this is less than equal to the number generator, that's very easy, because you can compute low commuji with a with a check complex on the generator i, so so that is easy, the second statement is a bit more involved, and again I know everyone's hungry, so let me just, so for the second statement actually we start by proving it in characteristic p, and basically we, first we prove the following, so step one, so if s more i is commuji full, then s more i to the Frobenius power, okay, oh note that not all monomial ideals are commuji full, so all standard rational ideals are commuji full, but if you look at the thing I just talked about in the dimension two, there are many monomial ideals that are not member-colleges, even though, so we are, so for instance this is a statement is, okay, and this is a Frobenius power, i to the, f to the, p to the e, i to the, p to the e, so this is important, so this is like a base chain statement, right, because you basically base chain using Frobenius, so you prove some base chain statement and it follows that this is also commuji full, but now you, so for hq, so for hq we have some sort of nq, so that i to the nq is the, let's see, what do I want to see, I want to see this containing, it's a thickening of, so right now I, okay, it's inside i to the q, okay, so because of this, right, so then that means that h i of s mod i to the nq is 
subject on to h i of s i to the, so then the a invariant here is, so the a, so the top degree here, right, have to be at least the top degree, so i, okay, so well Frobenius is flat, so this you can, you know, just multiple, and then on this side you have results that estimate the top, well the a i of power of ideal, you know, using, we know that this thing eventually become, behave like a linear function, right, so if you, and then you do, then you have to understand how this nq related to q, which is just a, bishanhole principle, so if you just, I'm running out of time, but once you have this estimate, and you use the fact that that's a, that's a, you know, the regularity of power of ideal is a linear function, you get the statement, so I think this is just about time. Okay, any questions? Yes? Yeah, so well at least asymptotically it's about for completing sections, right, I mean, this is about that happened to a lot of nice things, you know, this is about, so if you take a bunch of, a complete section then the regularity is basically a sum of dv of generator, that's, so if you take a Athenian completing section, it's the Bao-i-Shah, yes? How about predictive services? Oh, so you're talking about isolated combative pool on a function? Yeah, I mean you have, you understand the curve, you know? Oh, right, right, right, oh you mean like a same statement for dimension three? We have some thought about it, but it's already there, I think it's pretty subtle, so it's some action on the local motion, it's two local motions, so it's, it's not something I can write down, but yeah, that, yeah, that might be, yeah, that's right, yes, yeah, well, Alexandra is here, so, so in fact the reason we want to give this talk here so that you can ask me questions like that, yes, absolutely. So this means that you can write the local homologous unit of X-modules into a test of P for this homological pool? Yes, yes, in fact this is, this shows you that, that, that, that, that for question two, for the first question of Eisenberg, Musselstatt and Stielman, homologous pool is the answer, so if you, so, so the local homologous in characteristic P, this is a union of X, even though if S-module i is, I don't know if it's true in characteristic zero, but because, because this gives you a ready made thickening system too, yeah, so yeah, so this is, so, so in case you're, this answer precisely the question by, by Eisenberg, Musselstatt and Stielman. Any more questions? Thank you. If not, let's thank you all again. Applause. We'll meet tomorrow night.
Inspired by a question raised by Eisenbud-Musta\c{t}\u{a}-Stillman regarding the injectivity of maps from Ext modules to local cohomology modules, we introduce a class of rings which we call cohomologically full rings. In positive characteristic, this notion coincides with that of F-full rings studied by Pham and Ma, while in characteristic 0, they include Du Bois singularities. We prove many basic properties of cohomologically full rings, including their behavior under flat base change. We show that ideals defining these rings satisfy many desirable properties, in particular they have small cohomological and projective dimension. Furthermore, we obtain Kodaira-type vanishing and strong bounds on the regularity of cohomologically full graded algebras. Joint work with Alessandro De Stefani and Linquan Ma.
10.5446/59210 (DOI)
So thank Julio and Jason for organizing such a wonderful conference with such a wonderful group of people, such a great weather because I was watching and up until about a week ago it was supposed to be rain every day so you guys really have some fun with someone. So this is joint work with Matt Mastroni who will be postdoc at Oklahoma State. Let's shout out to Chris starting in the fall and Mike Stilman and I actually talked with Mike yesterday and he sends his regards to everybody and says he wishes he was here with us. So the score parts of the talk I'll start with history and motivation then theorem and some tools, a bunch of examples and some discussion of future work. So that's the game plan so I'll get right to it. So the history or motivation comes from the chemical curves and greed's conjecture. So we start with some geometry. So if I look at the Betty table of a canonical curve, it will have the following form. See if I can do the table so it's nicely is masted. So this is what the Betty diagram of a canonical curve looks like and if the curve has genus G, then C is sitting inside P to the G minus 1 so it will have co-dimension G minus 2 so this step right here will be G minus 2 so I will be equal to G minus 3 so here's our Betty table of a canonical curve. So Greene's conjecture which is the motivation for a big chunk of the work on scissor G over the last two or three decades is that the NP property which is a curve to satisfy NP or anything satisfies the property NP if B sub i, well let's just keep stated for canonical curves, a piece of i i plus 2 is 0 for i less than or equal to P that is your resolution stays on the top strand for at least P steps. So this is the NP property and Greene's conjecture relates this property NP to the Clifford index so I won't write the whole thing out so NP relates to the Clifford index of the curve C. So then the point of this is that the geometry of your curve is determining the shape of the Betty table. So continuing with some history it's a classical result due to Enrique's nother and Petrie that B13 is 0 that is I don't need any cubics in the ideal of my canonical curve as long as C does not carry has no G12 has no G12 G13 or G25 that is G is not hyper elliptic trigonal or has a map to P2 that leaves you with a plane quintic. So this one if you remember your harsh whoring or even if you don't hyper elliptic curve doesn't the canonical divisor doesn't even give you an embedding it actually maps to P1 and then embeds as a rational normal curve so this one doesn't even give you a better trigonal and plane quintics. If you get bored during the talk you should do a little exercise to convince yourself that if you have a plane quintic that and you this David mentioned in his talk the classical construction of a junction so if you take the conics through the curve there's no singular points on that plane conic so if you take a divisor a curve of degree D minus 3 that's a conic it'll cut your quintic in 10 points and that's actually the canonical divisor and then that gives you the map to P5 and in fact the the conics give you a map to the peronese surface and then you need cubics to cut out your canonical curve so that's why it fits. So the point of this is that due to this classical result of Enrique's noether in Petrie canonical curves are Gorenstein so canonical curve is Gorenstein and quadratic. 
So here's two nice properties and there's a survey about algebras which are Gorenstein and quadratic a survey and lots of open problems in a nice paper of Miglioire and Nagel from 2000. Pube did it get the year right? So this is one place to look for lots of problems on these these sorts of creatures but there's something more that's going on and this is where the kazool comes in. So let me come back over here. If you if you are left cold by this geometric history we're going to get into the pure algebra very shortly so what about kazool? How about kazool? So in 1993 Vyshik, Bickleburg, showed that if C is general by the way this is one of those cases where this is the order on the paper so I'm respecting the ordering of the authors on the paper instead of writing in a standard egalitarian fashion so C is general and G is greater than equal to five then the canonical C is kazool followed by and these these are papers I paid attention to because I was in grad school so you know the things you see in your formative years stay with you. Do we have any grad students? We have a couple grad students right? No? No grad students? All right. You guys you all look so young to me. This is an advice. Then in 1995 Polish Czuk removed these general genus five so it's kazool if it is not if not star so star would be these conditions right so these guys we needed cubics to cut it out so you have no chance to be kazool you have to be quadratic so as long as you are cut out by quadrics you're actually kazool automatically and then 1997 then Chereshi and Pernobarajna gave another proof this is sort of a classical machinery they give a vector of unapproved and now we get to the motivation for me at least is in 2001 Konkath 0 and C A Rossi and Voila gave a VB they give a graviter basis proof for this but they did more so besides giving a graviter basis proof of this back with the canonical ring of a curved kazool they also showed that if you are Gorenstein plus they showed Gorenstein and quadratic and regularity 2 implies kazool this is this is now outside the context of canonical curves and and Gorenstein quadratic and regularity equals 3 and co-dimension less than or equal to 4 implies kazool and then the last we go backwards in time a bit 2000 and julio in his master's thesis still unpublished just think of some four grad students thinking oh maybe I can prove something maybe I could prove this in the case of co-dimension 5 and then finds that it's in julio's master's thesis so we have this and the question the motivation for this talk is the following question from Aldo and Maria Tito so CRB ask can we dispense with this co-dimension condition so quadratic Gorenstein and regularity 3 implies kazool so that's the question and what was interesting to me is when I read this paper I thought well why do we need this regularity 3 hypothesis you could ask if you were an optimist which we all are as mathematicians if we could dispense with the regularity 3 what would be the evidence for that so the one thing that we also know the nicest type of Gorenstein ring of course is a complete intersection and we know do to take that quadratic c i's so based on this flimsy evidence one could say does quadratic and Gorenstein apply kazool and so the answer to this which is not as far as I know written down anywhere the answer is no due to a result of Matsuda and so I want to describe briefly Matsuda's result does everybody with me so far everybody good to go okay I don't have any coffee prizes down here I have an apple but 
this could be used as a prize or as a projectile so so in the words of the I think classical philosopher Claudio Riku once you know there's an obstruction your thinking changes that's a great quote it's really true so here's a visit Matsuda gives us an instruction so here it is here's a toric sevenfold in p14 with bedding tables so here it is the h vector of the artinian reduction is 171471 so for me the interest so notice this does not address the question of Aldo Henrietta Tito because it has regularity one two three four right so this is not a counter example of their conjecture but it tells us already that maybe that was the the question was could have a negative answer so it leaves you at least wondering so the other things I want to point out about this and what got me interested first one aside this of course is it's a toric right so you think okay well I bet I can find once I look at what the pattern is I can construct a whole bunch of other things so in Matsuda's paper this comes from a graph unfortunately the construction in the paper does not generalize so this graph lives in solitary splendor and so this over this is interesting the way that he proves that it's not fuzul I believe someone not many said that Macaulay too was not too smart I think Macaulay too is pretty smart because it should really be a co-author of this paper since the way it's proved it's not because so will is from Macaulay to show Macaulay to shows that tour three a phase there ring here okay okay and degree four as dimension one so this of course is not equal to one but different so it's computation which is not edifying in some sense except that it tells you it's not fuzul so and what you probably are wondering is well tell me something about the the semi group that this comes from so the semi group is really pretty cool so it's of course it's in p14 so I have 15 lattice points their convex hull is a polytope there are no interior lattice points so this has no interior lattice points and the polytope has the following f factor we have our this is the empty face I have 15 lattice points I have 77 edges I have 189 these are two faces 252 189 77 15 and because I like symmetry although it's not really part of the f vector for the heck of it let's say that the polytope is part of our f vector and we count the piece so there it is so that's the f factor so that's pretty surprising to have it I mean h vectors are symmetric if you're coming from a superficial polytope but this thing is the f factor and so that's what this polytope looks like in terms of the vertices edges that not two faces and so on where do those 14 quadrics come from is it reflexive I checked and I can't remember I think it is the equate so let me say just two things so there's 14 quadrics I think it is no is it I can't remember you need to have kids and then you won't remember either great the 14 quadrics come from the fact that these two faces there's 175 triangles and there are 14 two faces that have four vertices and that's exactly where and so in particular this tells you that sort of we don't have anything the that looks like this right where we have a something that's squared minus x i x j in fact they say a little bit more than the defining ideal so the defining ideal is coming from y zero y i i plus one minus one seven so the variables are y zero y seven and y i i plus one i equals to seven with the caveat that when i is seven this turns into a zero so this is one set of seven relations and the other set of seven relations is y i y i 
plus one i plus two minus y i plus two y sub i i plus one so they're very pretty quadrics there's a lot of symmetry there and I still I think this was our jumping off point we worked hard on this for several months trying to understand what the general pattern was and we didn't get anywhere so nevertheless a rather pretty example and it tells us that we should be doing something so that's the motivation for the talk and so now I want to do something what's the line in 20 pipe on uh unfolded now for something completely different so we're going to switch here now so um let me kill this too what was the motivation for Matsuda the motivation for Matsuda will be discussed by professor Hibbe whose student he was so Takiyuki where are you oh bad to play hooky during the talk so I believe he was actually so the motivation was that he was looking at properties of various ideals that he built from graphs and doing a lot of I think he was looking for graphs with certain properties that gave them a toric ring that was kizool and this thing popped out and so it was an outlier and so the paper essentially is is an analysis of this example and so here's a I think because the title is you know a toric a toric ring which is quadratic and gourds need but not because it was something like that all the lattice points vertices yes that's exactly what I'm saying all the lattice points are vertices so this polytope has no interior last one and it doesn't have like lattice points on the edges that I'm going to see no yeah so again Matsuda is not a student maybe so now we exonerate Takiyuki is now exonerated so all right he's a co-op to them right so all right so this is actually I this is a I find this a really cool example I think it's really really good so now part two theorem and techniques so if you didn't like the geometric history this is pure algebra so I want to talk about a technique that we probably have all encountered at some point but which I had never realized would be useful so this is idealization so all the idealization fans in the audience can feel free to his head so I start out with a ring R so let's say M is an R module and then I form our Jaxon M the rate at which is written right this way this is the usual way it's written so the idealization is the ring R R comma M with point wise addition so when I add R M to S N I just add the first coordinate wise with product R comma M times S comma N is equal to RS and then RN plus SM so the name comes from the fact that it's easy to check that 0 comma M is an ideal R so this is this construction the place where I had seen this for the first time was in the context of hyperplane arrangements where actually the idealization was used to take the homology ring of an arrangement complement something called the Orlich-Solomon algebra and then if you actually take the hyperplane arrangement and you thicken it so you take a tubular neighborhood I'm thinking in complex arrangement that's called the boundary manifold of the arrangement and the homology ring of the boundary manifold of the arrangement is actually the idealization of the Orlich-Solomon algebra with respect to the canonical module so that's the standard construction so here's a lemma due to let's say lemma 1 this is due to at least Galaxon-Reyton-Suproxby all in 1972 which is if A is an artin local ring then A omega A is Gorenstein and I'm going to start writing this when the module that I'm using to build the idealization is the canonical module I will write this as A twig. 
So you're only supposed to prove topologies I'm going to prove two things during this talk so this is a fun one so an easy one so what's the proof I need to show that the socle of A twigle is one-dimensional so it suffices to show this so let's take something in the socle let's say A comma M is in the socle so we know that A comma M times B comma N is zero this immediately tells us that A times B is zero for all B so in particular that A is in zero called M or M is the terrible notation with maximum ideal M twigle I guess so A is in the socle of my original ring I claim that A is actually zero suppose not then there exists some element let's call it alpha in my canonical module with A times alpha not equal to zero and so that means if I take zero comma alpha times A comma M I get exactly zero A alpha that's a contradiction because this was in the socle and so this would have to be zero so we see that A has to be zero and that means that my socle element A M has to actually be zero comma M right and this M is in my canonical module but the only thing in the canonical module it's also being killed by the maximum ideal the only thing in the canonical module killed by the maximum ideal is the unit and so this is M is a one-dimensional and that proves it so it's actually and this is lifted straight from Brinds and Herzog so if you didn't like it go read Brinds and Herzog so it's a very very cute construction and I think I guess for me the point of this is that and if you want to zone out of course you can zone out at any time I won't really throw the apple but the the raison d'etre for idealization in my mind is suppose you want to construct something that's quarantine with an odd property well start out with the ring which itself has the odd property and as we'll see that propagates Jason's nodding so that propagates to the idealization and that's now you have that's the gets to the top so so now I want to let's go to limit two so limit two so suppose that I start out with r equals s minus i oh I should say let me before I go to one of two let's set some general notation so to set up let's say henceforth I have i equals f1 through f sub r containing s equals k of x1 through xn an ideal of quadrics so this is going to actually be sitting inside the degree two piece and these are homogeneous and standard rated and I'll only be interested in the case where r equals s minus i is r t so this is our setup so then limit two and situation is above if a is our r-remmer our twiddle is quadratic so the idealization is quadratic and I'm always idealizing it with respect to the canonical module if and only if omega r is linearly presented and I'm going to call such algebras so I have an artinian algebra it's linearly presented the sockles generated in a single degree but it's actually linearly presented so it's not just sort of the sockle in a single degree but then I have only linear scissor g's and so there's really no choice but to name such an algebra super level oh come on that at least okay good that was my bad math joke I at least got some kind of a cringe it's a lot of old but it only has an array has linear presentation so these are super level alters so and what's the point I want because I'm interested in this if I want to try to find things that are that are quadratic and gore and skew but not because oh I gotta at least have a quadratic idealization right so this is it and this one's not particularly hard to prove but necessary and then the theorem um setting in this setting setting if r is 
super level and non non azule then our twiddle is quadratic so what I had anticipated that the proof was going to be so we have because the canonical module is actually an ideal inside r I have a map from my um uh my original ring yeah or from the idealization rather than r and the kernel of this is just omega r considered as an ideal in here right and so in particular the point is that then if I have a resolution of my residue field over here and a resolution of my residue field over here then you would expect oh I'm going to have to use chain rings to relate their resolution here to the resolution here so I find out we'll just chase through some change of rings spectral sequence but in fact it's much nicer than that again I'm homage to our um Scandinavian friends so the key to the proof is the result of Gullick's end from 72 relating the Poincare series so this is of course the series summation of the s and t uh beta rj of uh let's say what we're doing this way instead k toward i r of mk degree j s to the j tpi so the Poincare series of this is for an arbitrary r module m so Gullick's end result relates the Poincare series of the r module this is not necessarily the canonical module of an r module m r and the idealization so there's a nice theorem of Gullick's end which relates these things and then you just crank the machine and it and out pops the result so one doesn't have to go through the change of rings for this so that's the theorem and then we're left with one last question which is if we want to apply our machine we need a bunch of super level and non-casual algebras to feed into the machine so um and that's going to appear in the example section at the top which turns out to be more than just examples so there's one of those I don't know fortuitous situations where you do one good example and that example spawns not just an example but a theorem so example one I should pass for one second and say does anybody have any questions so far yes the idealization is a standard grade still yes it is standard I need to standard so if I assume standard graded that's part of the lemma that's now erased but then I get a standard graded that need idealization to generate a degree one I have to with an appropriate twist okay yeah so example one this is due to the name here crystal lack 1975 so if you take five generic five bricks four variables then you see the following Betty table oops so this I have one more yeah so here's the Betty table and this is non-casual there's a number of reasons or ways to see this I want to do a little tiny aside here so a shout out to the study of the Betty tables or Betty diagrams of of causal algebras is actually a very interesting and active investigation so I mentioned results of let's see at Ramov Tough no Adam, plus sons of A, and Mark. And map. In particular, one of the things that appears in this paper of Lucho, and Eldo, and Strikat is that the, the, the, uh, many numbers vanish above the diagonal. And, uh, I think this is in their paper, if not it's not hard to prove. The first scissiges of a causal algebra have to be generated by linear scissiges, linear scissiges, and quadratic causal scissiges. And you can see here, the number of quadratic causal scissiges is at most ten, and so I have way too many quadratic scissiges for this guy to be causal. The fact that this is non-causal, um, follows from, sort of, right, just anyone, well, certainly this result here. So there's a non-causal vety diagram, and it spawns. 
So here's the, um, here's the theorem that McCauley II proved. So it's easy to code up this idealization. We'll, we'll stick it, we haven't stuck it on the web yet, because it's, we haven't cleaned it up enough. There's an idealization package, and it spits out the following vety diagram. So idealize or apply the theorem, and we get... So there's the vety table. And we have a quadratic Gorenstein non-causal algebra of regularity three. So the H vector of this is, is one, nine, nine, one. So this is a negative, this gives us a negative answer to the question that Aldo and Maria and Tito asked. Um, but there's more. So the, the, I want to say two things. First off, there's one kind of amusing comment here, I think. So this has degree 20. This is the artinian reduction. So if this was, this is in nine variables. If this was actually the artinian reduction of a curve, the curve would have lived in 11 variables. So the curve would have lived in P10. So this would have been a canonical curve of genus 11. So somehow, at least in my mind, I think of this as the resolution of a fake canonical curve, right? So this is a fake canonical curve of genus 11, right? It's not the artin... I mean, it wants to be the artinian reduction of such a curve, but it can't be. Because it's not causal, right? So, um, the other thing, so that's our, the first example. And now I want to toss in some more, um, nice results of the Scandinavians. So you might say, well, that's an isolated example, um, and it doesn't generalize and so what. So it's just some curiosity. So, here's a theorem. There's a lot of Scandinavians in this talk. So here's a theorem due to Kroger and Luffa from 2002. So, um, remember we had R quadrics, and our polynomial ring was s equals K of x1 through xn. So if i is gen by R generic quadrics, then s by i is causal, if and only if, R is less than or equal to n. So this does pick up our complete intersection case. R is greater than or equal to n squared plus 2n over 4. So that means, of course, that in this range of n plus 1 up to a vocabulary, well, I'm just going to admit the inequality of 2n plus 2 over 4, that these guys are all non-paisole, right, for generic subsets. Now, of course, then the issue is, can we prove that such things are super level, right? We need them to also be super level. And again, we'll call it 2, who tells us if we do a bunch of examples. Yeah, they are. So now you've got to prove it. And the proof is, I think, rather cute. So that's the second proof that I will do. But I don't actually need them all. So there's a result, let me put it down here, perhaps, of Hoxter and Loxov. So, from 1987, it says, if I is generic, by generic forms of degree D, then I in degree D plus 1 has max growth. So for us, the point is that I can create, remember, I wanted my Gorenstein things to have SOC in degree 3. And so from this idealization construction, it means that my input algebra should have SOC in degree 2. So I want quadratic algebras non-fazool with SOC in degree 2. So I want to kill off everything in degree 3. And so if you just combine these two results, it's easy to see that for R, let's put it over here, for R less than n squared plus 2n over 4, and greater than or equal to n squared plus 3n plus 2 over 6. So in some numerology, we get quadratic non-fazool algebras with SOC in degree 2. So this is the range we're interested in. And what I want to now prove is a little proposition. 
So the point is that what I want to do, because I'm looking at things now, these are SOC with degree 2. So my Bette diagram of these algebras is going to be 1. And what I need to prove is these last two slots here are both 0. That's what I need to get super level. I don't care what's up here, something here and here, but these two things have to match. So it would suffice what's our trick for doing things like this. We pass to an initial ideal. So what I want is I would like to find an initial ideal such that the easiest way to do this would be that the initial quadrics in the ideal emit two of my variables. And then I can't go back n steps. If I emit two variables, I can only go back n minus two steps. And the point is that we're saved by the generosity, right? Because the quadrics are generic. That means that if I just take the Grobner basis for these things in degree reverse lex, I want to miss my, what do I want? I want to miss x1 xn up through xn squared. There's these things. There's n of them. And I also want to miss x1 xn minus one up through xn minus one squared. Right? I don't have to miss xn minus one xn. I missed it over here already. There's n. So I need to miss two n minus one monomials. So if I'm at the very upper end of this, my space of quadrics is of course just n plus one choose two. I'm subtracting off this many. So let's see n squared plus two n over four. And what is this? Let's see. I guess two n squared plus two n over four minus n squared plus two n over four, which is just n squared over four. And I need that to have at least two n minus one. So if I had space at the end for these, so that means n squared is greater than or equal to 8n minus four, which is true as soon as n is greater than or equal to eight. And actually because I was cavalier with this being inequality, you can easily bounce that down to n greater than or equal to seven. So the point is that as soon as n is at least seven variables, I get super level algebras. You might say, well, but these are probably pretty sparse. This range, this is about 1,6 the dimension of the whole space of quadrics. So there's lots and lots of these algebras. And so then we have tons and tons of examples. And in fact, we have better than tons and tons. So the corollary to this little computation, and in fact, let's see, so I guess example to continue. And so this was approved for n greater than seven, four, five, and six also work. So for n greater than or equal to four, algebras in the range star, this is star right here, are super level. And the corollary then, when you do the math, is in fact one comma m comma m comma one is an H vector for a quadratic, Bernstein non-causal algebra for all m greater than or equal to 25. And in fact, for all m greater than or equal to nine, with exceptions, 10, 11, 14, 15, 18, 19, 20. So that was example two, and I have one last example. So example three. So was that last thing that you said again? I mean, you can't realize this value. They're not possible. No, no. So this construction provides examples for everything bigger than for these, for this H vector, except these things. So I need to get these a different way, not with the generic quadrics. And that's where the next example comes in. So there's a little bit more work to do here. I think the theorem, the result will be tightened up. So here's example three, you can use for any plus six. There's an example of, let's see, I guess there's an example with 12 quadrics, which is non-Cazool, with H vector 169. 
And so this gives you an idealization, which is 150 to 151. And so once you allow yourself to deal with non-generic things, I expect my guess is you can plug the rest of these holes. This is all work in progress, I should say. So I, and I don't think it would be hard to plug these holes. What is true, however, is however, the idealization construction can't address M equals 6, 7, 4, 8. So my expectation is we can fill those holes. I don't, you can't fill these holes with idealization, which of course is natural because I'm allowing myself a second math show, 6, 7, and 8 are of course exceptional. Oh, come on. E6, E78, right? Okay, so it's too late in the afternoon. I apparently used up my bad math joke, leash, right? So then the last part of the talk I say in the last two minutes, maybe future directions. I think the first really interesting question is in the parameter space, so in the parameter space of Gornstein algebras with a fixed-state vector, where do they sit? What's the locus of these things? Where do these algebras sit in this locus? I guess the first question, this is question, I guess question zero is for those gaps, but I think it should be easy. So, number two, this was all I've been talking about was for these Gornstein algebras at the Sockle degree three. The construction, this idealization construction provides you lots and lots of examples for other Sockle degrees, so this example, or what, so this is something we're working on to describe what you get for a larger Sockle degree. You can't get, so here's the one thing that I like, is we can't get the Matsuda example. So then there's more things going on here than just this idealization construction, so you can't get Matsuda. So, I still, we plan to go back and to push harder on our initial attempt, which was to use Toric methods to find a bigger staple of examples. And I guess three would be just, is there any way, and this is very vague, can these examples shed any light on Green's conjecture? This is, I think, a rather dubious sort of proposition, but nevertheless they're interesting examples of things that want to be, in my opinion, they want to be the Artanian reduction of canonical curves, but they're not. And I think that's a good place to end. And what, I guess the takeaway, Rota says that every talk should have a takeaway. Again, the takeaway is that this idealization construction gives you a nice way to build counter-examples. In fact, Maria just told me that there was, I don't remember what the exact context was, but there was an example that she saw some time ago of an idealization construction that did exactly this. So you want something Gornstein with a bad property, take something non-Gornstein with a bad property, idealize it, and so that's the takeaway. Thanks again. Thank you. So, what's the cubic that you take? So you take, if you had, say, F1 up through Fk is your ideal, you look at its inverse system, the annihilator, look at the annihilator, no, backwards, I'm sorry, these are my quadrics. So I take the quadrics that I started out with, and then I introduce some ancillary variables, and I consider the cubic summation Yi Fi, and so there's my cubic, and it's the inverse system of that cubic. And the, a nice place to read about this, this construction is actually, there's a paper, or not a paper, rather, a book on left-shits properties, and this is, they actually prove this in the book. 
In fact, let me see if I can pull the reference, I don't, I want to get, it's five authors, so I want to get the name correct. I don't have the names here, so maybe anybody, I'm picking on Japanese people, do you know that there are five authors for this book on left-shits properties? Watanabe is, so Watanabe is one of the authors, and then Numada. Thank you. Numada, Akida, Maino, Watanabe, and another one. Hanfurima. Hanfurima, okay, so there we go, so proper, proper, so this, that's, those are the cubics, yes. So, this, it's like a straight, this should again, a little bit, you know, you're writing down an element of a straight, something, like, cubic of a straight, something, right? How you write it? And I wonder if that means something, but the most obvious question for me is whether the generic cubics is correct. I mean, that's not original, it's not true. I don't think so. It is? It is, because that's what I meant, the generic is, because it's... There are two possibilities for you, that one you specialise is a conical curve, and you get an example, that's one that's... Because we don't know... sorry? But the, the conical, the localities, what happens? So, you can use an example. And, or in our paper, we give a explicit requirement of the cubic, for that if you have those, you know, linear form with a certain pattern, you build enough information on the idea so that you can prove this conical. And there's really a description of the law to say, where it is because it is clearly open and non-empty. And, yeah, quite a question. I mean, your example, you have the first non-linear scissors, for the, for the conical, and the, you have the cubic and then you have the non-linear scissors. Yes. So, it might suggest that the statement of Gore-Eston, and the good idea is that the three with sum and p property is conical. Maybe. Maybe. So, this is one of the things, actually, so, that's an excellent question. So, one of the, what's coming out of this, in fact, the original example, so there's a place in David's book on geometry of Sizzigis, where there's something where he's got a matrix and he says, where did that matrix come from? And so, where did the original, the original example we had didn't come from this generic thing. It actually came from a curve. So, there's an example in an old paper of Mike and I of a curve of genus 7 in p5, which is generated by five quadrics. So, it's an almost complete intersection, which is smooth, projectively normal, but not causal. And so, that was the example that we ran this construct, the artinian reduction, and then we ran this construction with it. So, that's the idea that maybe there's some geometry lying behind it and that you can somehow lift that geometry to this case, I think, is interesting. So, anyhow. Any more questions? Yes. So, it's sort of like a fake curve, you call it. Do you know that there are domains that have all these reductions? They can't be. If they were, they would have been fixed. Okay, well, thanks again. Thank you.
Let R be a standard graded Gorenstein algebra over a field presented by quadrics. Conca-Rossi-Valla showed that such a ring is Koszul if reg (R)<= 2 or if reg(R)= 3 and codim(R)<= 4, and asked if this is true for reg(R)= 3 in general. We give a negative answer to their question by finding suitable conditions on a non-Koszul quadratic Cohen-Macaulay ring R that guarantee the Nagata idealization of R with the (twisted) canonical module is a non-Koszul quadratic Gorenstein ring.
10.5446/59211 (DOI)
Are you getting numbers of balanced distribution complexes? OK, so thanks for the invitation to this workshop. And we've heard a lot of really business experts apart. So I want to speak about a project with Jonas Nemo-Vetter, who's also at Osnabrück. And I will start really basically, I want to start by explaining what are the potential balance of the complexes. So as you make, everybody knows what is a substantial complex. But balanced distribution complexes are maybe not that popular. So we start with the d minus 1 dimensional substantial complex and on a given vertex set v of delta. And then we say that this complex is balanced if the warm scale can be colored with d colors. So we basically look at the vertices and the edges. And we want to have the coloring in the graph theoretic sense. So there should not be any monochromatic edges. And if we can color the warm scale with d colors, then the complex is balanced. So in other words, this does just mean that we can partition the vertex set in d sets, which are disjoint. And then if I look at any phase of the substantial complex, so here gf should be a phase. And then if I look at the intersection with any of those sets vi, then there should be at most one element. And if I have a phase, there's exactly one element. And you should consider those vi's just as color classes. So a basic observation, which somehow seems maybe extremely basic, but which is important for this talk, is that whenever I have a substantial complex that is balanced, it cannot have too many edges. So in the sense that if I take two vertices lying in the same color class, they cannot be an edge in the graph. And having not too many edges basically means that the Stenerism ideal has a lot of dv future. This is something I would like you to remember later, because this will be strongly used in what follows. OK, so let's look at some examples. Let's look at the easiest example, namely the simplex. So I have three simplex here, and I've already a coloring. So we need four colors since all edges, all vertices are connected. So we have a complete graph. And in particular, if I have a d simplex, then I always need d plus 1. So what does this mean? On one side, it means that the d simplex itself is balanced. But when I pass to the boundary, it is not balanced anymore. Because now I'm one dimension less. OK, what you also should see from this example is that somehow, balanced simplex complexes are those simplex complexes which have a minimal coloring. So minimal, in the sense that I can use a minimal number of colors possible. So because I can't color a d minus 1 dimension simplex complex with less than d color. OK, so let's go on to the next example. Let's pass to the cross-pollet. So just going to try and you can describe the cross-pollet as a join of zero spheres. So we take v0, w0, which you see here. And I take the join with v1, w1, v2, w2. And then, OK, I've already indicated the color. You can just color anti-pollet vertices with the same color because they are not connected. But so in this example, we need three colors. And you can do the same more generally. For any d-dimensional cross-pollet. So if you think about usual simplex complexes, the smallest sphere you can get is the boundary of a simplex. And here, if you have the balance setting, it turns out that the cross-pollet of its boundary complex is some of the minimal balance sphere. OK, rather questions so far. Then let's, OK, so this one is balanced. So let's write a more devoid of the question. 
So the question we are looking at is, if you give me any simplex complex with a given number of vertices and a certain dimension, can we bound the graded Bettey numbers of the same use time? So I take the minimal free resolution. And I want to have upper bounds for the Bettey numbers. And those bounds, OK, I would like to be them tight. And I would like to give constructions when those bounds are obtained. And those bounds would only depend on the number of vertices and the dimension. So what do we know in general? So here, I don't require a balance. So there are bounds in the case that we have a boundary of simplex and polytope. So here, we require the characteristic of the field to be 0 because the result uses Leffler's property. And basically, what Uwe and Juan show is that you can bound the Bettey numbers by taking the Bettey numbers of the lex ideal whose field dot function is the g vector of the polytope. And the bounds are obtained exactly for the case that they have the so-called Bettey polytope. OK, so what else? Let's recite Professor Toschi from 2015, where he gives bound for Kuhl-Makauly complexes. And again, what? Well, that's your paper. I can show you the paper. It's my paper. That's a joint work of Shibi. But it's on tightline relations. It appears in the paper with tightline relations. But you have it there. OK, let's talk later. OK, so maybe he'd be, but I will also mention him in just a minute, I guess. So basically, what does Toschi do? He takes a linear system of parameters. He modes out by the system. And then again, he passes to the lex ideal. And then the bounds, he or he be, I don't know, right now, get, or maybe both of them, get given by the Bettey numbers of the power of the maximum ideal. OK, and then there are other bounds in the case of norm and pseudo-many folds, at least called the linear strand. Again by Toschi, I hope. And OK, and then there is a result by Hibi and Terai, who explicitly compute the Bettey numbers in the case of steps fields. And in particular, those steps fields attain the bounds for this, which we have in the pseudo-many fold case for the linear strand. Yeah, so now you can ask the same question if you have a substantial complex that is bound. And the bounds you should get should be better. I mean, you should maybe allow not only to depend the bounds on the number of vertices, but also to depend on the cardinality of the color classes. This would be somewhat natural. OK, so what can we do? So let's start with, somehow, a first and also stupid bound. I would say because the truth is extremely simple. So let's assume we have a balance of the complex with a given vertex partition. And then let me note by gamma the J minus 1 skeleton of the clique complex of the departite graph, where I have a cardinality of V1, many vertices of one partition, and a cardinality of V2, many vertices. So next one and so on. So if I look at the J minus 1 scalar losses clique complex, then I can bound the better number of my substantial complex delta just by the better number of this whole. And I should say, OK, first let me say something concerning the proof maybe. So it's easy to see that if I have any J minus 1 phase in delta, then it also has to lie in this clique complex. Because there I take all possible phases which might lie in the balance of this complex. And then basically the only thing you have to do is to use Hoxter's formula and to use this clique complex gamma since we take just a J minus 1 skeleton has dimension J minus 1. 
So I want to remark that we also can compute those numbers explicitly, but I don't want to put them here on the slides because it's not that the formulas are really nice. It's just we can compute them and there are explicit formulas. But for the purpose of this talk, this I guess should be enough. So now let's pass to the Columbia Collie case. This would be somehow the next iteration. And again, there are bounds and the notation is the same as before. So I denote by n the number of all vertices. And then, OK, we can, if J is at least 2, so here I miss the linear strand. Then we have, yeah, we have to get those two sums as bounds. And those numbers p and q, well, I only wrote here they depend on this commonality here, which is, again, maybe first say what is this sum here. It basically tells you how many degree 2 generators do I have and the standard is no idea for sure. Because this is just the number of sum of forbidden edges. And then having these numbers, you can compute those p and q. And I don't want to say more. I just want to say there are explicit formulas for those numbers. But again, they don't look too nice. So I don't put them here. And yeah, so moreover, if you want, if you are, if you, OK, if you don't insist to be those numbers tight, then you can also just try to maximize those numbers. And it follows from the proof that those numbers here are maximized exactly when all color classes have the same commonality. And in this case, you then get just bounds which only depend on n and only. We get rid of the dependence on those colors, of the size of the color classes. Moreover, so what about this case, j equals 1? So first of all, the proof method doesn't work. But we can also not expect to have an upper bound which is better than in the general case if we only want to include n and d. Because it's easy to construct for any number of vertices and for any dimension, it's easy to construct in bounds called Macaulay complexes that attain the bound for the general situation. So the bounds from Satoshi or Hibiki. So let me comment a little bit on the proof. So the proof method is very similar to what Satoshi does. So we start with the standard use learning, we take the necessary parameters. And then since we're in the code Macaulay situation, to compute the better numbers of the standard use learning, we can also just compute the better numbers of the team reduction. Then if you look at the polynomial ring, so s should be here the polynomial ring, and you mod out the linear system, that as a ring, it's just a polynomial ring if you are in a fewer number of variables. So in particular, you can write this quotient here as a quotient of this polynomial ring r and the homogeneous ideal j. And then you play the standard game. You just pass to the next ideal. So you can bound those 30 numbers here with the 30 numbers of the corresponding next ideal. And then, if you check what I said at the very beginning, we know that there are a lot of degree two generators. And we use this to bound the number of generators of higher degree. And as soon as you can bound the number of generators of degree at least three, you basically just use a linear solution. I mean, I'm cheating a little bit here because to bound the number of generators is not as easy. And this is really most of the work. But this is basically the idea of how you put this. So let us look. 
Since the formulas are not very nice, I put two tables on the slide to give you an idea how to bound in the general case and how the bounds in the balance case look like. So here, in this example, we are in dimension three. We have 12 vertices. And I chose a case that all color classes have the same size. So this is really the case where our bounds are maximized. And then in the Kolmkoli case, the bounds you have, so in the general Kolmkoli case, the bounds you have are those in this table. And in the balance case, the bounds are this table. So just as an example, we have 4,600 here. And we have 2,200 something here. So it's not half, but more or less. At least the bounds really improve. And this is something you don't see from the formulas. So let's come to the pseudomaniacal case. So first of all, one is a pseudomaniacal. So a normal pseudomaniacal. So it's just a simple complex that is pure. So I have dimension d minus 1. It should be connected. And then I want that every d minus 2 phase is contained in exactly two facets. This is one condition. As a second condition is that if you look at the link of a phase of dimension at most d minus 3, then it should be perfect. So here it's dimension at most d minus 3. Because for the d minus 2 phase, we just have two vertices. OK. So what can we say in the pseudomaniacal case? Again, not really surprising. We only get a bound for the linear strand. And the formula does not look that bad this time. So this time it's rather easy. It's basically 3 binomial coefficients. And one appears with the negative sign. And again, those numbers, a, r, and s, depend on the number of vertices and on the dimension. But we can compute them explicitly. We do not know if the numbers are tight. And I strongly suspect that they are not, to be honest. Since I have enough time, let's say something, how we move this. OK. So somehow, the main difficulty at the very beginning is that since we are not in the coordinate case anymore, we cannot play this game with the linear system parameters. But there is a result by Vogelsanger, which actually tells us that there exists d plus 1 linear forms, such that if we mod out by those d plus 1 linear forms, at least the degree 2 part, we can compute the dimension of the degree 2 part. It's namely the difference of h2 and h1. And the very numbers can only go up. So basically, what Vogelsanger shows is that if you mod out, let's say, by theta 1 up to theta i, then this form theta i plus 1 gives you an injection from degree 1 to degree 2. And then if you put this together, you get those bounds here. And then, OK, now what's the basic idea? The basic idea is, as before, to bound the number of generators in the corresponding next segment. But this time, we need to bound the number of generators of degree 2. But how do we bound those generators? So there is a balanced lower bound theorem by Steve Clee and Isabella Novik, which tells us that we can bound h2 by d minus 1 divided by 2 times h1. And using this, we get a lower bound here, which means we have an upper bound for the number of degree 2 generators. And again, OK, we pass to the next ideal, and we compute the Elio-Hubert resolution, like what we use Elio-Hubert to compute the value. And this is how we get the bound in the same set. So when I start my work, OK, now let's first, again, let's first compare the bounds, which such as it has in the bound we have. 
So it takes the same number as before, dimension 3, 12 vertices, and equal partitioned color glasses, even so this doesn't matter for the bounds here. And here, you see the bounds for general pseudomaniacals. Here, you see the bounds that you get with bounds. So again, they are better, which is not surprising, than those appear, even so the difference is not as good as in the case for, as in the comic-all case. So the last thing, what I would like to do is to look, so at the analog for state spheres. So I showed you, at least I told you that there are formulas for giving a tarot, where they compute the graded-butting numbers of state spheres. And the analog of a state sphere is a cross-pollinitable state sphere. So what do you do? You take a certain number of cross-pollinitables, so let's just start with two, and then you identify them along a facet in such a way that colors, vertices of the same color, get mapped to each other. And you do this, yeah, you do this, I don't know, here n over 2 over d minus 1 many times. And so here in the picture, you see what happens if I stack three cross-pollinitables. And what you also should, why I put the picture here is to show that there are different combinatorial types. So I have some choice how I stack those polytopes, and here you see three different possibilities which you might get. And similar to what hebe and chari show, we can compute the bedding numbers. So for the linear strand, we get this formula here, and this does not depend on the combinatorial type of the cross-pollinitable. And for the trace band, we get this formula, which actually looks nicer than the one for the linear strand. And as I said, this is independent of the combinatorial type. So how do we prove this? I guess I only want to give you the idea. So you could try to somehow decompose the cross-pollinitable stack spheres, but just removing one cross-pollinitable. But unfortunately, this does not really work. At least we were not able to do so to get somehow a formula from this method. But basically what you can show is that if you just say you take a facet where you clued two polytopes together, then there's somehow an opposite facet. So if I may withdraw this. I'm so short. But for example, if I'm in the Sviedemann case, say I have cross-pollinitob, someone looks like this. And let's say I have clued along this facet. Then I have the natural way there is somehow the opposite facet, which is this one. And basically what we show is that if you remove such a facet, then again you get just better numbers of the stacked cross-pollinitob sphere. So you won't change much. And this, in the end, allows us to use some kind of induction. So yeah, I guess this is everything. Thank you. Thank you. Thank you. So in this episode, will you take the next idea? Could you take the next power idea? No way. Let's say the height of the politics. I guess so. I didn't think about it, but it might be possible. To get better about it. To get better about it. Yeah, in my group. Because it's, yeah. I also should say it's still very, very broken. It's still not all everything is written up. Any other questions? Thank you again.
A (d−1)-dimensional simplicial complex is called balanced, if its 1-skeleton is d-colorable. In this talk, I will discuss upper bounds for the graded Betti numbers of the Stanley-Reisner rings of this class of simplicial complexes. Our results include both, bounds for the Cohen-Macaulay case and for the general situation. Previously, upper bounds have been shown by Migliore and Nagel, and Murai for simplicial polytopes, Cohen-Macaulay complexes and normal pseudomanifolds. If time permits, I will also mention, what can be said for balanced normal pseudomanifolds. This is joint work with Lorenzo Venturello.
10.5446/59160 (DOI)
I'm often with collaborators, I'm from Peru and free biologists from Montpellier and Peru and also I have benefited from a very interesting and helpful discussion with Eric Carle and Maria Carre. So it's a model that is coming from programs in population genetics and more specifically with sexual populations. When you have a sexual reproduction, if you consider the character of one individual and how it transmits to the next generation, you will see that you have some mixing that will be going on because you have recombination at the genetic level. So you have recombinations like that. And basically there are two situations where you can do something theoretically, either when you have very few alleles that you consider for instance a specific gene that allows you to be resistant to a disease, or the opposite when you have a lot of alleles that contribute to a phenotype. This is something that can be encountered in many situations. And then you have what is called an infinitesimal model that was established by Fisher which said that if you look at a specific character, the trait of an offspring is actually a Gaussian distribution centered between the trait of the parents. So this is what we're going to start with. And we'll add a special structure to this population. So the population will be n, density over time t, special variable x and a field type y. And we have four types, but this diffusion in space, selection term, right, you tie if you're not very well at that thing, which means if your trait y is too far from the optimal, let's be trait where you are. A competition term that regulates the population size, and then this reproduction term that we had before, right, we recognize here's a Gaussian distribution, and here's the two parents that have this as well. OK, so this is a model we want to understand. And we want to relate it to another model that is more widely used by people in ecology, which was introduced by Kier, Patrick, and Barton, and describes the population not through a distribution of the space and field type, but by the population size and minifigure trait of the population in time and space. We have this equation. And this equation is widely used and interesting because it has a complex dynamics. You can have extension of the population. You can have survival in limited range or propagation. And it's gaining interest at the moon because it's a quite precise model to describe the population in submitting to climate change. So is it significant that you have the same diffusion speed of both the population and the field type? Yes, yes. But in some situations, this can change or present that at the end, actually. OK, so how can we move from one to the other? Well, we can just consider the moments of the distribution of the first model. So we have this model that was structured by the field type trait. We can consider the population and the min trait and write an equation on those two quantities. And then we'll have higher order terms that appear. And we can guess what those higher order terms will be if some parameter goes to infinity. So that's what we can do. And the parameters that we can choose to go to infinity, this gamma, it's a rate of reproduction. It's a term that is in front of the reproduction term. And if it's fast, then the reproduction will be quick. And actually, if you consider the effect of those reproduction of the higher moments, you will see that they will force the moments to go to certain values. 
So the second moment will converge to A, which is related to the variance of the Gaussian distribution. And the third moment will go to 0. So if you do that, we obtain indeed the K-patrick-Barton model. And you can do numerical simulations to check that things were correct. So this is a simulation for the infinity model, so the kinetic model. And you can compare it to the limit model, and you see that it fits quite well. So we have a good candidate that is interesting, both from a theoretical point of view and for biologists. So how can we go further than that? This was heuristic. Well, it would be to use the contraction that is produced by this reproduction operator. I say that it has an impact on the second and third moment. But you can go further than that and show that actually it is a contraction for the Vassach time two distance. It's very close to the Van Akei inequality. And the goal would be to use that. What's the kernel? What's the gamma A over 2? Sorry, it's a Gaussian distribution with variance A over 2. So it's very much like a sticky particle model with kind of thermal baths that would be this Gaussian. So we would like to use this technique inequality when we have on top of a spatial structure. So we can write the equation on the normalized population, because if we want to work with Vassach time distances, it's more convenient. So we redefine n tilde as n divided by the population size of this location. And we obtain this equation where we recognize some diffusion like before, the reproduction term here, and then some extra terms that come because of this renormalization. What is the first step? Well, the first step is to have some estimate on the target model. So what we use as a strategy is to assume first that we have a uniform bound on the l infinity norm of z, of the mean-field speed trade. If we have that, we are able to show that we also have an l infinity bound on higher order moments, right, fourth order moments, actually, which is very specific to this model, of course. And also to the situation where we consider here the solution here, towards not in a full space. Then we use this higher order estimate to show some regularity estimates on z and n. So if we look at the target model, we see that the most difficult part is this gradient n over n, which is a bit unusual. But actually, this term only appears in the second one. So we can use lp estimate on the first equation to bound this one and have access to our estimate. OK, so the microscopy equities are regular. And that's interesting, because then we can use this tenacian equality to deal with this fancy big variable. So this is the equation. We let gamma large. And if we do so, we can show that n tilde, this normalized solution, is close to the manifold of local max value uniformly in x. And to show that, we use the Duhamel formula. And it can be written as a Duhamel formula. We've here some transport, some fundamental solution of a linear problem like that, which is not too bad, because we have bound it this time before. All right. And since we have done that, we know that we are close to local max valence. And this can then be used back in this equation to show that we have, indeed, this propagation of the ln ln 3d bound on z that we use to start the acrobat. If you put all this together, we end up with this result that if you have a gamma that is large, then you have a uniform control on the distance of this normalized solution to the local max valence. 
And the solution satisfies the microscopic quantity n and z, satisfy a system that is close to the acrobatic barter model that we have. So now, we present briefly, or not so briefly, because I think fast, a specific application of that. In the case where you consider the effect of this dispersion, so if you look at the natural populations, you know that it's impacted by climate change and trying to understand what we can expect in the future for a given species is really important. And if you want to understand the impact of climate change, you have to know that actually the species are very not homogeneous. They are really adapted to their local environment. So if you look at one species, it's actually not a uniform body of individual, but each individual will be typically adapted to the local climate. So the beach trees, for instance, that live in the southern Europe are not the same as the one in the northern Europe. So if you want to understand the impact of climate change, you need to understand the evolution of this whole structure of the population. And this is the kind of model that you can use. And there was an idea that we could use the pollen dispersion, the effect of pollen dispersion, to save more easily trees than other animals. So how can we see the impact of pollen dispersion? Well, the impact of pollen dispersion is interesting because when you disperse pollen, you disperse the genes. You disperse DNA, but you don't disperse individuals. When you have seeds, the dispersion of individuals and genes is related. But with pollen, you don't have this anymore. So you can write it like that. You can have the same model as before with density n over tx and y. But this time, in the reproduction term, you will have the model, let's say, that would be a local person. But then the father is obtained through pollen dispersion and can come from far away. It's a non-local term. And can come from very far away in practice. And so you can do the same process as before, at least horizontally. And if you do so, you obtain a new Kier-Patrick-Barten model that is this one. And this time, you have the dispersion rate for both are not the same anymore. Because the dispersion of traits is not directly related to dispersion of individuals anymore. And you have also here a system. And the result of that? Well, the result is you can actually understand what is happening and see that the effect of pollen is not always so favorable for the species. So you have situations where dispersing your pollen helps to survive climate change. And others where it's actually the determinant of pollen. So that's it. So that's it. I think it's over. Thanks. Thank you. So here we wonder what is the maximum climate change speed that allows you to survive. And so depending on the 4,000 climate change speed, you will survive. And if it's too fast, you will not survive anymore. The optimal one is the pollen dispersion rate that allows you to maximize it. So it's optimal in the sense of the most recede and the species can have towards the climate change. Yes, exactly this. Sorry. Oh, yeah, yeah, yeah, so this is the rate of generation. So we want to this gamma to affect, we don't want it to affect the population size, I'd say. So we want to actually write gamma times this minus n. And then this reproduction term will not affect the population size of the mid-30s picture. So thank you. Thank you.
We are interested in evolutionary biology models for sexual populations. The sexual reproductions are modelled through the so-called Infinitesimal Model, which is similar to an inelastic Boltzmann operator. This kinetic operator is then combined to selection and spatial dispersion operators. In this talk, we will show how the Wasserstein estimates that appear naturally for the kinetic operator can be combined to estimates on the other operators to study the qualitative properties of the solutions. In particular, this approach allows us to recover a well-known (in populations genetics) macroscopic model.
10.5446/59165 (DOI)
with Jean-David Binaloud, Simone DiMarino, Compilzard, and Luca Nenor, which addresses some connection between entropy minimization and minfi games, a certain class of minfi games. So the title is entropy minimization for minfi games. Bonjour, energies. So I want to start with a remark that is entropy minimization is a tractable proxy for optimal transport. So if you want to learn it, so entropy minimization is a tractable proxy for what's in it. So it starts with a optimal transport problem. Let's say on RD, you have a certain cost, C, you look for transport layer, gamma, between given probability measures, Mu and U. Let's say density is nice probability measures. It's in general difficult to solve. It's linear programming in infinite dimension. It's a bit rough. So one way to approximate it is to add a small contribution. And the natural one is entropy, who aspect, let's say, to the Lebesgue measure, as in this case here. And now this can be recast as a relative entropy minimization of gamma in respect to the Gibbs-Carno C over epsilon, e to the minus C divided by epsilon, or more transport methods. So you start with linear programming, and you end up with a sort of projection problem, but not with a square distance, but with a cool backlight block. Epsilon times the integral of gamma along gamma. Epsilon of epsilon times the integral of gamma. It's epsilon times entropy. Right. And here, what I do is I divide everything by epsilon. I put the cost here in the logarithm, and I end up with this. So the problem becomes finding a transport plan, which in cool backlight block distance is as close as possible to the set of invisible plants. So how do you know that you have a gamma sector, gamma and log gamma is not infinity? So how do you know that you can find at least one which is absolutely continuous? Oh, OK. So I make some assumptions on the, OK, imagine that the margin is finite entropy. So I can take the product measure at this finite entropy. OK? Think that, of course, otherwise it's not a good way. It's not a good, it's not very consistent. So assume, for instance, that entropy of mu, entropy of mu is finite, OK, if you want. Otherwise, I can change the reference measure. OK. So why is it better? Because it's a strict complex problem. So that means a unique solution. But there's a density with respect to the bag. Gamma is gamma xy dx dy. And the optimal gamma has a special form. It's e to the minus c of xy divided by epsilon times the tensor product, a of x, b of y. OK? So you can think of a or the exponential of a and the exponential of b as being lab-range multipliers associated to marginal constraints. So it has this form. And now a and b should be such that the marginal constraints are met. So this is just two algebraic equations, which say that a of x times the integral of e to the Gibbs kernel is the first density. And similarly, for the second marginal, you get an equation like this. So the two lab-range multipliers are related by this relation. This is actually, I mentioned this, because this is a system which appears for historical reason in a paper, a famous paper by Schrodinger and the Fertigies. So solving this is just finding two densities, two positive potentials, a and b, such that this condition are met. And you know, if b is fixed, you find the a. If a is fixed, you find the b. And you can iterate. And it turns out that this is an algorithm which somehow amass to iterate some contraction or some fancy metric called the Hilbert metric. 
So iterating this equation, these two equations in the Schrodinger system converges. This is called the single algorithm, which is very simple to use, very efficient, pretty fast in practice. And of course, as epsilon go to 0, you approximate the initial O. Am I clear so far? So entropy minimization is much easier to solve than linear programming. So now I'd like to give you a message that epsilon equals 1 might be interesting in itself. And I would like to illustrate this in the case of Mifig games. So Mifig games with the variational structures goes like this. So variational Mifig games, you are basically looking at a minimization problem where you function to fix time interval 0, t. You look for a curve of measure in the velocity field which minimizes the kinetic energy plus maybe some running cost plus maybe some terminal cost. You minimize with respect to rho and v. R0 is fixed. It's a given probability measure. Again, think it has density, it has time interval p, it has finite second moments. So rho0 is fixed. And the second equation is this diffusion equation. In D3-Rome, you replace basically the continuity equation. You add a lot of pressure in it. Plus divergence of rho v equals 0. So it's a sort of optimal control problem for this equation. So for the problem equation, and I'd like to convince you in the end of my talk that this can be recast in the end as a problem like this, as an entropy minimization problem. So f of rho t and g of rho t also? Sorry. So this is terminal time. OK, so this is a running cost. So think it's interval of rho square, for instance. And you want to end up at the end of the day at t. There is some potential or some function, which can be 0. OK, so why are we interested in this? Because at least formally, the optimality system conditions always problems. Let me call it 1. For 1, the system of pd is, of course, various, a Fokker-Prandt equation. Since you are minimizing energy, you can figure out that the optimal v should be the gradient of a potential. So we've got a Fokker-Prandt equation for rho, the drift, the gradient of some potential u. And the u, which is some sort of costate, I joined state for this equation. So I'm going to make a viscous and a Newton-Jack-Cogbie equation. So be careful. This is a general state. So it is backward. This is why the minus sign is coming from. And on the right-hand side, there's something which depends on rho, formerly F derivative of capital F. And there's boundary conditions that are important. So rho naught is fixed. So this is forward in time for the evolution of rho. But this is backward in time for this value function t. So your t is j over t, which is a sort of trans-semitic condition associated to the terminal cost in the sense that j is the derivative of j. So we want to solve this system. It has a sort of game theoretic flavor. So of course, you mentioned that all this was introduced by last year in Reynolds. And there's a thousand. And it is just a special case of mythic eggs. Mythic eggs theory has developed a lot. This is a special case. So now the question is, can one, if formulated, let's say, a laschködinger as an entropy immunization problem, which is more tractable, at least from a numerical point. Or maybe it's interesting in itself. And the guess that this is the case actually comes from recent results. And some people in the audience are responsible for two papers, two nice papers, one by Chen, Pavel and George Yu. 
And the other one by Ivan Fistor-Leonard and Richard Ipani, which consider this is maybe a simple problem as well, but it's related, which is a sort of noisy benign formula, which goes like this. So let me take two probability measures. Again, think that they have densities, which are our finite entropy. And I consider two variational problems. The first one is optimal control of Hock-O'-Plank, if you want. So given new and new given some time t, let me introduce the least cost of Hock-O'-Plank of somehow transporting mutinu. So this is the infinum of the same guy, rho v squared. So rho v forms the equation, same equation. I should have given the name to this in the beginning. 0, it's going to be star. And there are boundary conditions. So rho 0 is mu, rho of t is mu. Could be plus infinity if the measures are not 1bA. And let me introduce another problem, an entropy minimization problem, which I have to define a reference measure on the past case. So the continuous pass of this time interval 0 t, which is a so-called reversible linear measure, which is defined this way. So you take the standard Brownian motion, starting at 0, call it b on 0 t. But to change the initial condition, you launch it at x time of x, and you integrate with respect to the initial condition uniform. So why is it called reversible and venerable? Because the back measure is in binary. So now there's another problem, which I call s t mu mu, which is minimized with respect to probability measure on the past case. Another boundary condition that's the loop of the push forward by the evaluation time 0 of q should be mu. And the terminal measure, so this is a measure on the past case, it induces our genomes for each time. So you prescribe them q t equals mu. And the theorem, which is proven here, so the result of Yvon, Christian, and Richard slightly more general, well, there was a connection between those two guys that the minimum entropy is the same as the Pocopron cost up to the entropy of the initial condition. And now I'd like to make a remark here. Sorry, I'm confused. What is this r? It's a reference measure. So it's a measure. It's not a probability measure. So the b depends on t, or what's the b here? How b is standard pronoun motion, but is there a time? It's a pass. Sorry. Because it's a whole trajectory. You look at one realization of the pronoun motion, but starting at x, and then you integrate with respect to the initial condition. It's uniform. So if I integrate the function, what do I get? So it's 4. OK, that's a good question. So the expectation, the same with respect to this measure, well, it's not an expectation. So phi is a function defined on path. So omega is a pass. d r omega. So this is a point of space, which is infinite dimensional. It is you take one trajectory of the pronoun motion, which you start at x. It depends on the one. Maybe you can take a cylindrical test function, which depends on finitely many time realization of the motion, if you want. And this is random, so you have to integrate. You get the expectation, and then integrate with respect to x. Because it's fine. And now I'd like to make a remark, due to Christiane's name, that you might think that this problem here is complicated, because you look for a measure on a set of paths. This is really a dynamical problem with two layers of dimension, in fact, it simplifies. And the reason why it simplifies is the following. So take q such that the q max equal mu, qt equal mu, q is a probability measure of the path space. 
You can define a transport plan, which is the joint evaluation and starting and end point of q. So this is a transport plan between mu and mu. And now the relative entropy can be decomposed h2r. And similarly, I can define r0t in the same way. So this is basically the heat gamma that we have here. So it disintegrates. So it's a relative entropy, sorry, a total gamma, plus the integral on the path space of the entropy of the conditional probabilities. So I do not borrow the conditional probability. We've got more x0, xt. So this is the probability on the path space. It induces a conditional probability on paths which event points x0 and xt. And this is a nice property of relative entropy that it decompose this way. But now these are probabilities. So it's always positive, not negative. And this is 0 when this guy coincides with this guy. So in fact, the infinite dimensional path you can forget about. The best strategy is really to optimize, to minimize this with respect to gamma. And then this you can make 0 by equating this conditional probability with this one, which is called the So-called Brouhman Bridge. So in fact, st of mu nu is nothing else than the infimum of our transport plans. So now on rd, now, we're not on trajectories anymore. And the minimum value of relative entropy with respect to r0t, which is very nice. It's e to the minus the square distance divided by 2t, basically. This is a problem we started. This is a problem we started with t equal to sine. We started. How much time do we have? We started a little late. Say six more minutes. Six more minutes. So I'll be quick. Time for the Brownian Bridge. Sorry? Time for the Brownian Bridge. OK. So this is a 2n-point problem. So now this can be generalized to more marginal constraints. Because here, you see in a mid-fielder game problem, we look for a whole trajectory of measure, a whole path of measure, to more marginal constraints. And once you know this nice formula here, and you play a little bit with properties of entropy, what you can prove, in fact, you can discretize a problem in time, which numerically is what you're going to do. And consider more general functional. So let me set some notation. So let's assume that you have a path of measures. So it's a continuous curve valued in the past ocean space. And then you can introduce two least energies. The first one is just the middle. We have to look to the velocity field v of the kinetic energy. So mu now is fixed. Subject to the diffusion equation. So now we have basically a start with heads. Basically, you prescribe the good variables v mu, the momentum variable. So it's like prescribing the divergence of this momentum variable. So this is convex in the initial problem, in fact. And there's another one, which is look for a probability measure on the past space that you prescribe all the time margin, the Qt equals mu t for everything. And now it turns out that they are the same as mu is exactly your mu plus the entropy of the initial condition. And I don't have time to enter the details. And you have district, maybe, time discretization, which consists instead of fixing a wall pass, you fit snapshots at different times. So you can give yourself mu into mu m, mu probability measures on our d. You can define the discretization of this guy. So what is it? It's just a sum of the focal length value, least energy, mu k plus 1. So you just fix the constraint at times kt divided by m. The value of the bastard trajectory should be mu k. 
So you can do the same here, define as n mu dot mu m as being the infimum h2, we respect, does a more investible. We don't measure, but you don't fix all the marginals, but only those at instance, districts times kt divided by m. And now this is just a solution. Sqrt satisfies star. What is the star? Yeah, it's a dT mu minus r. I see. Yes, right here. So avoid having two unknowns. So maybe I just like to connect the minimization of entropy. So we saw the link with Brouhlin Bridge. Brouhlin Bridge with two legs, two end points. So this problem Sn, it's a bridge with more than two legs. It's nice that some bridges have more than two legs. So the connection, with my properties of entropy, is the following, Sn mu dot mu m. So it uses a lot of the fact that Markovianity properties of the reference measure are. It's just the sum of what we saw before on the time it would be divided by m of mu k, mu k plus 1. That was a neural term which is the sum of the values of mu k, rk, but rk is the effect measure. So this is just entropy. The usual entropy. So if you combine this decomposition of the least entropy and the Bellabou-Boucouin formula, you're recombined. So this is the same as the least energy subjected to focal plane plus these conditions at mu at time kt divided by n should be mu k. It's fixed. This is the same as m mu n plus the initial entropy of the initial connection. So now you see that if you recall the problem of midfield gain while you had to minimize the kinetic energy subjected to focal plane plus something else, but it's the same as minimizing over probability measures. Now the starting point was fixed, which should not be more or less. So this is the same as the least entropy plus the running cost of F of Et, which was equal to t plus terminal cost. And of course we discretize the entire. So this is an example because there is diffusion in this least beginning business where in fact the problem with diffusion, you don't light up silent to zero. So it's really interesting because it's a nice strictly complex problem which is pretty attractive. Thank you very much. APPLAUSE Any questions for you? What is the optimal Q here? That's a good question. You mean here for the whole stuff? Yeah, it's optimal one. So the optimal Q, it's a good point. So first the optimal row is of course the projection with respect to time with the optimal Q. Now the optimal Q, you can obtain it by just a lot of theory. It's a chain, it should be absolutely continuous with respect to the Lebesgue measure. And now it should have a special form because it minimizes entropy as we started with. And in fact it should be given, the change of probability is given by some drift which is related to the Hamilton-Jacques equation we saw in the beginning. I don't know if it's a change of probability on the past phase. But for all these it has the same marginality and utility. Ah, you mean for the wolf, when everything is fixed. Ah, but okay, then you have a multiplier of four million. For each, it should be given by an exponential of something which there is a multiplier for each time margin constraint. So Q should be obtained by R with a certain change of variable with this exponential density. And the phi should solve some dual dorsal limits. So roughly speaking it's also a differential equation. This is formal because it's not clear that you have any irregularity. I don't know what you're doing. That's why here this problem is well posed if you make reasonable assumption on f and j. 
And of course it has its own complexity. Does that answer your question? Is that right? Any other question? Okay, thank you. I hope you had a star-studded session this morning. So please welcome our next chair, Robert McCann, with his beard.
Entropic regularization of optimal transport is appealing both from a numerical and theoretical perspective. In this talk we will discuss two applications, one from incompressible fluid dynamics and the other from mean-field games theory.
10.5446/59167 (DOI)
So let g be a function from (R^d)^n into R — bounded, smooth, whatever technical assumptions you need; I am not going to be more specific because this is not going to play an important role later. By (R^d)^n I mean R^d cross R^d, n times, and so any x here means x = (x_1, ..., x_n) with each x_i in R^d. Now I am going to define two functions. The first one is u(t,x): this is the expectation of g(x_1 + sqrt(2) W_t, ..., x_n + sqrt(2) W_t). The second function is v(t,x): the expectation of g(x_1 + sqrt(2) W_t^1, ..., x_n + sqrt(2) W_t^n). So you notice the difference between here and there: in the first case you have n particles which are moving randomly, but it is as if someone is making them all move with the same randomness, and this is called common noise. And there they are moving with n different, independent Brownian motions, and this is individual noise, not common noise. So if you look at the equation satisfied by v, under the technical assumptions: the partial derivative of v with respect to time is the Laplacian of v — and the Laplacian is of course the Laplacian in x_1, ..., x_n — and at time zero, v(0) = g. Now you look at the equation satisfied by u: d_t u will be equal to the Laplacian of u plus the sum over j not equal to k of the trace of the second derivative with respect to x_j, x_k of u, and u at time zero is g. In other words I can write this as the sum over j, k — now I do not impose that j is different from k — of the trace of the second derivative with respect to x_j, x_k of u; let me call this operator P, so this is P of u. So here I want to add more room to write something here. You look at the Fourier symbol of the Laplacian: you get minus 4 pi squared times the sum over j from 1 to n of |xi_j| squared. And you look at the Fourier symbol of P: what you get is minus 4 pi squared times |the sum over j from 1 to n of xi_j| squared. So here is what I want to point out. I want to point out that this one is uniformly elliptic, because the symbol stays away from zero when you are far from zero, and this one is degenerate, in the sense that this sum can be zero even when the xi_j are not zero, so you lose regularity in certain directions. And when I let n go to infinity I am going to obtain operators which will still satisfy these properties: one will have a smoothing effect and the other one will not have a smoothing effect. So what happened in mean field games? We got interested in this because we were studying mean field games, and we wanted to develop our intuition for why with one operator you produce smooth solutions and with the other operator you don't. So we tried to get rid of the Hamiltonian, just to study the equation without the Hamiltonian. And this is the reason which explains why, when you have individual noise, you produce a smooth solution. So let me write a list of questions I want to address. I am studying an equation on R^d cross R^d, n times, and I want to let n go to infinity. And I have a solution — let me put the n here because it depends on n — and let me also make the g depend on n.
I would like to compute the limit when n goes to infinity of u n makes sense. And if I call this operator p n and call this delta n I would like also delta n p n I would like to know what the conversation is. So this limit you can show in some sense that it is l2 of 0 1 2 will be rd. So once you show that. What do you mean by delta n? Laplacian. I mean it is to make it depends on n right because this is defined on rd cross rd n times. And so we will have this Laplacian on n on n. And the sum up to k is also sum up to n. Yes, so k equal to n. So that was the problem. However once you obtain this limit and you try to pass to the limit here it will be very difficult. You have not have enough compactness. We don't know how to do it and I don't know many people know how to do it. However what can help you. You question this by p n. You take rd cross rd n times. You question this by p n. Where p n is the set of permutation. So this is a different p n than the other p n. So that s n is the operator p n already. You have a p n over here. Oh yes. So s n is good. Make this one s n. The new one s n. So this is the set of permutation of n letters. And this set satisfies much better compactness property. If you compute the limit when n goes to infinity of this set you find the set of probability measure on rd. And this way of thinking in fact is going to guide a lot what we are going to do later. So we are saying that instead of working on this space you work on this quotient space. And later we are going to say instead of working on that space you work on this quotient space. What is the advantage of working on this space? There are two reasons. One reason is if you are not familiar with some of the differentials structure on the set of probability measure. You need some knowledge on the mass transport to understand that. If you are not familiar with it you are familiar with differential structure in any Hilbert space. And second, Leon, when he was developing his theory of mean field game he will use at least formally the differential structure here. And so with Adrian to the rescue we prove a theorem saying that both differential structure are equivalent. So once you do that, once you use this quotient space you can show that the limit of u infinity exists. The limit of the infinity exists. And I am going to show you that the Laplacian is going to converge to something I call O. And P is going to something I call the partial Vastastane Laplacian. I am going to argue that this is a Vastastane Laplacian. So one of the first list of argument is if I define an operator on the Vastastane space which is on the operator where I am working with infinitely many particles and I call it the Laplacian. If I replace infinitely many particles by one particle I better have a classical Laplacian. So we are defining an operator such that if you apply to u at the Dirac mass at S you get the classical Laplacian of u at S. So before I can define this Laplacian I will need to go back to the differential structure here because I am going to define the question, take the trace of the question. So this is also suggest data, if you don't want to use this partial Laplacian suggest over possible Laplacian you may try. Sorry this is the PN, is that right? This one. Which one is which? This is the limit of the PN and this is the limit of the Laplacian. So this one will continue to have smoothing effect and that one only in some. So that's not zero. It's a whole. Yes. 
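To illustrate the quotient of (R^d)^n by permutations mentioned above converging to the space of probability measures: for equally weighted point clouds, the quotient distance is exactly the Wasserstein distance between the two empirical measures, and it can be computed by an optimal assignment. A small sketch of mine (not from the talk):

import numpy as np
from scipy.optimize import linear_sum_assignment

def quotient_distance(X, Y):
    # Distance in (R^d)^n / S_n between the configurations X and Y, i.e. the Wasserstein
    # distance W_2 between the empirical measures (1/n) sum_i delta_{X_i} and (1/n) sum_j delta_{Y_j}:
    # minimize over permutations sigma the average of |X_i - Y_sigma(i)|^2.
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    rows, cols = linear_sum_assignment(C)        # optimal permutation (Hungarian algorithm)
    return np.sqrt(C[rows, cols].mean())

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))            # points sampled from a reference measure
Y = rng.normal(size=(100, 2)) + 2.0      # the same measure translated by (2, 2)
print(quotient_distance(X, Y))           # close to the length of the shift, 2 * sqrt(2)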
We're just taking second moment condition comes from the heat of two. It comes from here. So it comes from the fact that I am working on a, here I am working the Euclidean unknown and I am working with the L2 now. So if I put LP here it will be P from the table. So this is the L2 function and you have to measure it. Yes. So you have to do some. Oh yes, because when I am averaging I am using the, depending which method you use to approximate your function. So let me talk about differential switching on P2 of R. So set H to be L2 of 0,1 to be Rb and M is P2 of Rb. So if a path is from here to there, people have been doing that for a long time. Here you can say that you are working on Euclidean coordinate and there you are working on Lagomitian coordinate. So I am going to define an equivalence relation on H, which is very simple. This is a law which is to X, the law of X equal to X, which is for the very measure of 0,1 to be T. So in other words, I am saying that if I want to measure a set in Rb, I take the pre-image by X and I compute the Lube measurement. And if mu is the law of X and mu is the law of Y, when you are defining the quotient metric, you have a low choice. When you have your quotient relation, you have no choice. It has to be very simple. So for the number of X bar minus Y bar, where the law of X bar is the law of X and the law of Y bar is the law of Y. And it is what has been known for a long time that this is nothing but the vastness and distance between mu and mu. And so what is a trivia from here is that H, the scientific by the law, is that you have an isometric embedding. So it means that it can serve the distance. We all know how to define, how to get a different, define the gradient of a function defined here. And so whenever you give me a function there, I am going to leave the function to the whole space. So if we have a polynomial theorem, this is my set with a classical. Let you become P2 of Rb into the relative infinity of plus infinity. I am going to write, I am going to specify what I want to write later. And I put in parenthesis, you take up, in fact what I mean is that you take up side to be the gradient of a function of plus infinity. You can take side to be that. But if you do that, it will be too restrictive. So you close this with respect to P2 of mu. And this is what is called in Ambrose, you give me a savare the tangent space at mu. And lift u to our 10. u tilde of z is u to the left of z. And I will give you the sub differential of u at mu. If I don't lift side composition x, it belongs to the sub differential of mu tilde at x. Before I continue, you may think I never told you what is the sub differential. However, here on Hilbert space, we are all well known what is the sub differential. You just do Taylor expansion. And you say that the function is above the first order approximation of Taylor expansion. So you have two possibilities. If you don't know the Ambrose or Ginglian savare sub differential definition, you take this as a definition. If you know it as a definition, you take this as a theory. So if instead of, this is a point wise statement, if instead of the point wise statement you want to prove that u is differential in a neighborhood of mu is equivalent to this whole and that this function is differential in a neighborhood of x. That is a much easier statement. And this is the assumption Lian was making to use that as a definition of his sub differential. And similarly, you can get the definition for the super differential. 
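In symbols, the lift and the equivalence being described read roughly as follows (my reconstruction, with H = L^2((0,1); R^d) and the Ambrosio–Gigli–Savaré tangent space at mu):

W_2(\mu,\nu) = \min\big\{ \|X - Y\|_{H} : \operatorname{law}(X)=\mu,\ \operatorname{law}(Y)=\nu \big\},
\qquad \tilde u(X) := u(\operatorname{law}(X)),

and, for \xi in the tangent space at \mu and any X \in H with \operatorname{law}(X)=\mu,

\xi \in \partial u(\mu) \iff \xi\circ X \in \partial \tilde u(X).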
And so see what you learn from here is there are many x such that we love s equal to mu. And so if I change this x by x bar, it mean that this thing hold. So this is equivalent to psi composition x is in sub differential is equivalent to psi composition s bar is in the sub differential of u t of s bar. As a consequence, you cannot conclude that the gradient, the vastest time gradient of u are coming, and the most important composition x equal to the gradient we respect in the sub differential of u t of s bar at x. And same for fashion. So this relation I wrote for differential I can write it for fashion. And so the point I am trying to get to is I have told you how to define fashion because you know how to define fashion in the Hilbert space setting. And by this formula, you know how to get fashion on the vastest time space. So something psychologically observed that h at psi is an element either from rd to rd, it is a vector field. So I am writing that the vastest time gradient of a function is a vector field. And the two vastest time gradient of a function will be called rd cross rd, and the vastest time gradient of symmetric matrices. That is a good to remember. Now I want to define a partial Laplacian. So for a long time many of us have been looking for a Laplacian, and so some progress has been made in one dimension. But what is not very satisfactory is you hope you will get a Laplacian on the vastest time space which will be invariant on the rotation. If you are trying to use the eigen vector of the fashion to define a Laplacian, you will not get something which is invariant on the rotation. Because you have infinitely many vectors, if you want to define a question using all the vectors, the eigen value must go to zero. And if the eigen value go to zero, it means that they don't have the same weight. And so you have to change the weight in your Laplacian. So we are rather working with a partial Laplacian. And what I am going to do here, you can do it using a basis of these sets, but you have to put it. So definition. Let E1, Ed be any autonomous basis of an argument. Then you can check that it becomes an autonomous basis of a space. Rehabilitation of a u at mu is a sum j of u at mu is the sum j from 1 to d. You take the question of u at mu and you apply to EjEj. So if you want to rather define a q Laplacian, you are going to put qn here. And the sum will be 2 infinity. And so because you put the qn, you are going to use the invariance by rotation property. Now when you write this, I want to see a Fede. I want to say that this is a differential operator I can compute. So this, I am going to list some fact supporting the definition that this is reasonable. Maybe the first fact is we got this by accident because we were looking at a main field game and we were trying to separate the operator. And we realized that when you group some of the terms together, you get this. And then we realized that a main field game is just Hamilton Jacoby equation and you add a u. So theorem. I claim that this has an explicit representation. It is very difficult over rd. You compute the divergence, the usual divergence. You take the bussier's time gradient of u at mu. And remember that this is a vector field and so it depends on x mu dx. And you add the integral over rd cross rd. You take the trace of the fashion of mu A x mu dx. And you remember that the trace is defined, the fashion is defined on rd cross rd. And this operator is O. I am going to call it O of mu. 
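My best reconstruction of the definition and of the stated explicit representation (the constant vector fields e_1, ..., e_d form an orthonormal family in L^2(\mu; R^d), and on the right \nabla_W^2 u(\mu)(x,y) denotes the kernel of the Wasserstein Hessian):

\Delta_W u(\mu) := \sum_{j=1}^{d} \nabla_W^2 u(\mu)(e_j, e_j)
= \int_{\mathbb{R}^d} \nabla_x\!\cdot\!\big(\nabla_W u(\mu)(x)\big)\, d\mu(x)
+ \int_{\mathbb{R}^d\times\mathbb{R}^d} \operatorname{tr}\big[\nabla_W^2 u(\mu)(x,y)\big]\, d\mu(x)\, d\mu(y),

with the first term the uniformly elliptic part and the second the degenerate one; I may be mislabeling which of the two pieces the speaker calls O.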
Now you can compute the Fourier transform of this. When you compute the Fourier transform of this, you see that the first operator is uniformly elliptic. The second operator, the eigenvalue. So you can compute exactly the eigenvalue. The eigenvalue are like this. I from 1 to n. And n goes from 1 to infinity. So you have a list of eigenvalues and it shows you why this whole thing is degenerate. And when you compute the eigenvalue of the first operator, you find out this eigenvalue. Let me list a few arguments to support that, to support the fact that I have a Laplacian. So let me define a common lowest motion. So this is the definition of that is theory. So common lowest order p of rd. So I am going to use a common lowest which was used by Geo-Münster. So I choose let mu be in p of rd. And I want to define a random path which is theta from mu. And I define it to be identity plus square root of 2 wt. And let me call this vt of mu. This is a graph of the path which Geo-Münster wrote. And I am going to define another path, b bar t of mu. This is a dt convolution mu, where t is a green function for Laplacian. And I am going to state the theorem. Let you from p2 of rd into rd of class c2. Because we know that what it mean to be of class c2 on a Hilbert space and the by definition of ith theorem is transferred to class c2 on the Vassus time space. So I claim that the Vassus-Tenla-Plaschen of mu at delta x is the Laplacian of mu at s if I define the opposite to the mu of delta z. So in the case of one particle, the Vassus-Tenla-Plaschen coincide with the classical Laplacian. Let u of t mu be the expectation of, sorry, I am going to call this vt. Let u of t mu be the expectation of vt of pt. Then the partial derivative of mu be the expectation of the Vassus-Tenla-Plaschen of mu. Let u of t mu be the expectation of the Vassus-Tenla-Plaschen of mu at delta x. Let u of t mu be the expectation of the Vassus-Tenla-Plaschen of mu at delta x. Let us look for a harmonic function. So what are here a lot in a basic, so take the g of mu, let us take g from rv to rv, b of class c3, symmetric, plus some condition. Actually here I don't need c3, c2 will be enough and I am going to impose that the secondary derivative are bounded for instance. Center g of mu to be the integral of rv cross rv of g of x minus y in units. So this is a functional which are very often and if you compute the Laplacian of t you find you get c. So what do we learn from there? We learn from there that the Laplacian cannot have a smoothing effect because otherwise this satisfy the heat equation. We will initial condition g and it cannot get a smoothing effect if g is not a greater than that. However if I take the Laplacian, if plus, so this is what I mean, I mean u of g. So u of g is not zero and so this doesn't contradict with that this doesn't have a smoothing effect. In fact we have a theorem proving that if you take a g plus o you have a smoothing effect in some direction. So I am not going to write the direction here. So there is a countable set of function and when you are in this direction we prove that g plus o improves regularity. However g is going to, the Laplacian improves regularity in some spatial direction. So another thing to look at is the subolive spaces. So we have been precise this is the idea how to defend subolive spaces. 
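Written out, the two flows and the heat-flow statement (again my reconstruction of the board, up to the time normalization of the heat kernel G_t):

\beta_t(\mu) := (\mathrm{id} + \sqrt{2}\, W_t)_{\#}\mu, \qquad \bar\beta_t(\mu) := G_t * \mu,

U(t,\mu) := \mathbb{E}\big[ V(\beta_t(\mu)) \big] \ \Longrightarrow\ \partial_t U = \Delta_W U, \quad U(0,\cdot)=V,

and at Dirac masses \Delta_W u(\delta_x) = \Delta_x\big[ u(\delta_x) \big], so for a single particle the operator reduces to the classical Laplacian.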
You make a list of the eigen function and the eigen value and as it is done classical in finite dimension you look at what corresponds to the Fourier transform and you are going to define hs of e2 of rd if the Fourier transform converges with the parameter s. And in hs we can show that when you solve the heat equation the Laplacian w dTu equal to Laplacian w of u. u at time zero is in hs. So you get a differentiable in hs but again what is hidden behind hs is hs is defined using some eigen function although they are countable they don't cover the whole set of eigen functions. So you are improving regularity only in some direction. Is this related to the way we have an operator against domain in the correct way? Yes. Thank you. Any questions? Is there an irisly form description of this operator? Yes. This is a kind of disappointing. We were able to introduce a measure because you use a measure to define it with a form and you can put integration right back for a small class. Let us look like this. So you use the… So if you take something that is in the domain of the irisly form you cover all the volume below the irisly form decrease which is equivalent to generating a positive if you use a ring semi-query. Oh we have an intent on that. So we have an infinite dimensional space but you are just checking the hsion in finitely many directions. So in particular more generally than this if I take any functional rate which is invariant and they are translating the measure that would be harmonic. So a plus would be zero. That is correct. You don't have any curvature of volume? The curvature of volume. Any other question? Your space is a subversion of an L2 space. You are using your buster space as a subversion of an L2 space so you expect the positive to be light-sparked. Yes. You can see that for example in the hsion. So was that the… Okay thank you very much. Thank you. What's the situation with the video? It's rain. No it's stop. It's stop.
We study stochastic processes on the Wasserstein space, together with their infinitesimal generators. One of these processes plays a central role in our work. Its infinitesimal generator defines a partial Laplacian on the space of Borel probability measures, and we use it to define heat flow on the Wasserstein space. We verify a distinctive smoothing effect of this flow for a particular class of initial conditions. To this end, we will develop a theory of Fourier analysis and conic surfaces in metric spaces. We note that the use of the infinitesimal generators has been instrumental in proving various theorems for Mean Field Games, and we anticipate they will play a key role in future studies of viscosity solutions of PDEs in the Wasserstein space (Joint work with Y. T. Chow).
10.5446/59168 (DOI)
And he's titled this ultimate transportation with free end times. Thank you. But your time is free end time because lunch is waiting. Thank you. I'll try not to keep you too long. And I'd like to thank all the organizers for inviting me and to Naseef. Nice to meet you, Naseef. And I'll be talking about work that's with Naseef Kaseub and Yohan Kim at UBC, where I'm a postdoc. And it's all fairly new. So I apologize for my mistakes like in the first slide that gave today's approach to 10. But the topic will be relating these two classical problems. Where the first problem is familiar to all of us, the lunch of transportation problem. The one just considering how to move a big pile of snow, which is piled according to distribution U, into a snow fort, for example, new, and do it in an optimal way to minimize the transportation cost. Whereas the scrollpot embedding problem starts with a Brownian motion, which begins according to a fixed distribution U. And you have to construct a stopping time tau, which such that the Brownian motion at the stopping time tau has distribution nu, the target distribution. So the metaphor here is a pollination of strawberries by dangling. And there are some real models, real applications for this problem that there's a metaphor. But one of the main questions to ask is when the transportation problem consider when the optimal transportation is given by a map. Here we're really considering when the optimal stopping time is given by the hitting time of a barrier. So the first time the Brownian motion enters some close up. So the barriers are? Oh, yeah. I won't use R later. So here's a little bit of a tour to connect these two problems, these two seemingly different problems. And the first step is something that's come up a few times in this conference so far, which is the dynamic formulation for optimal transportation with the fixed end time. So here we look at optimal transportation costs that can be written as the infimum of a Lagrangian over curves connecting the initial point y and the target z. And so the first sort of consider maybe for the quadratic Lagrangian, which which builds the quadratic cost function is considered by Venemo and Brenier, kind of relating it to fluid mechanics. And later, Bernardo Bufoni with applications to matter theory, as he mentioned yesterday, and Fatiha Tagali also extended that work. And I apologize right now that I'm a hazard of new system in these fields and if I miss some important references, please forgive me. So our main project will be focusing on a variation of this problem where the, instead of the fixed end time at one, we consider the optimization problem where we're not only optimizing over curves gamma, but also with a free end time tab. And so it shows up both as the integral sign should probably be inside brackets, the integral of over the integral of the running cost or Lagrangian and the end position is now the curve at time. But in this formulation still use a transportation cost or an optimal transportation problem. And it's natural now to kind of generalize this to a control theory perspective, which is some of my, my background's been studying control theory. And this has been done in the fixed end time case by Leon Agriche, the difference, the things we change here are instead of optimizing over all curves with a Lagrangian that depends on the derivative of the curve. 
We now are optimizing over control policies and end times and the dynamics, the curve is determined by the differential equation with some prescribed velocity function. So this was considered a fixed end time developing the theory of using a Pontiac and maximum principle, but now we can consider it with the free end time as well. But the next step is actually a big leap and also has to do with my background in stochastic optimal control theory, where now we want to look at the stochastic version of the problem. So we replace the deterministic differential equation for the control, for the trajectory gamut with a stochastic differential equation for a process xt with a new diffusion term. And the big leap here is that now the optimization problem can no longer be formulated as an optimal transportation problem. But what we want to do is use the techniques that we can, or use the analog of the techniques for optimal transportation to also study this problem. And this has already been had important contributions at least for fixed end time in optimal transportation related literature, starting with Mikami Ikulin who looked at the limit as the diffusion goes to zero and how this recovers the optimal transportation maps. And then Shinti Yanargin-Rapani and as a young Shintian earlier Chen Pavone in Georgia looking at the linear quadratic case and how this relates to a minimization of the attribute. But so the actual problem we'll be discussing in this talk is both this free end time deterministic control and then we'll just simplify and look at the scroll card and then a problem where there's no control. We have the Brownian motion but we keep the minimization of a running cost or the ground gain and the freedom to choose the end time tau. And this has had a large and very large and important literature starting in probability even more recently because of applications to mathematical points. And I'll discuss. We lost the XT here. So I replaced XT so XT had a stochastic differential equation but now we're just working directly with the Brownian motion. So here WT is just the Brownian motion that we assume is beginning with the distribution mu but then we're constrained to have it stop with the distribution mu using the stop time machine. So this is a subset of this problem where this stochastic differential equation is just XT equals WT. F is 0 sigma 1. Yeah. So next I want to kind of give an overview of the type of problems we're going to consider. And so to begin with just the general problem is just the classical existence in this theory. And so here I first am referring to the results of Gingbo and McCann because this gives a sort of direct analogy to the way they resolved the problem for the optimal transportation is kind of a direct analogy to our approach for this problem. But the history of course is a lot longer and includes many contributions. But a important thing to consider for the uniqueness aspect is whether or not the optimal transport can be given by a transportation map. And so in our case this adds a not so classical element which is considering whether the end time can be recovered as the hitting time of a barrier. And so one of the tools we use is the Pontiagin maximum principle in particular the transfer sality condition for the optimal control problems with free end time. But then also in the scrollcott and vetting literature this has been considered by these authors, Brute and Roast. 
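To fix notation before the overview continues, the two problems just described are, as I read them:

c(y,z) := \inf\Big\{ \int_0^{\tau} L\big(t,\gamma(t),\dot\gamma(t)\big)\, dt \ :\ \tau\ge 0,\ \gamma(0)=y,\ \gamma(\tau)=z \Big\},
\qquad
\mathcal T(\mu,\nu) := \inf_{\pi\in\Pi(\mu,\nu)} \int c(y,z)\, d\pi(y,z)

for the deterministic free-end-time transport, and

\inf\Big\{ \mathbb{E}\Big[ \int_0^{\tau} L(t, W_t)\, dt \Big] \ :\ \tau \text{ a stopping time},\ W_0\sim\mu,\ W_\tau\sim\nu \Big\}

for the Skorokhod-type problem with free stopping time.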
And the other tool that we'll make use of in particular to translate the techniques from the deterministic version to the stochastic version is the Eulerian formulation of the optimal transportation problem. The mentioned four has been considered by Bernoulli and Bernard DeFony. And the important thing is that this relates the problem to fluid mechanics, to kinetic theory, you'll see the formulation of our problem is closely related to kinetic theory. The interesting aspect of that is when we consider the Cantorovic duality, we find that the Cantorovic duality can be expressed using as a reboundary PDE. In particular an optimization over a variational inequality or quasi-variational inequality of the type that's been considered for example by Vincent Sain and Jons as the equations that determine the value functions for optimal stopping problems in stochastic deterministic optimal control. But then the free boundary PDE is also a reason in the scrollcott and vetting problem. There are some of the authors that have considered it. But the new part that we'll bring to this is how we consider this as part of the dual optimization problem to our free end time optimization. And this all leads to a lot of questions involving how to understand the solutions to these differential equations, whether weak solutions or viscosity solutions and understanding regularity. And this is a topic that I think affects pretty much everyone. Everyone here, lots of contributions from everyone. So the literature is too big to summarize. But in particular one thing that's been very helpful for me is reading the great book on the introduction of variational inequality as a professor, Kindler, and Stapakia. Which is available in the book online. So part one. Is this available at Ectronik, man? Yes. Okay, so here's the problem the first problem we'll consider, which is the occult transportation with the cost function defined by the occult control problem with free end time. And the first thing I want to discuss is what is the Eulerian formulation for this problem. So the theorem is under certain conditions, the mu is nice and everything else is nice. The infamon value over the transportation problem is equal to the infamon value over an Eulerian formulation where we work with Eta, which is a time dependent density on base space as in kinetic theory. So we're satisfying that the initial condition, at least the spatial marginal, is given by the source distribution mu. And then we have a stopping measure, which is just the ordinary probability measure on space and time, satisfying that the spatial marginal is the target distribution mu. So instead of the free end time case where you're transporting from the density from time 0 to time 1, which gives the sort of interpolation of auto transportation from the con. Now we have a sort of smearing of the transport plan as a space time density, which has as its spatial marginal the target distribution. And so the continuity equation takes this form where this is an equation for each t and x. In the weak form this also will include the source constraint. And so you notice this is immediately indetermined on the velocity space. So either it could be any measure on velocity space, which you can think of as corresponding to some generalized trajectories for our optimal control problem. 
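A way to write the Eulerian formulation just described (my reconstruction; eta_t is a nonnegative measure on phase space (x,v), rho a nonnegative stopping measure on space–time, and pi_x the projection onto the space variable):

\text{minimize } \int_0^{\infty}\!\!\int L(t,x,v)\, d\eta_t(x,v)\, dt
\quad\text{subject to}\quad
\partial_t \eta_t + \nabla_x\!\cdot\!(v\,\eta_t) + \rho = 0 \ \ \text{weakly},\qquad
(\pi_x)_{\#}\eta_0 = \mu,\qquad (\pi_x)_{\#}\rho = \nu .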
But the one dimension, one direction of this theorem is sort of obvious that if you have a transportation map and nice enough trajectories, you can embed them into a density in stopping measure and get a weak selection to this continuity equation showing that the E is less than or equal to B. That the equivalent we have to consider the dual picture. So to begin with, I want to review how the duality shows up for the optimal transportation problem given by Lagrangians with a fixed end time. So this is the standard Cantorovic duality using this sign conventions. You have the constraint is that the n potential psi minus the initial potential phi should be less than or equal to the cost. And so the standard duality theorem states that this is equal to the initial value function. And then here, you can always assume the potentials are optimized in this sense. So since we're maximizing this, we want phi to be as small as possible. So we can always choose it to be the supremum over those that satisfy the constraint. But then when you see that when you have this supremum, you can actually replace this with the infimum over the trajectories to get this is just the supremum over all trajectories where we have the n potential now evaluated at the terminal position and the integral of Lagrangian. So this correspondence really relates the potential to the value function j psi, which is the solution to Hamilton-Jakobi equation. And this Hamilton-Jakobi equation runs backwards in time starting at time one equal to psi and giving phi at time zero. And really, the solution is you could determine as just the supremum over the trajectories starting at time t and at position y. And that shows the equivalence between the phi and the Scotty solution, Hamilton-Jakobi equation. Which can also be expressed then more classically in terms of the Hamiltonian flow where you have the gamma, the trajectory and its momentum satisfying Hamiltonian equations in the relationship being the momentum should be the greater value of the function j. So now I want to go into the free end time problem and show the relationship with the duality. But now I take a different perspective and do it directly from the Eulerian formulation. So if you remember in the Eulerian formation there are really two constraints. There's the target constraint that the spatial marginal of the stopping measure mu is equal to nu and the continuity equation. And so those have dual functions. But I don't know if I emphasize this, but the Eulerian formulation we had before was a linear optimization problem. They had a linear, Eulerian equation that was linear in eta and a linear cost in eta and only linear constraints. So we can formulate the dual as a linear optimization problem with the dual function psi corresponding to the target distribution constraint and the dual function j corresponding to the continuity equation constraint. And because we had the condition eta and rho were non-negative in Eulerian formulation which is a very significant condition we only get inequalities. And so the two inequalities say that psi should always be less than or equal to j and that j should always be a super solution. Well, okay. So this solution holds now for each t, each x and each v. But we could do the same thing of optimizing over the potentials, particularly optimizing over j. 
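In formulas, the fixed-end-time duality being reviewed (standard Kantorovich duality with these sign conventions):

\sup\Big\{ \int \psi\, d\nu - \int \varphi\, d\mu \ :\ \psi(z)-\varphi(y) \le c(y,z) \Big\},
\qquad
\varphi(y) = \sup_{\gamma(0)=y}\Big\{ \psi(\gamma(1)) - \int_0^1 L(t,\gamma,\dot\gamma)\, dt \Big\} = J_\psi(0,y),

where J_\psi solves the Hamilton–Jacobi equation \partial_t J + H(t,x,\nabla_x J) = 0 backwards in time from J(1,\cdot)=\psi, with H(t,x,p)=\sup_v\{ p\cdot v - L(t,x,v)\}.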
And a well-known result in viscosity solutions is that the viscosity solution to, in this case, a quasi-variational inequality can be expressed as the minimum, point-wise minimum over all super solutions. So what we see here is these equations here are the equivalent to being a super solution to this quasi-variational inequality where we see if this, the max of this being equal to zero implies each one of these individually is less than or equal to zero, but also implies at each point one of these should hold. And so this is given, the viscosity solution by Perron's method is given by the point-wise minimum over all j satisfying this equation, which is necessarily satisfied by the, or we can assume satisfied by the optimum of j. Here. Now this is just the equation in T and x because we've used the Hamiltonian where we've also taken the soup over, and we're showing that in this equation. And the proof that these two are equivalent problems just happens from the dynamic programming principle, essentially the same as I mentioned on the previous slide where now that you have a representation of an optimized potential phi as the supremum over all trajectories gamma, n times tau of the integral of the cost and the n potential at the end position. So why does this help us characterize the solution? Well here's just a simple proposition, you can state using improvement, a lot of generality that is the only time dependence on the equation occurs in the Hamiltonian, which comes from the time dependence of the cost function L. So if L is increasing that implies the Hamiltonian is decreasing and can show that that implies the unique discussed solution j psi is not increasing, by decreasing I mean not increasing. And the same occurs if L is decreasing. So in both of these cases you have an explicit expression for the free boundary which separates the points where this is an inequality and this is an inequality just given by taking the infimum over times for the first case or the second case. And so then the analog to optimizing over the n potential psi is then given by having psi satisfy this transversality condition along this free boundary. So a couple of pictures, see these pictures, psi is on the support of the target distribution nu is in blue and here's the case of A so j is in red and it's decreasing until it begins to hit psi. And this position psi again is in red and here's a double snapshot in time as psi increases and separates from j. So those points where j, transversality condition, so I don't know, in the Pontiagin maximum principle terminology, transversality condition is used to, to describe the terminal conditions for the generalized momentum. And when you have a free n time problem, you get an additional constraint on the, or the Pontiagin maximum principle which is that the Hamiltonian should be zero at the n position and n time. And that's really what's showing up here. In fact, it doesn't imply that there's any sort of transverse intersection. So, okay. So, so under these, under these, under these conditions, particularly with L, strictly increasing or strictly decreasing and a nice enough Hamiltonian like strictly convex and smooth and nu and nu have indensity, for example, you get a unique optimal transport map. 
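The quasi-variational inequality for the free-end-time dual, as I understand the verbal description (signs and normalizations are my reading):

\max\Big\{ \partial_t J(t,x) + H\big(t,x,\nabla_x J(t,x)\big),\ \psi(x) - J(t,x) \Big\} = 0,

with J_\psi obtained as the pointwise minimum over all supersolutions (Perron's method), and the dual value \sup_\psi \big\{ \int\psi\, d\nu - \int J_\psi(0,\cdot)\, d\mu \big\}.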
And this is a transport plan which is given by a transportation map, which is just the optimal introductory gamma x evaluated at the optimal n times tau, which satisfy this transversality condition at the, which, so, and this condition is the Hamiltonian now is strictly increasing or strictly decreasing, this uniquely determines n time tau. And particularly it determines it as the pre-downing on gamma. And then for previous times, you have the same sort of Hamiltonian flow for the trajectory symbol. And the Eulerian point of view, the Eulerian point of view has nice compactness properties. It's easy to show attainment of optimizers. And then given, given the characterization of the dual problem, the support of the target, the support of the stopping measure should be in the graph of the free boundary. Also, it turns out that the value function is C1, then you can see that it determines the momentum part of the equation and you just get that the support of eta in the velocity space is always at the derivative of the Hamiltonian. And the density is uniquely determined by this continuity equation, just a nice transport equation, but now nonlinear because it involves both J and eta. And what's sort of unresolved is in the case even if everything is nice, the solutions J might not be C1. And the case that they're elliptous, kind of, in general you could have like discontinuities in the gradient along shocks. And so there's still a bit of a question in the Eulerian formulation of whether the optimal eta is uniquely determined in that case, in the case that J is on the ellipticians. And so in this slide I just want to make a little relationship with the classical transportation problems, which is what when you get, you choose this Lagrangian where, so it's infinite if the speed is greater than 1 and it's the derivative of some time dependent function. The speed is less than or equal to 1, whereas function is increasing, it's zero. So the optimal trajectories connecting any point Y and Z is the straight line with speed 1. And so in this case the transportation cost has this familiar form of a convex or concave function of the distance of Y and Z. So this of course has been studied by Gamebo and Necan, showed that existence of a transfer map. So now here we have this sort of picture of the difference of the solutions in these two regimes, which is that if E is convex, that corresponds to our case where the cost is strictly increasing. And in this case the free boundary is hit from below by the trajectories, so the time goes in the, is increasing in the vertical direction. So the trajectories have slope zero and are hitting the free boundary from above, and in this case the G is concave, they hit it. Or hit the free boundary from above in the case G is concave, which corresponds to orientation preserving or rotation reversing the object. Okay, so now I'd like to move on to the scroll cut problem. So the techniques are basically going to be the same. So in history, 28 constructions by problem lists and one view, so maybe a few of these. But then recently in the mathematical finance literature, it's been related to important problems in the options pricing, and there's been a nice review by Avaloch considering all the different constructions and relating them to optimization problems, which is generally not how they're constructed in the first place. 
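Before moving on to the Skorokhod problem, here is how the example Lagrangian mentioned above reduces to the classical costs, under my reading of it (running cost h'(t) when the speed is at most 1, infinite otherwise, with h increasing):

L(t,x,v) = \begin{cases} h'(t), & |v|\le 1,\\ +\infty, & |v|>1, \end{cases}
\qquad\Longrightarrow\qquad
c(y,z) = \inf_{\tau \ge |y-z|} \big( h(\tau)-h(0) \big) = h(|y-z|) - h(0),

so one recovers costs of the form g(|y-z|) as in Gangbo–McCann, with h convex giving the increasing-cost case (a) and h concave the decreasing-cost case (b).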
So our optimization problem we're going to consider here is just this minimizing over stopping times tau that satisfies this target constraint, the transport embed, the Brownian motion, beginning at view, and the distribution. So earlier in formulation is a bit simpler now. But we have to use it because we don't have an optimal transportation formulation anymore. And so now we have non-negative density eta and stopping measure rho satisfying this heat equation, which is really like a substitution of the heat equation since rho is non-negative, starting at mu and transporting, stopping along mu. And the dual problem has the same form where we just are maximizing over the difference of the potentials. And the potentials are a solution to something that looks like a parabolic equation going backwards in time. This is the sign, so on. And inequality. So similar to before, the system of inequality is when we minimize over J is given by the quasi-variational inequality, where one of these conditions should hold at this point. But to prove the dual problems actually attained, we have a couple remaining problems to deal with. So here, one thing, whenever psi is always less than J, you could subtract a positive function. But clearly that doesn't maximize the cost, so you can just always assume psi is the largest function below it. But then a little harder to deal with is you can subtract a subharmonic function from both psi and J, which still results in a super solution to this equation. And this shows unless the measures mu and nu are in subharmonic order, that would imply that the dual problem is unbattered, that the dual value is actually infinity. So a necessary condition for the primal problem to be feasible is then that the measures mu and nu are in subharmonic order, which is a stand. Easy to show just properties of Marni-Gales as well. This is harder to handle, but you got some ways to do it. It's a bit of a work in progress. And then to conclude that the dual problem is attained, and not only attained, it's attained at sufficiently regular psi and J. So that means twice-weekly, differentiable, and continuous in particular. And psi, yeah. And so the point of having them sufficiently regular is to make sense of these conditions. So to check optimality, you really just have to prove that there's no duality gap between the optimizers. That's just saying that if you have admissible eta and rho and admissible psi and J psi, then if the costs are equal, they must be optimal. And then given the equations, you can see that this immediately yields the sort of complementary slackness conditions, which is saying psi is equal to J, anywhere where rho, everywhere on the support of rho, rho almost everywhere. And the backwards parabolic equation is satisfied almost everywhere on the support of eta. But then we have this exact same proposition that we had in the deterministic case, which determines the, which describes the dual functions J psi and psi. And so once again, the monotonyst E and psi of J psi only depends on whether L is increasing or decreasing. The free boundary can then be described explicitly. And sort of important for the proof of dual attainment is that in fact, you can choose psi so that the, it satisfies this sort of transversality condition where there's a plastic size given by the free boundary. And as a result from the previous slide, you can conclude that eta has support in the set where, in the set of x where t is less than the free boundary. 
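To make the "stop at the hitting time of a barrier" picture concrete, here is a small Monte Carlo sketch: a Brownian motion is stopped the first time t exceeds a barrier function s(B_t). The barrier and initial law below are arbitrary placeholders of mine, not ones computed from the dual problem.

import numpy as np

def stop_at_barrier(x0, s, dt=1e-3, t_max=10.0, rng=None):
    # Run a 1-d Brownian motion from x0 and stop it at the hitting time of the
    # space-time barrier {(t, x) : t >= s(x)}, i.e. the first time t >= s(B_t).
    if rng is None:
        rng = np.random.default_rng()
    x, t = float(x0), 0.0
    while t < s(x) and t < t_max:
        x += np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t, x

s = lambda x: 0.5 + x ** 2               # placeholder barrier, purely illustrative
rng = np.random.default_rng(1)
stopped = np.array([stop_at_barrier(0.1 * rng.standard_normal(), s, rng=rng)[1] for _ in range(2000)])
print(stopped.mean(), stopped.var())     # empirical law of B_tau, to be compared with a target nu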
This follows from the L being strictly increasing that if this condition, this condition is satisfied at S, it cannot, the inequality can't, it can't be satisfied for t greater than s for psi, which was must in constant. So, so we get this condition on the support of eta, which uniquely determines eta. And if you assume extra regularity, then you can see that, okay, well, actually it's just the equation that's relating the rate of particles stopping on the free boundary, which is given by the gradient of eta dotted with the gradient of the free boundary with the target distribution. So in fact, everything, all the dependence now goes away besides the free boundary. So eta doesn't depend on J and psi at all, as it did in the first part of the talk. It did depend on J and psi in the first part. Now it just depends on the free boundary. And one of the consequences of this is that in fact, there's a, there's a rigidity which says that if you have a free boundary S, which determines the optimal stopping time for some increasing L, then it's optimal for any increasing L. And then it is unique. So, so the reason is, well, once you have a free boundary, then you cannot just use this equation to redetermine psi. And then given this equation, you can just solve the backwards parabolic equation to get J psi that's equal to psi on the free boundary. And by the comparison principle, that will be a super solution to the equation. So it's optimal. And the case A and B are really the same. You just have to replace the domains with either the hypergraph or the hypergraph of the free boundary S. And this, this equation for the case B would reverse the orientation in a sense, so you replace new with new. And finally, I want to like relate it to the work of these three authors who go about Cox and Houston got an influential paper relating the scrollcard embedding problem with optimal transportation. But they, and they took a different perspective, which was the perspective on pass space. And what, how they expressed the duality of the dual problem as the maximizing over all martingales on the probabilities case, maximizing the difference of psi and the expectation at time zero with the constraint that the martingale is always greater than equal to some continuous function psi. So what we've done is kind of reduce this problem to just something that's on the, on the space time. And, and can see given the optimal J and tau, you can explicitly construct this martingale just as the evaluation of the function J along a Brownian motion with the expectation of the remaining cost. And, but one of the important aspects of our work is now using the PDE method. We have sort of new ways to prove the dual attainment, which they didn't have. So they did have a similar characterization of the stopping time as hitting of a barrier using something, something like the cyclic monotensity principle on pass space. So just to end, I want to kind of go back and talk about this sort of general case, which I skipped over. So in the general case, we have some stochastic differential equation with a control variable A, and we want to choose both the control variable A and the n time tau to minimize the expected cost subject to the, to the stopping constraint. And you can write it in this or in that way. Before we had the integration over the velocity space, it's almost as natural just having a general control space. And it's a well continuity equation or a factor plane equation. With row, row is still non-negative. 
Stopping measure, eta takes values as measures on this control space. And then immediately as of any optimization problem, you have some duality where the dual problem is posed over, over the same, same potentials. In fact, the same proposition holds that when you have the solutions, you have the same monotonicity properties to characterize the tau. Although, okay, there's some details when you have the diffusion depending on X and A, there's some technical details you get fully nonlinear equations. But even in the case where sigma is just constant, there's a lot of interesting things to be done here. And so I think there's a case where sigma is constant and as we had in the first part, that was just the velocity. And then L was the Hamiltonian, the grandian. You get some interesting, interesting problems depending on the growth conditions for the Hamiltonians. So in particular, it's sort of, you get these three sort of distinct cases where if the Hamiltonian grows super quadratically, quadratically, or sub-quadratically, well, when it's growing super quadratically, you can sort of immediately get some quarter space estimates. And actually, that's nice. This ship sort of shows you can really control the processes. When it's sub-quadratic, you can sort of show that the density always has finite entropy. So you can never control it beyond at least having finite entropy. And then for the quadratic case, this is sort of what's been considered by Leonard and Gentian and others and having really nice relationships with the problem of minimization of entropy on path space. So I think I'll leave it there. Thank you everyone. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Questions? Let's look at it. So the role, when the role is optimal, when you get the role and can you say more about the role or can you? How about row? About the structure of row. So the easy thing to show is based on this condition. Without any, with only, okay, just assuming that this psi and j are continuous, this immediately implies that row, that the support would grow is in the set where j psi is equal to psi. So what we're really doing is in the case where the dual problem has a solution that has this sort of monotensity property so that j psi is equal to psi on the epigraph of a function and the cost is increasing, then you can show that row is actually supported on the boundary. And the reason being that, so for the square root of the writing problem, if it enters into the set where j psi is equal to psi, we have the second condition that says that the parabolic equation should be satisfied, but now j psi is equal to psi so that says that one, that the positive of psi should be equal to L, but L is strictly increasing and it's L is strictly increasing in time in this case. And so this is not, and so it's satisfied along the free boundary but not satisfied at later times. So we have to get a strict inequality for psi, which leads to a strict inequality here, which implies that, well, in this case it implies there's no support for eta but then from the conjugate equation you can apply there's no possibility you have row. So the row, the row needs to be supported on the free boundary. Answering question about regularity of the free boundary is a very difficult problem, but of course it's not applied by anything I've discussed. In the stochastic case, if you thought of applying the kind of techniques you're using to another kind of problem, the problem is state constraints. 
It's across the infinite, or many of my settings makes sense. Yes, so certainly there's a connection with a lot of these problems. A lot of the literature, at least I've seen on these problems, takes the perspective of backwards stochastic differential equations and they pose the backwards stochastic equations as some sort of transversality condition for the state constraint. And it's closely related in some ways, but to draw the connection with the stochastic differential equations is pretty clear, but using these methods for state constraints, I haven't really considered yet. But you think there's an unrivaled practical combination, this one is right today, stopping time. Right, so actually it's probably the same just because the condition in state constraints is generally, I would say, I think that you just restrict to the row that are properly measured. Some set O, right, so just restricting to the property is the same. But I haven't really thought about how it really relates. Any other question? Everybody's hungry? Thank you.
We explore a dynamic formulation of the optimal transportation problem with the additional freedom to choose the end-time of each trajectory. The dual problem is then posed with a Hamilton-Jacobi variational inequality, which we analyze with the method of viscosity solutions. We find properties that imply the optimal stopping-time is the hitting-time of the free boundary to the variational inequality.
10.5446/59170 (DOI)
Thanks a lot for the invitation. I would like to talk to you about two projects that have been going on for some time, both concerning the long-time behavior of the Langevin equation. The Langevin equation, from a probabilistic point of view, is a degenerate diffusion in position and speed, where you suppose that you've got a simple friction and a simple confinement potential, and you try to estimate the convergence to equilibrium for this SDE, where the equilibrium is the Gibbs measure associated to the Hamiltonian. So you've got something like that, where H is, as usual, U(x) plus |v|^2/2 — and the square root of 2 in front of the noise is there precisely so that this is the equilibrium. So we are interested here in the long-time behavior of this SDE. You can also rewrite it from the PDE point of view, as a flow of probability measures, and we will write the equation at the level of the densities of the probability measures: d mu_t = f_t d mu, and the equation — which has possibly been written by Anton before — is that the time derivative of f_t plus v dot grad_x f_t is equal to the Laplacian in v of f_t minus v dot grad_v f_t plus grad U(x) dot grad_v f_t. And our main purpose is to find some distance in order to say that mu_t converges to mu, with some constant and some exponential rate, times something which is perhaps not exactly the same distance, related to the behavior of the initial measure from which you are starting relative to the final measure. In fact you've got many choices for the distance, and we will be mainly interested here in the entropy and the Wasserstein distance — but I do not know if I will have enough time to talk about the Wasserstein distance. So let me first talk a little bit about history. In fact, there was not really a competition, because we did not really know what the others were doing at the time, but there were two cohorts, the probability guys and the PDE guys. In fact it is the probability guys who had the first results around this problem, due to Wu and Talay, roughly at the same time, around '01–'02, where they proved exponential convergence to equilibrium in total variation distance. And at roughly the same time you've got Desvillettes and Villani, who obtained convergence to equilibrium for the first time, but for the L2 distance, and it was not exponential, it was sub-exponential, in cases where we should hope it to be exponential. After that, you get Hérau and Nier, who have done a nice spectral study of the kinetic Fokker–Planck equation, or the Langevin equation, and they have improved the convergence to equilibrium. And after that you've got Villani, who has produced the theory of hypocoercivity, where he proved in a very nice way L2 convergence and entropy convergence to equilibrium, under various conditions that we will see just after. And at roughly the same time, a little bit after — so this one is total variation, this is L2, this is L2 plus entropy decay — you've got here François Bolley, myself and Florent Malrieu, where we did it, it should be in 2010, roughly in 2010, for a Wasserstein distance. And after that, there are a number of papers; of course, Anton has recalled Dolbeault–Mouhot–Schmeiser, where it was also in L2, but without using initial regularization, which was quite nice.
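Before continuing with the history, here are the equations just described, written out (my transcription of the board):

dx_t = v_t\, dt, \qquad dv_t = -v_t\, dt - \nabla U(x_t)\, dt + \sqrt{2}\, dB_t, \qquad
d\mu = Z^{-1} e^{-H(x,v)}\, dx\, dv, \quad H(x,v) = U(x) + \tfrac{|v|^2}{2},

and, for the density f_t = d\mu_t / d\mu,

\partial_t f_t + v\cdot\nabla_x f_t - \nabla U(x)\cdot\nabla_v f_t = \Delta_v f_t - v\cdot\nabla_v f_t,

the goal being a distance d and a rate \lambda>0 with d(\mu_t,\mu) \le C\, e^{-\lambda t}\, \Phi(\mu_0).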
And so this one was in 15, like, three years ago, even it was in after a long time before, and you have also the paper by Einstein now, if I pronounce it well, Anton, which is here, and Schmeisser, Schmeisser, I don't know, I don't know. Sorry for him. I slightly, quite good riddance, quite good. So in fact, let me recall what they have proved on these people, and what was the limitation that they got at the time. So in fact, the first thing is that if you consider convergence in total variation, it is in fact very, very general for the condition. You've got an exponential convergence. If you've got something as simple, as a linear behavior of you, you have only to suppose that you've got this when you're quite enough. Of course, it's very general, and it relies mainly on the main 3D sphere, where you have to have some control, some mineralization control, and so it is exponential, but it is rather qualitative, more than quantitative. So it's a very nice result. It's what we can hope at best, but of course, it's not really informative for what can be done. Concerning the L2 convergence, it is also quite general. In fact, what you need is something like a general control, like this one for the gross control of you, and another one, which is the Poincare inequality. I will recall it quite a little bit later. So it's quite general and not so difficult to verify, even if it gets some conditions which are less general than this one, but in fact it's roughly equivalent to a Poincare inequality these conditions. So after that, you've got the entropy convergence. And for the entropy convergence, you've got something more drastic. You have the second derivative, which has to be bounded. And with that, you have to add some logarithmic sovereign finality. So it's quite strange that you've got some such hard constraints, which is quite not natural when you think about the Fogart-Lompe equation. And then after that, you've got the W, the Wasserstein convergence, where we've got, with Francois and the Floran, very bad result. In fact, you've got something which tells you that you has to be a perturbation of quality, let's say. Perturbation of quality. So it's quite strange from the point of view of the Fogart-Lompe equation that you've got some drastic conditions for the entropy convergence and also for the Wasserstein convergence. And so my main points here will be to show you, I believe quite quickly, that we can do really better concerning the entropy convergence, and I hope the Wasserstein distance. So the true goals that I have are this one. The true goals are this one is to get entropy convergence without bounded action, without bounded actions. And the second goal will get Wasserstein convergence in a quite general setting, in a more general case. I will put this there after that. So what can be done for the first goal? So let's first try to tackle this one. And in fact, we will probably be not very innovative in this one, because we will try to do exactly what Cedric has tried to do. And reproduce a little bit his proof. And in fact, what was quite nice in his proof is that he says something that he needs and he wasn't, it was a mistake. It was not what he really needs, and I will try to show you why it was not, or he was wrong, in fact, and not only in politics. Something, sorry, Guillaume. Okay, so we will adapt, we will adapt, okay, Villain is technical. So let me recall you very quickly for people who may not be familiar with this one. Of course, you've got something quite simple. 
Consider first the usual, overdamped Fokker-Planck equation. You take the variance Var_mu(f_t); using the generator and an integration by parts, its time derivative is minus (twice) the integral of |∇f_t|^2 against mu, and if you have a Poincaré inequality this dissipation is bounded below by a constant times Var_mu(f_t), which gives exponential decay of the variance. So if I took, not the f_t of the kinetic equation, but the density for this overdamped Fokker-Planck equation, everything would be fine. But for the kinetic Fokker-Planck equation, the derivative of Var_mu(f_t) only involves ∇_v f_t, because there is noise only in the velocity variable. So this inequality cannot hold: you cannot hope for anything Poincaré-like, because you only control the velocity gradient of f_t, and if you start from initial data depending only on the position, the derivative is zero and you can conclude nothing. So you have to modify the Lyapunov functional you consider. Villani did this in two different ways, and the one I will use is what is called the multiplier method; it is nearly the same as the original one, but a bit more flexible. For example, if you want to do it not for the variance but for the entropy, you use the entropy as before, whose dissipation is again degenerate, and you add a term of the form ∫ S_t(x,v) ∇f_t·∇f_t / f_t dmu, where S_t is a matrix-valued function of position and velocity. If you choose S_t well, and in particular symmetric and positive, so that you can compare this twisted Fisher information with the full one (the H^1-type quantity), you can adapt the argument and again recover exponential decay. Villani did this in the L^2 case, and there he used the Brascamp-Lieb inequality: Var_{e^{-V}}(f) ≤ ∫ (∇^2 V)^{-1} ∇f·∇f e^{-V}. When you try to run the multiplier method for the entropy, it looks as if you need the analogous entropic Brascamp-Lieb inequality, but for the entropy this is false in general: the entropic version, with the entropy on the left and ∇f·(∇^2 V)^{-1}∇f / f on the right, does not hold in general; this was shown, by Ledoux I believe, a few years ago. So one seems stuck at this point. But in fact it is not really a Brascamp-Lieb inequality that you need, and even in the L^2 case it was not; I do not know why Villani went so far as to invoke Brascamp-Lieb, whereas what is really needed is easier to prove.
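To make the comparison explicit, here is the standard computation in display form, as I understand it (constants are only indicative):
\[ \frac{d}{dt}\,\mathrm{Var}_\mu(f_t) = -2\int |\nabla f_t|^2\,d\mu \;\le\; -2\,C_P\,\mathrm{Var}_\mu(f_t) \quad \text{(overdamped case, using the Poincar\'e inequality),} \]
whereas for the kinetic equation one only gets
\[ \frac{d}{dt}\,\mathrm{Var}_\mu(f_t) = -2\int |\nabla_v f_t|^2\,d\mu, \]
which cannot be bounded by the variance. The Brascamp-Lieb inequality mentioned above is, for strictly convex \(V\),
\[ \mathrm{Var}_{e^{-V}}(f) \;\le\; \int (\nabla^2 V)^{-1}\,\nabla f\cdot\nabla f\; e^{-V}\,dx . \]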
So what can be done? We choose, and I believe this is also what Villani tried, S_t(x,v) to be, roughly, the symmetric matrix with entries ε^3 α_t^3 H^{-3η} and ε^2 α_t^2 H^{-2η} on the first row, and ε^2 α_t^2 H^{-2η} and 2 ε α_t H^{-η} on the second, where H is the Hamiltonian, η is a positive parameter to be chosen later, α_t = 1 − e^{−t}, and ε is between 0 and 1. A few comments. First, S_t is not diagonal: if it were diagonal you would be stuck with the same problem as before, you would gain nothing in the position variable; you have to mix the derivatives. Second, the factor α_t = 1 − e^{−t} takes care of the fact that we do not want to use an initial regularization: the extra term vanishes at t = 0, so we avoid the issue that Dolbeault-Mouhot-Schmeiser also had to avoid, in a somewhat simpler way, even though the regularization itself is not so difficult. Third, we add weights, powers of the Hamiltonian, and this is the slightly delicate point, because we are really aiming at a strengthening of the Poincaré or logarithmic Sobolev inequality. After some calculations, which I do not show because they are not difficult but a bit tedious, differentiating the twisted Fisher information ∫ S_t ∇f_t·∇f_t / f_t dmu produces, up to constants, a good negative term of the form −c ∫ H^{−2η} |∇_x f_t|^2 / f_t dmu, a bad term involving the weighted Hessian H^{−2η} ∇^2_x U, and a term c̃ ∫ |∇_v f_t|^2 / f_t dmu which also comes with the wrong, positive sign. The point is that the entropy dissipation gives you −∫ |∇_v f_t|^2 / f_t dmu, so the last term can be absorbed, and the bad term can be handled as well, provided the weighted Hessian is bounded. So you can already see the conditions needed to get entropy convergence to equilibrium. You need two things. First, that there exists some η > 0 such that H^{−2η} ∇^2_x U is bounded. Second, a weighted logarithmic Sobolev inequality of the form Ent_mu(f^2) ≤ C ∫ (H^{−2η} |∇_x f|^2 + |∇_v f|^2) dmu. I believe this is the point that Cédric did not write down precisely: this is what you really need, because the entropy dissipation alone only gives you the velocity part, and you have to recover something that comes back to the entropy; so you need an inequality of exactly this weighted form.
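In display form, the two sufficient conditions just described read as follows (this is my reconstruction from the spoken description, so the precise powers should be checked against the paper):
\[ \exists\,\eta>0:\qquad \big\| H^{-2\eta}\,\nabla^2_x U \big\|_{\infty} < \infty, \]
\[ \mathrm{Ent}_\mu(f^2) \;\le\; C \int \Big( H^{-2\eta}\,|\nabla_x f|^2 + |\nabla_v f|^2 \Big)\,d\mu \qquad \text{(weighted logarithmic Sobolev inequality).} \]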
The boundedness of the weighted Hessian is the first condition; and since you still have the freedom of choosing η, the remaining task is to find conditions under which the weighted logarithmic Sobolev inequality holds. Note that H is roughly of the same order as |∇_x U|^2 under natural growth assumptions, which is why Villani believed he needed an entropic Brascamp-Lieb inequality, something of the same order in the entropy sense; but in fact it is not quite the same thing as the entropic Brascamp-Lieb inequality, and it is not so difficult to prove this weighted inequality. How to prove it? It can be done using a technique developed by the probabilists, the Lyapunov condition technique. Let me give you a very small indication of the Lyapunov approach, since I will not have time for the Wasserstein distance anyway. It is something simple; let me go back to the overdamped Fokker-Planck case. Suppose you can find a function W such that L W ≤ −λ W + b 1_B for some ball B; this just says that your potential is sufficiently confining. Then you have a Poincaré inequality, and in fact the condition is equivalent to it; the implication that matters for us has a three-line proof. The variance Var_mu(f) is always at most ∫ (f − c)^2 dmu for any constant c, which I will choose as the average of f on the ball, c = mu(B)^{-1} ∫_B f dmu, so that the local term below is correctly centered. Using the Lyapunov condition, this is at most (1/λ) ∫ (−LW/W)(f − c)^2 dmu + (b/λ) ∫_B (f − c)^2 dmu. The second term is a local Poincaré inequality on the ball, which always holds if the potential is, say, C^2, for instance by perturbation of the Poincaré inequality for the Lebesgue measure on the ball, with a constant that depends on the ball but is easy to obtain. For the first term, integrating by parts, ∫ (−LW/W)(f − c)^2 dmu = ∫ ∇W·∇((f − c)^2/W) dmu, which gives ∫ [2(f − c)∇f·∇W/W − (f − c)^2 |∇W|^2/W^2] dmu, and if you look at it carefully this is exactly ∫ |∇f|^2 dmu minus ∫ |∇f − (f − c)∇W/W|^2 dmu, hence at most ∫ |∇f|^2 dmu. So you get a very simple proof of the Poincaré inequality, and in fact you can extend it to nearly all the functional inequalities used to prove convergence to equilibrium. In particular, to prove our weighted logarithmic Sobolev inequality, the only thing you do is consider the natural generator associated with the weighted Dirichlet form; it mixes H^{−2η}, derivatives of H in the position variable and the velocity part, so it does not have a very nice form, but it is easy to work with, and then you look for a Lyapunov function for it.
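Here is the three-line argument just described, in display form (this is the standard computation, with no attempt to optimize constants). Assume \(W\ge 1\) and \(LW \le -\lambda W + b\,\mathbf 1_B\), and set \(g=f-c\) with \(c=\mu(B)^{-1}\int_B f\,d\mu\):
\[ \mathrm{Var}_\mu(f) \;\le\; \int g^2\,d\mu \;\le\; \frac1\lambda \int \frac{-LW}{W}\,g^2\,d\mu \;+\; \frac b\lambda \int_B g^2\,d\mu, \]
\[ \int \frac{-LW}{W}\,g^2\,d\mu \;=\; \int \nabla\!\Big(\frac{g^2}{W}\Big)\cdot\nabla W\,d\mu \;=\; \int |\nabla g|^2\,d\mu \;-\; \int \Big|\nabla g - \frac{g}{W}\nabla W\Big|^2 d\mu \;\le\; \int |\nabla g|^2\,d\mu, \]
while the local term \(\int_B g^2\,d\mu\) is controlled by a local Poincar\'e inequality on the ball.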
You only have to find, for this generator L_η, a Lyapunov function W satisfying a somewhat stronger inequality than the one before, because you have to account for the extra force coming from the potential; and then this implies the weighted logarithmic Sobolev inequality by essentially the same technique, resorting to a super-Poincaré type inequality. Once you have this, it is not difficult to find explicit conditions, so let me give you the theorem. Suppose there exists η > 0 such that three conditions hold: first, there exists κ in (0,1) such that the Hessian of U is controlled by κ |∇_x U|^2, which is not very demanding; second, |∇_x U|^2 ≥ c U^{2η+1}; and third, the condition we already found, translated in terms of U alone, namely that U^{−2η} times the Hessian of U is bounded. With all of that, the entropy converges to equilibrium exponentially fast, with a rate that can be made precise and with constants which, I completely agree, are not so good; but you do get something nice. If you look at these conditions, they cover all potentials U(x) = |x|^k with k ≥ 2, and you can even go up to U(x) = exp(a|x|^b) with b < 1, so there are still plenty of potentials for which you can prove exponential decay. What is also nice is that in this way you recover nearly all the results that the probabilists had obtained using Lyapunov conditions.
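For readability, here are the three hypotheses of the theorem in display form, as I reconstruct them from the spoken description (the precise exponents should be checked against the paper): there exist \(\eta>0\), \(\kappa\in(0,1)\) and constants \(c,C>0\) such that
\[ |\nabla^2_x U| \le \kappa\,|\nabla_x U|^2, \qquad |\nabla_x U|^2 \ge c\,U^{\,2\eta+1}, \qquad U^{-2\eta}\,|\nabla^2_x U| \le C, \]
and under these assumptions \(\mathrm{Ent}_\mu(f_t) \le C'\,e^{-\lambda t}\times(\cdots)\) with explicit, if non-optimal, constants; typical admissible examples are \(U(x)=|x|^k\) with \(k\ge 2\).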
If I had more time I would also show you what can be done for the Wasserstein distance with coupling techniques, but I believe it would take too much time, so I prefer to stop here and thank you again. [applause] Thank you very much for this very nice and fast-paced talk. Who has questions, please? Could you use a weak logarithmic Sobolev inequality? If you want to; you would then only get some sub-exponential control, of course, but you would have to prove a weak logarithmic Sobolev inequality, and for that you need something. The same assumptions as in your theorem? No, you would want weaker assumptions if you want to use a weak log-Sobolev inequality; I have not written it down, but it can be done, yes. This Lyapunov technique to prove the Poincaré inequality, does it work in general? Yes; I wrote it here for this Fokker-Planck equation, but it works for every reversible Markov process. In fact it is at bottom a large deviations argument: what you are comparing is related to rate functions of a large deviation principle. And this other condition that allows you to get the log-Sobolev inequality? Yes, that one is really there to make the first one verifiable. But it is almost the same condition, without the indicator of the ball; I missed what H is. H is the Hamiltonian. The difference between the log-Sobolev case and the Poincaré case is that here you need the potential: you have a stronger force of return to equilibrium, more stringent than in the Poincaré case, because this quantity goes to infinity at infinity. This condition with the Hamiltonian, is it specific to diffusions? Yes, for the moment we have written it only for diffusions. So for general Markov processes you would need another one? In fact, for a general Markov process the problem does not come from that part, which is the good one; it comes from the local log-Sobolev or local Poincaré inequality, which is not so easy to write down. For some jump processes it can be done, for certain Lévy processes for example, but writing the local Poincaré or local log-Sobolev inequality for more general processes is not easy. But the Lyapunov part is easy? It is often easy, because it only tells you how you come back from infinity to some bounded set; after that you have to deal with the local part. Any more questions? Then thank you very much again. [applause]
We will present here two different approaches to study the long-time behaviour of the kinetic Langevin equation: 1) hypocoercivity technique for entropic convergence via a new weighted logarithmic Sobolev inequality; 2) Wasserstein convergence via a particular reflection coupling.
10.5446/59171 (DOI)
Okay, so I will discuss some symmetry and symmetry breaking issues for positive solutions of equations like the one on the slide. It is a nonlinear elliptic equation with weights, both in the principal part and in the nonlinearity, and actually it is equivalent to an equation without weights but posed on a manifold, for example on spheres, on compact manifolds, or on cylinders. We will see that the discussion of symmetry and symmetry breaking is very much linked to the use of nonlinear flows and entropies, so it fits well with the theme of the conference. There will be three parts in my presentation. First, an elliptic approach to the problem; there I will speak about two works, one with Dolbeault and Loss and the other with Dolbeault, Loss and Muratori. Then the parabolic approach; this is the paper with Jean and Michael from 2016. And finally I will arrive at the main goal of the talk, which is to say a bit about linearization and how symmetry is related to the spectrum of the linearized problem. So let me start with the elliptic setup; even if it is not the main goal of the talk, it is needed to follow the rest. These equations are the Euler-Lagrange equations associated with a family of Caffarelli-Kohn-Nirenberg inequalities: you control a weighted L^{2p} norm of a function by a weighted L^2 norm of its gradient together with a weighted integral of |w|^{p+1}, with some interpolation exponent θ, and there is a range of parameters for which the inequality holds. This is a family of interpolation inequalities, and the exponent p ranges between 1 and p*, where p* is an explicit number depending on the parameters β and γ. There is a particular case, p = p*, which is simpler to explain, and practically my whole talk is about this simple case: when p = p*, θ equals 1, the interpolation term drops, and the inequality can be written with just the other two terms. These are the classical, critical Caffarelli-Kohn-Nirenberg inequalities. In this case there are three parameters, the exponent q and the weights b and a in the two integrals, with one relation between them, so only two are independent; the inequality holds for b between a and a+1 (excluding b = a in dimension two), with a different from (d−2)/2. Now, everything in the inequality is invariant under rotations, the weights and the operator, so one could think that, if there are extremal functions, they should be radially symmetric. In order to discuss this question of symmetry, let us define C_{a,b} as the best constant in the inequality, and C*_{a,b} as the best constant when we restrict to radially symmetric functions.
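For concreteness, the critical inequality under discussion can be written (in the normalization I am used to, which may differ from the slides by constants) as
\[ \Big( \int_{\mathbb{R}^d} \frac{|w|^{p}}{|x|^{b p}}\,dx \Big)^{2/p} \;\le\; C_{a,b} \int_{\mathbb{R}^d} \frac{|\nabla w|^2}{|x|^{2a}}\,dx, \qquad p = \frac{2d}{d-2+2(b-a)}, \quad a \le b \le a+1,\ \ a \ne \tfrac{d-2}{2}. \]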
By definition, of course, one has C_{a,b} ≥ C*_{a,b}. Now the radial case is completely understood: up to multiplication by a constant and a dilation, the optimal radial functions, the radial extremals for this inequality, are explicit. So the radial optimizers are known and one can compute the value of the best constant C*_{a,b}: when you look only at radial functions, you know everything, the optimizers and the value of the best constant. If it were the case that the extremal functions are always radially symmetric, the problem would be settled. So the question is whether optimality is always achieved by radially symmetric functions, and immediately I can tell you that this is not the case. There were two works, one by Catrina and Wang and another by Felli and Schneider completing it, which proved that there is a set of parameters (a,b), the red zone on the picture, in which the radial extremals are unstable. What happens is that for these values of the parameters the radial extremal, which is of course a global minimizer within the set of radial functions, is not even a local minimizer in the set of all functions: it is a saddle point, so it cannot be a global minimizer. So in this set of parameters there is symmetry breaking: we do not know the minimizers, but in any case they cannot be radial. This was anticipated and partly proved by Catrina and Wang, and completed by Felli and Schneider. Do these results show only instability in the red region, or do they also show stability in the complementary region? Only instability: the red region is exactly the region in which the radial extremals are unstable, and the curve separating the two regions is explicitly characterized in these two papers. What about the opposite direction, what do we know about symmetry? In general, for nonlinear PDEs and variational problems, the typical methods to discuss symmetry are symmetrization, moving planes or sliding methods. Using variants of these methods, there is a green region in which symmetry of the extremal functions has been proved, so in the green zone the extremals are radial. For a ≥ 0 this was proved by Chou and Chu, and in the more difficult case where a is negative but b is positive, in a certain triangle, by Betta, Brock, Mercaldo and Posteraro. But there is a region which is neither green nor red: the red region is here and the green triangle there, so there is a whole zone where one does not know what happens.
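For completeness, the explicit radial optimizer alluded to at the beginning of this discussion can be written, if I reconstruct it correctly, and up to multiplication by a constant and a dilation, with \(a_c := \tfrac{d-2}{2}\) and \(a<a_c\), as
\[ w_\star(x) \;=\; \big( 1 + |x|^{(p-2)(a_c-a)} \big)^{-\frac{2}{p-2}}, \]
which one can check by reducing the radial problem to an ODE on the cylinder via the Emden-Fowler change of variables.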
In that remaining zone the radial extremals are stable, so in principle they could be the minimizers, but the typical methods to prove symmetry do not work either. This question remained open for a number of years; there were partial results by Lin and Wang, by Smets and Willem, then a work we did with Tarantello around 2007-2009, and then we worked on this problem for several years with Jean and Michael, writing a series of papers in which we attacked it with different techniques. The conjecture was all along that the minimizers are radially symmetric outside the Felli-Schneider zone, outside the red zone. Another way of saying this is that the radial minimizers are global minimizers whenever they are stable: if they are local minima, they are global minima. Or yet another way: the linear instability of the radial minimizers is the only possible cause of symmetry breaking. This is a particular problem, but understanding why symmetry holds or fails is important in physical models, because you want to understand which mechanisms can break a symmetry, and we tried to understand all of this in this particular problem. So this was the conjecture, and finally, in 2015, we proved that the answer is yes: whenever the radial minimizers are stable, they are global minimizers. I am going to explain how we proved it, even if this is not really the goal of the talk, to give an idea of the proof, and then I will go to the parabolic setup. So let us define a number α_FS, FS for Felli-Schneider; it is an explicit number and you will see in a moment what it is. The stability zone corresponds to α ≤ α_FS, and this will appear throughout the talk. The theorem says the following: if α ≤ α_FS, that is, if we are not in the red zone but in the stability zone, then optimality is achieved by radial functions. What is the idea of the proof? The idea is very simple, although we needed a number of years to find it: it is based on a simple change of variables, and then on the use of methods coming from nonlinear flows. Sorry, what are these quantities, what is n? I will define them just now; I changed the position of that line on the slide, and it was not a good idea. So what we do is the following: we have our function w, and we perform a change of variables which is a stretching in the radial variable, without touching the angular variable; for every w we define v in this way. Then we have to choose the exponents in the right way: we define n like this, so n depends on all the parameters of the problem, and we also define a pseudo-gradient, which is like a gradient in polar coordinates except that the radial derivative is multiplied by α, and this α comes from the stretching. Yes, the α is inside the definition, in the radial derivative. You are going to see in a moment what the point of all this is.
So we do this, and what is the gain? The gain is that, if you do it in the right way, the Caffarelli-Kohn-Nirenberg inequality, with all its different weights, becomes something much nicer, namely the inequality on the slide. Let us stay with it for a moment: it looks like a Sobolev inequality, because we choose the parameters n, α and so on so that p = 2n/(n−2). The integral of the pseudo-gradient of v squared controls a constant times the L^p norm of v with p = 2n/(n−2), which is the Sobolev exponent in dimension n, and the measure looks like the Lebesgue measure in dimension n written in spherical coordinates. So this really looks like Sobolev. But there are three buts. First, I am speaking about dimension n, and there is no reason for n to be an integer: n is given by a formula in the parameters and can be any real number, so "dimension n" may be a fractional dimension. Second, it is not really the gradient, it is the pseudo-gradient, with the α in front of the radial derivative. And the worst point is that the measure looks like the Lebesgue measure in dimension n, but we are still integrating over R^d. So this inequality scales like a Sobolev inequality, but it is not a Sobolev inequality. Still, since we were looking for a method to attack the problem, we said: this looks like Sobolev, let us try the methods that have been used for the Sobolev inequality. So this very simple change of variables allows us to write the Caffarelli-Kohn-Nirenberg inequality with the same weight in the two terms, and that is the main achievement of the change of variables. And this answers your question, Robert: n is this number here, and the critical value of α is given by (d−1)/(n−1), that is, α_FS² = (d−1)/(n−1). Now some notation: ∇_ω denotes the gradient with respect to the angular variables on the sphere and Δ_ω the Laplace-Beltrami operator on the sphere; I already defined the pseudo-gradient D_α, which is like a gradient but with α in front of the radial derivative; and we can also define a self-adjoint operator L_α, the Laplacian associated with this pseudo-gradient, L_α = −D_α* D_α. Written in polar coordinates it is like the usual Laplacian, except that all the terms involving derivatives with respect to the radius r are multiplied by α². Everything behaves well, in the sense that the fundamental property of this operator is that you can integrate by parts, and it interacts with the pseudo-gradient exactly as the Laplacian does with the gradient.
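To fix notation, the transformed inequality has the following shape (this is my reconstruction, and normalizations may differ from the slides). With the pseudo-gradient and the measure
\[ \mathsf{D}_\alpha v = \Big( \alpha\,\frac{\partial v}{\partial r},\ \frac{1}{r}\,\nabla_\omega v \Big), \qquad d\mu = r^{n-1}\,dr\,d\omega \quad (r>0,\ \omega\in\mathbb{S}^{d-1}), \]
the critical Caffarelli-Kohn-Nirenberg inequality becomes
\[ \int_{\mathbb{R}^d} |\mathsf{D}_\alpha v|^2\,d\mu \;\ge\; \mathsf{C}\,\Big( \int_{\mathbb{R}^d} |v|^{p}\,d\mu \Big)^{2/p}, \qquad p = \frac{2n}{n-2}, \]
and the Felli-Schneider (stability) condition reads \(\alpha \le \alpha_{FS} := \sqrt{\tfrac{d-1}{n-1}}\).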
Okay, now that we have introduced these objects, let us define the following. We had v as the unknown; now set u = v^p, with p = 2n/(n−2), and let us express the integrals appearing in the inequality in terms of the new unknown u. The integral of v^p is just the integral of u, and the gradient term can be written as a generalized Fisher information, I[u] = ∫ u |D_α P|² dμ, where P is the pressure function, proportional to u^{m−1}. So what we want to prove is that a power of ∫ u dμ is controlled by a constant times a power of I[u]: we have rewritten the inequality in terms of the new unknown u. Now the strategy is the following. We introduce a fast diffusion equation, a nonlinear flow, written as ∂_t u = L_α u^m, like a nonlinear heat equation, with m = 1 − 1/n, where n is the possibly non-integer number from before. The strategy is to prove that, in the range of parameters α ≤ α_FS, that is, in the region where the radial extremals are stable, the mass ∫ u dμ stays constant along the flow while the Fisher information decays along the flow; and moreover that if the derivative of the Fisher information along the flow is equal to zero, then the solution is radially symmetric. This is the strategy, and it is what allows us to prove the theorem. Now, the first problem is that this flow will be used only in a formal way, because for the moment nobody knows whether it is well defined; we will see that in the end we do not really use it in the rigorous proof, but it tells us how to do the rigorous proof. So let us imagine that the flow is well defined and has a solution for all times. That the integral of u is conserved along the flow is a very easy computation; what is more complicated is the derivative of the Fisher information along the flow.
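Schematically, and with normalizing constants left implicit, the objects involved are
\[ u = v^p,\qquad m = 1-\tfrac1n,\qquad \mathsf{P} \propto u^{\,m-1} \ \text{(pressure)},\qquad \mathsf{I}[u] = \int u\,|\mathsf{D}_\alpha \mathsf{P}|^2\,d\mu, \]
the flow is the weighted fast diffusion equation
\[ \partial_t u = \mathsf{L}_\alpha u^m, \]
and the scheme of proof is: \(\tfrac{d}{dt}\int u\,d\mu = 0\), \(\tfrac{d}{dt}\mathsf{I}[u(t)] \le 0\) whenever \(\alpha\le\alpha_{FS}\), and \(\tfrac{d}{dt}\mathsf{I}[u(t)]=0\) forces radial symmetry.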
This is a long computation, even formally, and what one can prove, although I do not even state it as a proposition or theorem since we do not know whether the flow exists, is the following: when you differentiate the Fisher information along the flow, you get minus a positive constant times the integral of a quantity K[P] built from the pressure. It is a long, tedious calculation, but K[P] can be written as a sum of squares: a first term which is a square, a second term which is another square, a third and a fourth. All of them are nonnegative except one, which carries a constant whose sign depends on the relative position of α with respect to α_FS. So when you are in the complement of the red zone, when you have stability, α ≤ α_FS, this constant is nonnegative, K[P] is a sum of four nonnegative terms, and the Fisher information is indeed non-increasing along the flow; it is only for α ≤ α_FS that this quantity is nonnegative and the decay holds. To prove this there are a lot of computations, and one bothersome point is that you need many integrations by parts. So besides the difficulty with the flow itself, there is a difficulty with the integrations by parts: due to the presence of the weights you could have singularities at the origin or at infinity, and it could be that you cannot get rid of the boundary terms there. Is that the critical part? Yes, that is the critical point. Assuming the flow is well defined, in this first work with Jean and Michael we actually proved that there is enough regularity to justify the integrations by parts; it was a painful proof, long and technical, not very elegant, but we proved it: you integrate on a large ball minus a small ball around the origin, and then you can get rid of the boundary terms in the integrations by parts. So we proved this, but, as you see, with the flow not known to be well defined and so on, we could not turn it directly into a rigorous proof; it does, however, give a very good hint of what to do. And the real proof is the following: it is an elliptic argument, and we do not really use the flow. We place ourselves in the region where we have stability, we consider p_0, or equivalently u_0, a critical point of the Euler-Lagrange equation corresponding to the Caffarelli-Kohn-Nirenberg inequality we are looking at, and we just differentiate the Fisher information at time t = 0, so we never really use the flow.
At time t = 0, by the chain rule, the derivative of the Fisher information equals I'[u_0] applied to the time derivative of u at t = 0, which is I'[u_0] applied to L_α u_0^m if we plug in the flow. Now this is equal to zero, because u_0 is a critical point, so I'[u_0] = 0. On the other hand, using the computations done before, and forgetting that there was a flow, since the computations are valid for any function, this same quantity equals minus c times the integral of K[p_0]. Since it vanishes, and since in the stability region α ≤ α_FS the quantity K[p_0] is a sum of nonnegative terms, all the terms must vanish individually. Take for example the last one: it says that the angular derivatives of p_0 vanish everywhere, or at least almost everywhere, which means that p_0 does not depend on the angular variables, so the solution is radially symmetric. And you can do even better: other terms also vanish, and integrating the corresponding equations you obtain the precise expression of the radial profile. So this is, in summary, the proof, and what I want to emphasize is that it is an elliptic proof: we really do not use the flow; the flow only tells us which quantities to multiply by and which combination to consider. The motivation comes from the flow, but the flow itself is not used. One more sentence: everything I said so far is about the critical case, the one where θ = 1 and the third term in the inequality drops. If you are in the case where θ is not equal to 1, you have to adapt the proof, because the Fisher information alone does not work: you consider instead the entropy ∫ u^m dμ, you observe that the derivative of the entropy along the flow is (1 − m) times the Fisher information, and then you use Rényi entropies, that is, a suitable power of the entropy, and you prove that they are concave along the flow; with that you can prove the same kind of result. So the proof I described has to be adapted a bit in the general, subcritical case. This part is the work with Jean, Michael and Matteo Muratori.
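In the subcritical case the scheme becomes, schematically,
\[ \mathsf{E}[u] = \int u^m\,d\mu, \qquad \frac{d}{dt}\,\mathsf{E}[u(t)] = (1-m)\,\mathsf{I}[u(t)] \quad \text{(with the normalization of the pressure used in the papers),} \]
and instead of the decay of \(\mathsf{I}\) alone one proves concavity along the flow of a Rényi entropy power, that is, of \(t \mapsto \mathsf{E}[u(t)]^{\sigma}\) for a suitable exponent \(\sigma>1\) depending on \(m\) and \(n\) (I leave the exact value of \(\sigma\) unspecified here).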
So what are the disadvantages, the problems, of this approach? The first one is that we really have painful estimates of the boundary terms in the integrations by parts: you do not see why things work, there is no nice regularity result behind it, so it is a little disappointing. The second disadvantage is that there is no way to obtain improved inequalities: for many inequalities, Sobolev inequalities for example, the literature contains many improvements, so that away from the optimizers you have a remainder term; with the elliptic proof you cannot get any remainder term, any improved inequality. The third point is that in the theorem we proved, symmetry is linked to uniqueness in some sense, because what we are saying is that in the stability zone the only solutions are the radial minimizers, which we have already identified. And what is a little strange is that you use just a local stability result, stability of the radial extremals, and it gives you a global result, a uniqueness result for a nonlinear problem; the global minimizer could a priori be far away from the radial minimizer. So this global result is triggered by a local condition on a linearized problem, and it is a bit strange why a local condition on the linearization gives symmetry and uniqueness for a global nonlinear problem. Looking at these three points, we decided to go to a parabolic setup to try to understand all of this better. So now I will really speak about the main results I want to explain. We decided to use an approach that had been used by a number of people in the case without weights. We have weights, and the weights create many technical problems, but in the case without weights this was worked out by Carrillo and collaborators, Toscani, Dolbeault-Toscani, and others: one looks at these nonlinear equations in self-similar variables. So we define two numbers, μ and a scaling R, and we write our unknown u as a prefactor times g, where g is written in the self-similar variables defined by these formulas. If u satisfies the equation we had before, ∂_t u = L_α u^m, then g satisfies the new equation: the derivative of g with respect to the new time τ is given by an equation of Fokker-Planck type, whose drift involves the pseudo-gradient of q, where q is g^{m−1} minus the corresponding power of a Barenblatt-type profile; this profile is linked to the self-similar Barenblatt solution, which in turn is linked to the radial extremals. So this is the equation in self-similar variables which is equivalent to the fast diffusion flow. Why did we choose this?
Well, we chose this because, in the case without weights, in the works of Carrillo and collaborators, Toscani, Dolbeault-Toscani and so on, it was very useful for the things I am going to explain. Let me just point out that in this framework the range of m is between m_1 and 1, where m_1 = 1 − 1/n corresponds to the critical case. So we look at this equation, and you are going to understand why. What we do is consider the equation on a ball of radius R, with R large, with no-flux boundary conditions on the boundary; we imitate what was done in the case without weights, but in this way we also get rid of the problems at infinity. When you look at the equation on a ball with no-flux boundary conditions, what we compute is the derivative with respect to τ of the Fisher information in self-similar variables; more precisely, we look at the derivative of the Fisher information plus four times the Fisher information, which is the self-similar counterpart of looking, in the original variables, at the derivative of the Fisher information. You can see that this quantity is bounded above by a series of terms that resemble those we wrote a few slides ago, and the nice thing is that there is no boundary term. Why? Because with the no-flux boundary conditions, the boundary terms in the integrations by parts come with the right sign: they are negative, and since we want an upper bound, we can simply drop them. That is the nice feature of working in self-similar variables on a ball with no-flux conditions: the problems at infinity are gone, we do not have to deal with them. So what we prove is: if the function is smooth at the origin, then this quantity is bounded by the expression on the slide, which again is a sum of negative squares plus one term which has the right sign precisely when α ≤ α_FS; the same kind of idea as before, and the conclusion is that the quantity is nonpositive when α ≤ α_FS, that is, in the stability zone for the radial extremals. What is that quantity once more, was it defined before? It is the difference between g^{m−1} and the Barenblatt profile to the power m−1; it is a relative quantity, the power m−1 of the unknown relative to that of the Barenblatt. Okay, so this is equivalent to the computation we did before; the only thing we gained is that now there are no boundary terms at infinity, so we can pass to the limit as R goes to infinity and write this not on the ball but on the whole space.
So one first advantage of doing this is that we got rid of the technical work needed to prove that the boundary terms at infinity do not contribute: here you just drop them, because they have the right sign. A second advantage is the following. If we consider a rescaled Barenblatt-type profile B_⋆ and look at the functional G, which in the case σ = 1 is just the Fisher information in self-similar variables and, for σ different from 1, is the derivative of the Rényi entropy power, then the Caffarelli-Kohn-Nirenberg inequalities are equivalent to the statement that G[B] ≥ G[B_⋆], where B_⋆ is this rescaling of the Barenblatt solution. I do not have time to explain this in detail; just remember that the inequalities can be written in this form, and they hold, good. Now, using the approach by the nonlinear flow in self-similar variables, we can prove the following theorem: a remainder theorem, an improved inequality. The difference G[B] − G[B_⋆] is not only nonnegative, it is actually bounded from below by an explicit quantity: a complicated expression, involving an explicit function h and the angular gradient, but an explicit improved inequality. To state this as a theorem we need to take some precautions. We assume, as before, that v is smooth at the origin, and we start from an initial datum v_0 squeezed between two Barenblatt profiles. The latter is necessary because nobody knows whether these flows are well defined in general, but when you start between two Barenblatts you can prove that the nonlinear flow is well defined. Under these assumptions we obtain improved Caffarelli-Kohn-Nirenberg inequalities, with a positive explicit remainder; and actually this is not the best one can do, there are further terms, but I do not want to write the optimal statement. So this is the second advantage of the parabolic approach: first, no boundary terms at infinity; second, improved inequalities; and third, in the remaining minutes, we are going to understand the link between the local stability of the radial minimizers and the global statement of symmetry and uniqueness for the nonlinear problem, this link between the linearized problem near the radial minimizers and the global result, which for us was a bit of a mystery: we had proved it, but we did not understand why it happens. In order to understand it, let us linearize the flow in self-similar variables around a Barenblatt profile B_α.
So what we do is consider g_ε, a perturbation of the Barenblatt profile written in a convenient form that facilitates the computations, and we choose g_ε so that its mass equals the mass of the rescaled Barenblatt. Inserting g_ε into the equation and letting ε go to zero, we see that f, the function encoding the perturbation, satisfies a linear equation ∂_t f = Λ_α f, where Λ_α is an explicit linear operator. This operator has been studied in a work by Bonforte, Dolbeault, Muratori and Nazaret. They did the following: they defined two scalar products adapted to this framework. The first is an L²-type scalar product with a weight which is the Barenblatt profile to some power; the second is an H¹-type scalar product, the integral of the product of the gradients against the Barenblatt profile. With these scalar products you get the corresponding Hilbert spaces, X, which is like an L² space, and Y, which is like an H¹ space, and one can prove that Y is contained in X. Then, by a Bakry-Émery type computation, they proved that one half of the time derivative of the first scalar product of f with itself equals minus the second one, and they obtained a similar dissipation identity for the H¹-type scalar product, involving the scalar product of f with Λ_α f. They also proved that if you look at the smallest positive eigenvalue λ₁ of the operator Λ_α, with its eigenvalue equation, then the corresponding eigenfunction f₁ has the good taste of belonging not only to X but also to Y, which facilitates the computations. Moreover, one can check that λ₁ ≥ 4 if and only if we are in the stability region: in this notation, being in the complement of the red region is equivalent to λ₁ ≥ 4. They also showed that λ₁ is the first eigenvalue for the other scalar product as well, with the same eigenfunction, and this is a Hardy-Poincaré type inequality. All of this is taken from the paper of Bonforte, Dolbeault, Muratori and Nazaret.
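In formulas, and up to normalizations that I may be misremembering, the two quadratic forms are
\[ \langle f_1,f_2\rangle = \int f_1\,f_2\;B_\alpha^{\,2-m}\,d\mu, \qquad \langle\!\langle f_1,f_2\rangle\!\rangle = \int \mathsf D_\alpha f_1\cdot \mathsf D_\alpha f_2\;B_\alpha\,d\mu; \]
along the linearized flow \(\partial_t f=\Lambda_\alpha f\) one has \(\tfrac12\,\tfrac{d}{dt}\langle f,f\rangle = -\langle\!\langle f,f\rangle\!\rangle\), and the spectral gap statement is the Hardy-Poincaré type inequality
\[ \lambda_1\,\langle f,f\rangle \;\le\; \langle\!\langle f,f\rangle\!\rangle \qquad \text{for admissible (mass-preserving) perturbations } f, \]
with \(\lambda_1\ge 4\) exactly in the stability region \(\alpha\le\alpha_{FS}\).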
And now I come to the last page of my presentation. We looked at the Fisher information in self-similar variables and wrote its derivative with respect to τ as −K[g]. In the region α ≤ α_FS, by the computations done earlier, −K + 4I, which by definition is the derivative of I plus 4I, is nonpositive; in other words, the functional K/I − 4 is nonnegative on the stability region. So consider the ratio K/I between the production of information and the information itself, and look at the infimum of this ratio over all admissible functions; call it C₂. We know that in the stability region this infimum is at least 4. On the other hand, being an infimum, it is bounded above by the limit of the same ratio along any particular family of functions, and we take the family g_ε of perturbations of the Barenblatt. Doing the computation and letting ε go to zero, the ratio converges to a quotient of quadratic forms whose infimum, by what was on the previous slide, is attained at f₁ and equals λ₁. So the infimum of K/I lies between 4 and λ₁. Summarizing, the infimum of this ratio is achieved in the asymptotic regime, as t goes to infinity, near the Barenblatt profile, and it is determined by the spectral gap of the linearized operator. Exactly when λ₁ equals 4 you are on the curve that determines the red zone, the curve separating the symmetry and symmetry-breaking regions. What this says is that the relation between K and I, which governs symmetry or symmetry breaking, is really controlled by the spectral gap: the fact that λ₁ ≥ 4 is what makes K/I ≥ 4, which is what allows our method to prove symmetry. So the symmetry of the optimizers is genuinely linked to the optimality of the spectral gap; here is the link between the spectral gap of the linearized operator and the symmetry of the optimizers of the nonlinear problem. And in the other direction: if you are in the red region, where α > α_FS, then the infimum is at most λ₁, which by the same computation is now strictly less than 4, so we cannot prove symmetry there, and indeed there is symmetry breaking. So symmetry versus symmetry breaking is really governed by whether this eigenvalue is larger or smaller than 4. This is the link, and these computations explain why, for the Caffarelli-Kohn-Nirenberg inequalities, the symmetry and symmetry-breaking regions are exactly determined by the stability, a local property, of the radial optimizers. It looks a bit like a miracle that a local property of a linearized problem gives global uniqueness and symmetry, and this is the reason why the little miracle happens.
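The chain of inequalities underlying this discussion is, schematically,
\[ 4 \;\le\; \mathcal{C}_2 := \inf_{u} \frac{\mathsf{K}[u]}{\mathsf{I}[u]} \;\le\; \lambda_1 \quad (\alpha\le\alpha_{FS}), \qquad\qquad \mathcal{C}_2 \;\le\; \lambda_1 < 4 \quad (\alpha>\alpha_{FS}), \]
where the upper bound is obtained by testing the ratio on the perturbations \(g_\varepsilon\) of the Barenblatt profile and letting \(\varepsilon\to 0\); symmetry can be proved precisely when the spectral gap satisfies \(\lambda_1\ge 4\).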
Thank you very much. Thank you; are there questions? We did not quite see whether one could work directly with these measures; the measure here is on R^d, is there a connection? It could be done, but it would be complicated. Could you use the parabolic flow to see what kind of symmetry breaking occurs, push it into the asymmetric region, let it run and see what happens? Since the eigenvalue is too small there, that should select some kind of optimal profile. We do not know exactly what the minimizers look like; they cannot be obtained by this argument. They are not radially symmetric, but there are partial symmetry results: one knows that there is only one direction, one dimension, of instability, and that the optimal functions depend only on two variables, the radius r and the azimuthal angle; even though there are many variables, the global optimizers depend only on those two. So you have some partial symmetry for the optimizers, and you can compute them numerically without difficulty, but you do not know them explicitly. Any further questions? No? Then let us thank Maria again.
Using a nonlinear parabolic flow, in this talk I will explain why the optimal regions of symmetry and symmetry breaking for the extremals of critical and subcritical Caffarelli-Kohn-Nirenberg inequalities are related to the spectral gap of the linearized problem around the asymptotic Barenblatt solutions. This is a surprising result since it means that a global test yields a global result. The use of the parabolic flow also allows to get improved inequalities with explicit remainder terms.
10.5446/59172 (DOI)
About work that essentially is contained in these three shutters, several of the collaborators are here. So the first one is with Amy Thainat. That actually is one that is not here. On entropy production and the qualities for the cat's walk is the most recent one. But that is building on work that we did with Eric and Michael Ross on a spectral map for the cat's model with art spheres and also some work on anchoring the cows in the cat's model that we did with Jonathan and the rule Michael Ross, Eric Carlin, and Cedric Gillan. OK, so very quickly, I was expecting actually Michael to proceed my talk and talk about the cat's model. So very quickly, what I'm going to talk about is first, what is the cat's walk? So the cat's, you want you to understand the convergence of the convergence at the Boltzmann equation level, and he devised the simplest model at the n-particle level. So he introduced this model in 1956, and he only considered n particles. Each particle has one dimensional velocity, and the state of the system is there to change the random process that I will describe in a short moment. And each time that you have a teach step, you have just two velocities are changed. And the total energy v1, v2, v2 plus square vn square is going to be conserved. We want to model binary conditions. So what is going to be the state space? So for the n-particle system, the total energy n, of course, the state space is going to be the set of all vectors that are on the energy sphere. What is the energy sphere that you can see for rn that has radius square root of n? So in the cat's walk on the sphere, at each step, one piece of pair ij. Now, the model that cats devised, these pair of ij molecules or particles are going to be uniformed. Here, we are going to have in such a way that the collisions, and we are always thinking about just pairwise collisions, and the rates at which the collisions are going to be depending on the energy of the colliding particles. So pick that I will say, the shorty. Pick a pair ij at random. Take an angle that only forming 0 to pi. And then you move from the v1 into vn to v1 into vn. But this v prime i, v prime j is going to be n-th unchanged. So these are the only two-th unchanged. Everything else is the same. And what is going to happen is you'd suffer just counter-plugged place rotations. So v prime i is going to be cosine of that in the i, and v prime j at the same time of z. So the rotation of the arrows is in these coordinate planes. Yeah. So OK, the jump's very quick. So the jumps are right in the Poisson stream for each pair. So each pair of particles actually has a Poisson clock, an independent Poisson clock. So to associate each pair ij, you have this random variable tij that actually is just to expect the waiting time for the particles i and j to collide. And the parameter 1ij is going to be equal to n over n choose 2. 1 plus bi squared plus v squared j to the power gamma. Now gamma is between 0 and 1. So for gamma equal to 1, so for gamma equal to 0, you have actually the original Katz model. So you have just uniform rates. For gamma equal to 1, you have the super-art spheres. For the really physical meaning, art spheres, gamma should be equal to 1 over 2. OK? So OK, so pick a pair. So you just have the velocity z, vij squared, to put it plus vj squared, and just collide. OK? So what is going to be the generator of this random process? OK, this possible course is reversible. 
And if your initial data is for v is f, at time t, your probability density is going to be ft given by d, where l is the generator and is equal to n over n choose 2. You have to sum over all ij for i smaller than j. Here you have the rate. So the rate of collisions depends on the energy of the colliding particles. And fij minus f, the fij, actually, what we are doing is, you just have particles collide to rotate, ever rotate, rotate, rotate. So of course, you see that f is going to provide to the uniform. So now, of course, you can ask, well, why are you considering 1 plus vij squared plus vj squared to the gamma? Why we don't consider directly this one? Because this is relatively one for the odd spheres. Well, we have to consider the same as, said in this paper in 2003, because you want to be sure, you want to study entropy production. And you want to be sure that you have enough entropy production. So at least you have an integration for the Maxwellian case. So it would be nice to get rid of this one. But for the time being, we just have this rate here. Of course, in the work that I did with Eric and Michael, in the context of the odd spheres and convergence of the odd spheres, we are able to just do this. At the entropy level, after now, we are just going to do this one there. So what's going to be the evolution for the continuous time version of this cat's walk? It's going to be exactly what we know as the cat's master equation. So this is an n particle equation. And the connection, remember, what cats really wanted to study was, he wanted to study the convergence to equilibrium at the Boltzmann equation level. But he wants to study that through an n particle model. So that was the main motivation. Now, I will say in a minute, so you have to pass from, you are going to see that the Boltzmann equation is an equation for probability density on R. So you have to connect the two equations. And for that, you need some notion of cows that I will say in a minute. And of course, you see that if you are in the energy sphere, of course, the coordinates are never going to be completely independent. They depend pairwise. But you hope that in large and limited when n converges into infinity, you are going to be, you are going to essentially be independent. So it is exactly because you need Gaussian probability in error on Rn. It's strongly concentrated on the sphere. And as I said, f of t is going to converge to do the forward the cat's master equation. The single particle marginals, of course, are going to tend to the center Maxwellian Gaussian. So what is going to be the notion of cows that gots is going to be defined. You define the following very weak sense. Take mu as a probability measure on R. Now, take a sequence of probability measures on the sphere. You say that this sequence is going to be nucleotic. If we hand converge to plus infinity, you have that for everybody continuous function q in Rk, you have that this mu n is going to converge to this single particle product, single particle marginal product. So essentially, when n goes to infinity, what we have is the k particle marginal looks more and more like a product. And that's the exact of this notion. It's the minimal notion of cows that incubate device in order that you can make a connection between the Boltzmann equation and the cat's equation, the cat's master equation. So in these 1956 papers, it's just so proof of the variation of cows. What it means what? Well, take f0 sigma n and f0 chaotic sequence. This lives in R. 
This lives in R industry. Then take ft, explain it. This is the generator of the random walk. Take this. This is going to be an f-co翼 sequence in the sense that I defined previously. And this f of t actually is a solution of the initial value problem given by the cat's Boltzmann equation. Here you have the rate. In the original model that you consider, this rate was uniform. In the model that you are considering, we are making just, you want to have more physically realistic, if you want, potentials. So of course, this would be the v and w would be the pink collusion of velocities. The v star and w star are going to be the post collusion collision velocities. And you have here. And this is the cat's Boltzmann equation. Now, of course, if you want to prove properties of the solution of the cat's Boltzmann equation, using the analysis of the n particle model, you are going to need stronger notions of chaos than actually have. So we really had the minimal notion of chaos in a very weak sense in order to make the connection with these two equations. Now, of course, you can ask, OK, so it's very nice. So if you have initial data that is chaotic, you know that cat's Boltzmann equation is going to be propagating that. But the next question is, there are really chaotic sequences. Actually, in a paper with Eric, Michael, Jean-Dathana Ruin-Sernic-Lanis, they prove that, yes, you can lift probability densities on R. You can lift them to the sphere. And densities, so you have a family of densities that actually is chaotic. So the theorem says that take care for probability density on R. So take, you know, second moment you go to 1, fourth moment you bow to it. F belongs to Lp for P log of n1. And let mu be equal to Fbdv. Then consider the tensor product mu n as the normalized restriction of the tensorized states, you know, here to the sphere. So you take F, make the tensor product, proceed to the sphere, and you get a chaotic family of probability density. Now, in this case, you have that, this new chaotic sequence for the Katz-Massmann equation. You have a density fn with respect to the uniform measure that is given exactly by this c. So we have propagation of these chaotic sequences. And now we also know that they exist. You can construct that. Now, as I said, in the Katz-Massmann equation, of course, you have rotations averaging. Rotations averaging. So of course, the density is going to converge to the uniform. But what Katz really wanted to know was this rate of convergence. So we propose to study the rate of convergence of f to the uniform in terms of the spectrograph of the generator of the random wall, of this Katz wall. So, OK, define the spectrograph in the usual way. And actually, the L2 distance between the exponential Tlf to 1 is controlled by the spectrograph of the generator in this way that is here. Any conjecture, the following. You conjecture that actually in the limit where n goes to infinity to limit of the spectrograph is going to be larger than 0. OK, this was in 1956. Now, in 2001, I see you got to this, 2001, Alice Jean Rez was able to prove that actually, exactly, the Katz conjecture between no quantitative estimates. And later on in 2003, we're able to show that the exact value for the spectrograph. Now, you have something here that's very important. In order, remember that you want to relate these two equations, you want to relate these two convergences. And of course, you want to have an uniformity in there. So you have that here. 
So later on, so we did for the original Katz model. So later on, Eric, Michael, and Jeff Sharon improved for the three-dimensional model. And later on, Eric, I, and Michael Ross, we proved for the atmosphere collisions in that four are in one-dimensional velocities in three-dimensional velocities. OK. So Katz proposed to study these convergence to the uniform in terms of the spectrograph. But you can say, well, but if you think about the Boltzmann equation, why not the relative entropy? It's also a very good measure for the convergence. OK. And what you want to discuss here is, exactly, look into the relative entropy for the Katz-Baltzmann equation and relate that with the relative entropy for the Katz-Baltzmann equation. OK. How do we relate the two of them? Now, first of all, let's see that everybody knows, but still, what is going to be the relative entropy? So if you take two probability densities on a measure's case, you call the relative entropy of F with respect to G, this quantity that you have here. OK. Now, a very good property is exactly using the Pinsker inequality. Actually, even if the relative entropy is not a metric, is not even symmetric, actually controls the square of the L1 distance between F and G. And that is very important for us. Of course, if you look at the Katz-Baltzmann equation, the equilibrium solutions are going to be the center max variance. And so if you have some initial data, F0, you know that it's going to convert, and let's say, temperature 1, for example, you know that your F is going to convert to the Maxwellian, and that is exactly the same second moment, the first moment. So OK, you know also something else. You know also that since the energy is conserved, the Boltzmann-H theorem implies that the relative entropy with respect to the Maxwellian is monotone decreasing in T. So Churchie and I, in the 70s, he conjectured that for the Boltzmann equation, there would be some constant C larger than 0, the center way that for all solutions with these initial data and finite relative entropy, you satisfy exactly these equations here. So you have the dissipation of the relative entropy with respect to M1 will be smaller and equal to some constant. This constant would just be a constant uniform in that, and times the relative entropy of F times M1. OK. So of course, if this was true, let's say, life was very nice and easy. Because exactly from what I said before, if you had the Churchie and I conjecture, actually you'd get convergence of F to M1 in the album and distance, you'd have exponential convergence. Now, it took a long time. But in 2003, Cedric Villany proved that actually, the conjecture is true, where for the chemical one, for let's say for super arc spheres, but in general, it is not true. However, he was able to show that using that result, the fact that the Churchie and I conjecture was true for gamma equal to 1, he was able to prove non-exponential bonds on the right of the registration for other values of gamma. So improved the follow-up. Improved exactly the DDT of the relative entropy. Let's say the dissipation of the relative entropy with respect to the Vaxwellian is going to be bounded by some constant times this. But you see, if you have the Churchie and I conjecture, you not have an absent there. Now, you have here 1 plus absent. And this gamma goes between 0 and 1. 
So for suitable classes of initial data, I mean for initial data that has lots of moments, what you know is for a gamma smaller than 1, so any physical potential if you want, you have exactly this conjecture here. So that is at the Boltzmann level. So now, you are going to call, instead of writing always minus DDT of H of the relative entropy, so you call it DF. And so you can think about the Churchie and I of the left side. The result of Cedric is exactly in this form here. OK, this is for the Boltzmann equation. So what we want to discuss here is the following. Can we obtain similar result but for the Katzmaster equation? So take care of probability density. And of course, the relative entropy of F with respect to the uniform density is going to be given by this Hn. And first thing to investigate is the relative entropy dissipation and the dynamics generated by the Lm. So sometimes we call it the entropy production. Define it here. And you have for psi this form, you have to compute introduced to generate the guarantees. OK, here are the gamma between 0 and 1. And here are the rotations. Anyway, how many minutes do I have? You still have seven minutes? OK. OK. So we want to get information on how to relate this Hn of F and capital F in the sphere and the entropy dissipation. So what we have, we want to connect the Katzmaster equation with the Katz-Boltzmann equation. Now, in order to do that, we have always to perform the limit where that goes to infinity. So what we'd like to know is a kind of a chessing and a construction for the Katzwalk. That would be exactly of this form. But we would like to have this c gamma not depending on n. Because we have to break the limit when n goes to infinity. And we want to have inequalities of this type uniformly there. OK, in 2003, Cedric introduces the generators that I talked before. And he showed these. Now, you have a problem here. If gamma is equal to 1, everything is nice. And you get 1 over 3 here. But for gamma, smaller than 1, this gives a rate of order n gamma minus 1. And remember, gamma is between 0 and 1. So this is not really very meaningful for all of us. So for the Katzwalk, the intuition that we have is chaotic data with a 1 particle marginal F behaves like the tensor product, the n tensor product, in some cases. And he showed me that paper with Cedric and Jonathan LaRue. We showed that hn of Fn divided by n is essential to the relative entropy of h with respect to the Munch's well. Now, the entropy dissipation here, so dn gamma of Fn divided by n is essentially the entropy dissipation at the Boltzmann level. So if you want to have inequality of the type, what is the value for light stretching inequality, if you want, who would like to have inequality is all deformed here. Because this would be our d gamma. And this would be our relative entropy. So one of the first questions is, can we estimate the dn gamma for gamma small between 0 and 1, not 1? Can we estimate it in terms of dn1 and gamma equal to 1? Because actually, Cedric used the fact that the Churchian eddy conjecture was valid for gamma equal to 1 in order to prove bounds for the gamma smaller than 1, even in the comparison hours. So now one of the questions here is, can we do some similar thing like that? Now, if you look at the structure of the Boltzmann equation, you have products already of the probability density. But if you look at the Katzmaster equation, you have that two-particum marginal F of n is not exactly the problem of the single-particle model. 
It is in the limit. But before taking the limit, it is not. So that presents already a problem. So we can think, well, maybe we should think about different notions of chaos. So we are going to have two inequalities of this type. In one of them, you have some notion of propagation of chaos that you know that is propagated, or some notion of chaos that is propagated, and we know that is propagated. But we have a problem. C depends weakly, but still depends on n. So we cannot really get information for the Katzmaster equation directly. And the second inequality that we're going to have, we have a constant that is independent of n. But we don't know if that another notion of chaos is propagated by the Katzmaster equation. So it's still open, and we still need to do that. So we will see any way that we get results for the Boltzmann equation. So take fn, probability density on the sphere to be symmetric, and define the following. fn is log scalable if there is a constant c larger than 0, independent of n, such that the sup of the absolute value of the log is bounded by c times n. So define this. This actually know that is propagated. Now, what do we get from this? We get that take fn, set of probabilities in the sphere, take fn symmetric, and log scalable with this notion. Now, assume enough moments, and we are able to prove that we have exactly the inequality of this time here. But this would be the epsilon here. But the c, even if it depends within n, it still depends on n. So that's the first inequality. Now, we clearly from there see that we need some stronger notion of co-euticity. So we define the following. Take a symmetric family of probability densities on the sphere, and say that this fn has the log power property of order beta for beta larger than 0. If there is a constant c larger than 0, independent of n, such that you have this here bounded by c to 1 plus beta. So define this notion. If you want, this is going to be like a quantitative notion for the chaotic initial data. And actually, if you think about initial data, there are people stuck for the Boltzmann equation. But this normalized, denserized products actually satisfies this problem. So then, we prove the following. Take fn with this log power property of order beta. If there exists k larger than 1 plus beta, and you have enough moments here, you have exactly the equation of the type that we really wanted. So you have that the relative entropy dissipation per particle is going to be larger than or equal than a constant, and this constant is independent of n times hn of fn over n of 1 plus epsilon. So we have exactly a result at the Katz master equation level. We have an analogous result, a set result. Now, of course, we can say as I said before that we know that the kind of initial data that we construct with the paper in the center in John Attala's room, we know that t is true, and we know that the f from the Boltzmann equation satisfies these. So finally, using this result, even if you don't know that it's propagated, we can say exactly the same thing as Sergic said, or 1 plus epsilon. So we have the following. Take f for probability density in the arm, such that the second moment is equal to 1. Assume that there exists a beta larger than 0, larger than 1 plus 1 over beta, such that you have at least the fourth moment valid. Define that the Fisher information being finite and have this condition that like Sergic has, actually. 
So then we have an explicit constant c that we can compute that depends only on the parameters of the problem in this way that you have here. So gamma here is the interaction k are the moments, and you have exactly analogous equation that Sergic had for the Boltzmann equation. Now, you can say, well, why do you really care to look and get the same type of result as Sergic got in 2003? But of course, the reason that we really want to know is we want to have, so first of all, what we see is at the Katz-Massner equation level, as a corollary of our results at the Katz master, at the Katz level, we are able to get the best endropic convergence that we know from the Boltzmann equation. So now the question that we are working on is, well, can we propagate that type of by the Katz-Massner equation? We know that we don't have problem for the Katz-Massner equation. But can we propagate that notion of cause? Is it propagated for the Katz-Massner equation? That's one question. Can we bound, like Sergic, can we bound at the Katz level the entropy acceleration, the entropy relative to dissipation for gamma smaller than 1 with gamma equal to 1 that we have, we know that the Schachinian project is true, in that case, in our case. So that leads the question that maybe we need to have better notions of cause. And in fact, probably Katz just defined a notion of cause that was good enough for him to rewrite the Katz-Massner equation and the rate of convergence of the Katz-Massner equation to the Katz-Massner equation. That was his main motivation. OK, so that's it. Plenty of open questions. Thank you very much. We have time for one, two questions, please. Yeah, if you already have the think of Delaney plus your results uniform in N, and if you've got something about the propagation of the control of the difference between the final measure for the sphere and for the nonlinear one, you will also get directly some propagation of the first property, no? Control the propagation of the first. I don't know. We definitely need to have a refund, because the main problem here is, if you look, let's have a look at the Balsman equation. OK, at the Balsman equation, you have this structure of f of v times f of w. So you have already a product there. Now, if you look at the Katz-Massner equation, you don't have that there, because the single two particle margin is going to depend on the one particle margin, and you don't have this exact product. So probably what we really need is to have a stronger property, or we'll be able to prove that, in fact, this new notion, that is, it gives more of a quantitative information about the initial data, the alkylotic it is, maybe if we can prove that actually that is propagated by the Katz-Massner equation. In that case, we have, you know, that's good. But these are open questions that we have. And there is still this question about that we are considering that the rate at which the collisions happen is given by 1 plus vi squared plus hgs squared to the data. We very much would like to avoid the 1, OK, but talk to now. We will not be able to avoid the 1. And what we, Derek and Michael about start spares me, we struggle a lot to get to the 1. It's to get out of the 1. I still don't understand. Yes, he's still quite complete. I tried to find some website and went to the 0. Sorry? They gave Simon n plus vi, and so I also do get rid of this one by. No. No, it's not so easy. No, it's still easy to do. 
So as Michael was saying, still even for the case of the atmosphere, the case that we did already quite some time ago, it's still very complicated. And maybe you really need to have enough entropy production to get into equilibrium. That makes sense also. So OK, that's it. OK, thank you very much. Thank you very much.
We investigate new functional inequalities for the well-known Kac's Walk, and largely resolve the 'Almost' Cercignani Conjecture on the sphere. A new notion of chaoticity plays an essential role. The results we obtain validate Kac's suggestion that functional inequalities for the Kac walk could be used to quantify the rate of approach to equilibrium for the Kac-Boltzmann equation.
10.5446/59174 (DOI)
An analysis to invite me to this amazing place. OK, so I have a point up, but I don't really have a way to sort of click through my talk anyway. It doesn't really matter. Let me give you a brief introduction of what I'm talking about. And I try to address some keywords of the title of this workshop in the first slide. So you find the word application, which I'll just start with to give you a motivation of why I'm going to look at what I'm going to look at. Then one of the key properties is going to be the entropy, which is in these kind of systems. And this is, of course, not the most general thing which you can think of nonlinear reaction diffusion systems. So I'm not claiming that there's a theory for everything, which would be impossible, just think of Peluzov-Zapodinsky. You can create all kind of dynamics you even can think of with Peluzov-Zapodinsky. So this is way beyond the scope of this talk to try to explain all of that. Geometry, basically the geometry I try to understand better, I think we should be able to address, better is still a very nonconvex one which comes out of the entropy dissipation which you have with lots of nonlinear reaction. So in that sense, it's a very literal geometry of a landscape of entropy dissipation. And the consequence of we are going to find is this sort of you can prove exponential convergence to equilibrium for a pretty large class of solutions. And I'm also going to point out that there are some structural properties which don't really have a very proper name. We usually call them interactive fusion effect or something like this, which help the analysis and the structure a lot. So I want to advertise that this is in the system. And then if something is right, it means it's dangerous and indeed this boundary equilibrium I'm going to show you there are really, really dangerous objects. They're bite. In the end, I also say that what we do and the methods we've learned to sort of apply doesn't really restrict itself again to what is called a complex balance system, but it goes beyond with a sort of a model for amyloids. So what do we have? Complex balance is actually a very old term. What you see here would be a kind of a cycle complex balance version, which is already mentioned by Boltzmann in his derivation of the Boltzmann equation, because maybe that's necessary to understand cycles of collisions and so forth. And in terms of direction of various substances, it means that all of the substances have to talk to one another in order to agree to an equilibrium. Actually, that this ring, a partly reversible part of the irreversible ring of species forms a unique positive equilibrium with something which is, if you look at it, not so obvious at all, but it is true. In fact, here this is a model which is more than just complex balance. It also has some more features. It has a volume surface structure, which appears often in biology. You have here basically a certain protein which lives in the boundary, well, boundary here, on the volume here, and so forth. And there is a cycle of reaction, which even has a meaning. Meaning would be it's part of a machinery which is responsible for asthmatic cell division in certain cells. But I'm not going to discuss the biology behind that. I'm just clearly pointing out that what is happening with this sort of circle. OK? And before I go into the analysis, in fact, I try to avoid as many equations as possible. So I just try to motivate things by pictures. 
Let me see what you will be faced with in this idea of a complex balance system. So here you have the plot of one of the concentration of the species at the boundary of a cell. The cell should be the circle here, which is not very nicely visible, but anyway. And you have two pictures, one of them features in the model surface diffusion, and the other one is not. Actually, that's a biological question. The biologists were not sure if surface diffusion for this protein plays a role or not. So you do a model for both options and see what it happens. OK? And on the left-hand side, what you see here, the front of the cell here, that's the active region where this protein is actually going to be phosphorylated. So it's phosphorylized with a kinase, and so it disappears basically from the concentration here. The same thing is true here. What you see here with the case of surface diffusion, you have a certain sort of smooth profile which leads into this active region, and that's what you expect if you have a diffusion on the surface. On the right-hand side, there is no surface diffusion at all. You have a discontinuous rate, which is sort of located at that part. But still, you see that there is a sort of even steep, but still smooth profile. OK? So why is that possible? That you don't have a jump here. And the contractures actually, that you have sort of an indirect diffusive effect. And you can even prove this in terms of a function in the quality which you're never going to write down for you. OK? But what you see here is a high-resolution numerical picture of what's really going to happen with, and the indirect diffusion effect you see here is that, well, there's a certain probability of close to the active region that something on the surface detaches into the volume. And there it can diffuse again. And then it attaches to the boundary again where there is now, it's now in the active region. So it's an indirect way. And it needs that there is a sort of a kind of reversible action. It actually doesn't really need a strict reversibility. It's enough to have something which is called a weak reversibility somehow that you form a circle of things. And you have diffusion in another compartment. OK? And that is basically that indirect diffusion effect in terms of that system here, which I'm never going to define more precisely, but which is a tremendous useful mechanism. It means that for the analysis of large-time behavior or existence or whatever, what we want to do for global solutions, we really don't care if there is a diffusion coefficient present or not. There is no difference anymore between a PDE or the E-model and the pure PDE model in the analysis. So the other thing is just to keep you a bit entertained in terms of what does it mean that all species need to agree in an equilibrium. So this is an equilibrium configuration. And the left-hand side, it looks like this. Actually, this is sort of the volume concentration of this protein LGL. And you see it has a sort of a nice hump. And then the system without surface diffusion, you have no hump at all. And this first time you plot these pictures, and actually it happened to me, I thought, OK, I've reversed the picture. That should be without a boundary diffusion. That should be with diffusion. Because usually, your intuition is, when you add diffusion to something, it makes things flatter and smoother, right? But here, the opposite is the case. And this is indeed true. 
And the reason is because when you have boundary diffusion, you create through this heterogeneity of the boundary conditions a lateral flow along the boundary which feeds back into the domain and creates this hump inside the domain. And without surface diffusion, there is no lateral flow around the boundary. And therefore, the thing looks like that. And the reason why this is possible is because all of the species feed into another. So it's not just an agreement between species. There's a circle of concentrations. And that's why you have this kind of lateral flow possible, that there's a machinery, even a stationary state, which is some sort of a constant turnover of material between volume and surface. Actually, this conjecture, you can really prove. But looking at the free disaster, basically, of this circle, and you see that here, this is really the hump in the interior is created through a high concentration at the boundary. And then it diffuses inside. So it also has to do with the shape of the domain and so forth. So you can see that the behavior which you create is very rich. Going further would be a possible. There's a current sort of project, which is about the policies. The policies is basically the process of why you have energy to survive your night. So it's always you need to create fatty acids. Fatty acids are stored in what's called glycerides in fat cells and they're digested. And you always have a formation process and a deformation process. When you eat, you want to build up your triglycerides. When you basically don't eat, then you consume them in order for you to have energy. So without mentioning further, let's go to the mathematical definition. So for a nonlinear reaction network, you have a certain amount of substances which you let react. Complexes are basically the left or the right hand side of a chemical reaction. So when you have this kind of substance plus this kind of substance forms something else, then some of these two substances would form a complex and it's defined by the corresponding so-called stoichiometric coefficient, which are the number counter coefficients, how many molecules you need in order to do something. Then reactions are just the set of reactions which are happening and the way how you model now a reaction rate would be in this talk always the so-called mass action laws or the concentration product of all of the sources which you put in corresponding to the number of the stoichiometric coefficients, that is the reaction rate. This is also then multiplied with a reaction rate constant, how fast this reaction works. And if you sum all of the reaction up, you get the reaction vector, which here sort of is a vector in the stoichiometric coefficients from the sources minus the input complexes. So once you have that, you can write down something which can delay this method if it isbin Sharvesh. Can you read anything? Good. Tell them. Oh, it's a better install. That's a problem. That's a problem. That's a problem. Now make it 4-4. It's connected. It's black ink. That's your profile. Xiao-Yang, I'm going to start thegel- I'm not sure if you can... Okay, so... Okay, I got stuck with defining a general type of Ratch diffusion network. So, here basically you have like a diagl matrix of diffusion coefficients. That's the simplest to look at. And here you have the Ratch array. And boundary conditions for these PDEs usually that you confine everything in one domain and you're not talking about chemical reactions or whatever. 
Let me now just recall what complex ballads means in the concepts of chemical reactions. It actually goes back to 75, projection and horn, where for the context of all these, so just purely network of reactions and how they show that if you have something like a complex ballads equilibrium, all equilibria have to be complex ballads. And for each complex ballad system there exists one unique positive equilibrium, which you find actually that it balances the total inflow and outflow of all of the complexes in the system. So here on the left-hand side basically you fix a certain stoichiometric coefficient vector. You go through all reactions, basically we have this complex as a source. You add up the flux which comes out of this reaction and you say that this has to balance basically all of the other reactions where this is basically a target. And so there is no net change of species anymore in equilibrium. So the reactions don't stop, it's just that they don't lead to any net change anymore in the configuration of the equilibrium. So that's what is called complex ballads. And the point where we look at it is because it features still the free energy as an entropy function. So this is the most channeled class of reactions where you have a generic entropy functional beyond detailed balance. And you can even calculate explicitly the entropy dissipation which has a certain form. It actually is not trivial to calculate it compared to detailed balance. Here you have to manipulate furfa and do smarter things, but once you got the trick you understood it and it's done. Okay, let me start now with... Hey, do you think it's a little too much? I'm not sure if there are some forces which don't want me to reveal our great results, but I will insist. So the first thing which we can do is basically the most general result we can hope for in the moment. Take yourself a complex balance system and we are talking, it doesn't matter how many species you have, it could be huge, right? Make a huge assumption which is that there are no boundary equilibrium, and I will come back to that. Take any kind of possible solution you have which is only renormalized solutions. What is the point of here? The point is that if you take nonlinear right-hand sides as they are here in our system, there is no channel existence theory even of weak solutions. In higher space dimensions the smoothing of the heat equation in individual species is not enough in general to guarantee that your reaction rate is even something like your L1 function. So therefore you have to pass to renormalized solutions which were done in the thesis of Julian Fisher in 2015. And this gives you a global existence theory. But for general nonlinear reaction diffusion systems there is no way to define weak solutions which are global in time. For short times you can do everything you want because we have lip sheets continuous right-hand side, but the problem is to pass how do global in time solutions. And what we can do is that this is a very weak concept solution. Nevertheless, all of them converge exponentially to the equilibrium L1. So for any system you can think of. Okay, they prove itself. First of all we try to do it via the approximation of renormalized solutions. 
Then Fisher actually made a little bit of an extension and showed that all renormalized solutions in his sense of definition conserve the mass and not only the chemical masses, but also if you were to look at let's say semiconductor model charges where you have the sum of species with opposite sign. And you can actually see that the sum of species is rigorously conserved by renormalized solutions and that they're all satisfied kind of a weak version of the entropy dissipation law which is just enough to apply our methods to have exponential convergence. The ingredient is using the entropy method which has been mentioned before, so let me read very briefly here. So you have some sort of a convex entropy functional which gets produced. So you're using that you have a setup where entropy to action equals to zero allows you to identify uniquely an entropy minimizing state. So I'm using not the physical but the mathematical entropy always. This identification process crucially depends on the knowledge of all of the involved conservation laws. Otherwise you cannot do this uniquely. But then you want to translate this statement into a function inequality which measures the entropy to production and also a function of the relative entropy to equilibrium. And this way of measuring should be in a generic way that it of course is zero at zero but otherwise positive. And depending on the shape of the profile of this function where I measure the distance with you can conclude exponential convergence or not. And here we will always have exponential convergence at the end of the day. And if convergence in relative entropy is not to your liking, you can translate it with some G. Schaubach-Pinska types of inequality into L-Born convergence. And we have seen this with Anton yesterday for instance, right? The advantage of this approach is that it's very robust. The function inequalities you derive you actually can reuse in more general models. So you build up a sort of a machinery function inequalities. They usually tend to give you global results and you can aim for explicit, sometimes even optimal constants. So there's a huge theory of this. It goes back to non-linear diffusion equations we have seen. Arnold Makovic-Joskani-Untereiter for instance, okay? Or Carillo-Jürgen-Makovic-Joskani-Untereiter for instance, right? We have heard about Deville-Let-Villainy and also the Fokker-Plancky-Kinetic equations for inhomogeneous kinetic equations and so forth. And you can also apply for reaction diffusion systems which you would think originally should be a much easier functional setting. And this is of course true, right? I'm not claiming that Boltzmann equation is as hard as reaction diffusion systems. But the problem for reaction diffusion systems is that they usually have no maximum principle, okay? So please don't think of the heat equation, right? Things here are much harder. And then it goes back to actually some very early works of Kröger in Berlin. At three, I already considered here some complex balance of the ease and so forth and so forth. And then we build up a certain machinery which you only need to know about that it doesn't allow you to do Bacchier-Emmery, okay? So people have tried, also Daniel had tried, even with computer force, to try to make Bacchier-Emmery work for reaction diffusion systems. You create such a huge amount of terms that everybody got lost. So I got frustrated during my PhD. Daniel basically also didn't manage so that doesn't work. 
Nevertheless, we have a big machinery how to do it, which is tedious, technical. In the end, you really understand what you do if you do it, but I'm not going to present it to you. I'm just going to say what we can do. What we can do is, for instance, if there is no boundary equilibrium, then we can show that there is always such a sort of exponential entropy and entropy dissipation estimate. So that's essentially the sense that the way how you measure the relative entropy is just with a constant, which by Gromov Lemmas gives you exponential convergence to equilibrium. So we are always in the setting of Giaciniana's conjecture. To prove this inequality, there are different kind of approaches. One, the first, most general approach was actually proposed by Mielke, Markovic and Haskovets. What's the script? That should be a P now, that's the entropy production. It's a typo. Actually, because Alex Mielke is very strict with his thermodynamics and he insists that the entropy dissipation here is just acceptable sin, but it's a sin because entropy gets produced. If you talk to people from thermodynamics and you say entropy is dissipated, they will kill you. Alex doesn't kill you, he just points out very neatly that you should always say entropy production. And it's only in the case that here our entropy function is a free energy that you could say it's dissipated because energy is the thing which gets dissipated while entropy gets produced. So I learned that a little bit, at least. Anyway, so this is the entropy production and you can estimate it in terms of the relative entropy for channel systems using a kind of convex occasion argument which is very elegant, but unfortunately not very explicit. So this goes back to the initial beginning of the talk, basically. We have said my geometry is a highly non-convex entropy dissipation landscape. And one way to completely bypass that is if you convexify it. The price that you have to pay is that you basically, for channel systems, you will be having no idea what is really the explicit way of getting a constant. What do you convexify? Functional. Yes, so you convexify actually a shifted version of the direction term of the entropy production. You take some part of it with two bit chosen constant between 0 and 1. And then you look at the entropy to function term which, let me go back, looks something like this. So this is a sum of sort of non-convex functions. And then you convexify a certain partial of that in combination with estimates on the fishier information. So it's not something which is very deep, but it's also not something which I would like to explain in a talk in detail because it's a nice trick and you should look at it. So on the other hand, sort of with co-workers like Laurent Demilet or Bauer Tang, basically we are able to do a different way of proving the same things which have advantages and disadvantages. Basically what we need to use is explicit structure of the conservation laws. We know that we need that in order to even be able to close the argument. And then we get explicit estimates. And so this was also first done for detailed balance systems like here. The argument of Milke first was only for detailed balance, but you can lift it to complex balance. There's no problem basically. And you can also do it for different things. And in the best version of what's possible now is actually that this infinite dimensional function in equality you can reduce to understanding a finite dimensional inequality. 
So you can kick off the infinite dimensions entirely. And all you have to do is understand something which looks like this. So here you have a vector of what you could imagine the average states. So you average a space in the opportunity and it's like in that sense inequality which could relate to an ODE system. And if you have such a constant, so if you can do that, right, out of the existence of such a constant here you get the existence of this function or infinite dimensional function in equality. So that's a neat trick, but it doesn't make the problem easier to try to get the explicit version of the constant. Okay, for general, what's the square root of yr? So yr is the stoichiometric coefficients, right? So this is a vector which is defined basically, this is not where it is, this should be bold, and so this is a vector of coefficients which basically here you do the products of corresponding to the individual stoichiometric coefficients. This is a product defined accordingly similar to what is here. Like here you have also sort of a short version for a product. And so you compare some here things. So here you have basically the difference between certain products of all directions and here you have basically the difference of the individual species to its equilibrium value already. So in some sense you have to translate the chemical balance conditions. So this is defining the chemical equilibrium in terms of a distance to the equilibrium. And basically on a level which is a finite dimensional one which doesn't make it easier to really understand what's going on here because it's still the core of the non-convexity, you could say. Okay, so this is the good story. And then I mentioned that there is a dangerous thing which is called boundary equilibria. So I start with a simple reaction of a circle of three species and the boundary equilibrium is that equilibrium state where basically some of the species are really exactly zero, not positive anymore. And this just can happen. Okay, this is always possible. It's actually quite likely to find in a nonlinear system that you have boundary equilibria. So this is not a pathological case. This is a real case. The problem is that at this boundary equilibrium the entropy dissipation is zero. But of course the distance to the positive equilibrium in relative entropy is positive which means there's no way of having an entropy in the quality of this shape because here you have zero and here you have something positive. Nevertheless, chemically you expect that you already converge to this positive equilibrium state. So our approach in order to get closer to understanding a structure of how you can bypass this is actually that you're written, you're entropy dissipation estimate. You don't look it as a functional inequality anymore but you have to look it as an inequality along the trajectories of solutions. Okay, so you can't forget that you have individual solutions to look at. And then you aim for an inequality which now might depend with the flow of the species on time and for which you know that if you are close to the boundary equilibrium that constant will degenerate to zero. But if you still have along trajectories all together a constant which integrates up to infinity then you can do the same global argument and see that the relative entropy to the positive equilibrium state goes to zero. So in some sense this allows you to show an instability of the boundary equilibrium that you really go away from the boundary equilibrium. 
And that's the real hard part. You don't know how fast it is but you will go away. Once you're gone away then you can basically restrict your state space and use our methods to show that there's again exponential conversions to the positive equilibrium. There's some boundary equilibrium system where the very reason that there is a boundary equilibrium also gives you a pre-estimates like this lower bound here which allows you to deduce this program and really do it rigorously. Like for this system we can do basically what they just said before. For general systems, and I mean really now, arbitrary complex balance systems as complicated as they can be, right? You have to assume that there is such a constant. Okay? And if you assume such a constant in this finite dimension inequality then you know that you will converge to the positive equilibrium. So of course this is a big assumption but it's also a very hard assumption to do in general because of something which is called the global attractor conjecture. Okay? So since the 70s it has been always postulated that what chemists see is true in the sense that any complex balance system will go to this positive equilibrium. Right? However, not even for all the E-systems there is now a verified or accepted proof. There has been a proof proposed by George Grasio in 2015 on the archive. There have been conferences organized just to try to agree if the proof is correct. It uses methods about whatever sort of inclusions which has nothing to do with like a qualitative or quantitative approach to do with entropy structure or anything. So it's a completely different world and so far I haven't heard any news if it's finally that a preprint.published or is accepted as a valid proof or whatever. It's definitely a very hard problem. Okay? Even for all these. But you could now get very cheeky and say, okay but you if you'd used your PDE problem to a finite dimension on all the E-structure problem so presuming that you know that is true there should actually be always such a constant. Okay? So why don't you use it and lift everything up to a PDE level. The problem is that you know that actually is a proof of concept which you can really go through. The problem is that the proof of concept is based that you can compare the evolution of an ODE system to the evolution of the averages of PDE concentrations. And that is at least as hard problem as a standard PDE problem itself. Because for the difference of all these solutions to average PDE solution there is not even an entropy. There is nothing which helps you, right? So basically if you don't understand the PDE problem you can't understand the difference and then you can't understand the error you make by using the constant of an ODE system in that thing and you don't know about the global conjecture, injector for PDEs. But in some sense the setup is there that you could think that you should show it. If you show it for ODE's one day maybe you'll be able to show it for at least large classes of PDEs. If you get bored for the rest of this workshop I challenge you to look at this simple system of complex balance network and try to understand in a good way why the boundary equilibrium is stable. So Laurent and Paul and I we tried many hours we failed so far. So even in very simple systems the question of why boundary equilibrium is stable can be really, really tricky to quantify. Okay so now a question to the chair. What, how much time do you give me despite all of the technical? Three more minutes. 
Okay so one thing which I want to say is we can do a very similar theory also now for non-linear diffusion systems. Once you develop a nice enough existence theory we need L infinity bounds for the concentrations and then we can also prove convergence to equilibrium let's say for such a chain of reactions of a single reversible. By that mysterious indirect diffusion effect the point is that when you have non-linear diffusion then one possible generalized version of logarithmic soap relief which is part of this change of estimates which I put in has here a term which is proportional to the average of the species and you can see that this degenerates so basically whenever the concentration is close to zero you use your diffusive information. And this is exactly what the indirect diffusion effect can do for you if there should be a species which somehow stops diffusing then it will cure that with this sort of circle of reversible things. And this is what we can prove here actually it will lead to a functional setting where you don't get directly exponential convergence but you first get some sort of algebraic convergence because you have to deal with some slowly growing a pre-estimate so you have the L infinity norm of the solution inside a logarithm and that is what you have to use in your Gromwell argument and so forth. But once you have algebraic convergence you get global in time a pre-rebound and you get immediately again exponential convergence so it's like a technical in-between step. And this indirect diffusion effect even refills of our papers have asked that what has it to do with hypercorrosivity and in my opinion it's not as bad as hypercorrosivity or not as challenging let's say it's like you could say it's a corrosive version of hypercorrosivity because we don't need to play with different norms or measuring in different norms. We can take just the standard norms which are in the system and add function and the qualities in order to recover full corrosive basically statement. So in some sense it is really related it really means that the mixing of the species allows to compensate for degeneracy in the spatial transport but it's not as let's say tricky to, well at least in my opinion because you know maybe it seems easier to make. It's not as tricky to work out. Well the last thing I wanted to do is okay there is some certain diseases which, well I'll skip that. So thank you very much for your resilient talk. Any questions? How would you estimate this constant H1 in particular cases? I know in some simple cases you can estimate it. How do you do? Basically once you have an explicit form of the conservation law then there is some core estimates which basically require you to put the information of the conservation law in. So the crucial point is really that for a general system we are not aware of any structural way of identifying conservation laws. But once they are explicitly known then you can play with them and then basically you can always work out a way and this doesn't need to be a simple system. This can be now very complicated system like with enzyme kinetics or whatever. You can always work a way out to use the information of the conservation law. But for a general system where your conservation laws is just a statement like there exists a matrix for which you have that sort of thing, you don't know how to quantify it. That's the problem. I asked because in one particular case of this it would be linear systems. Yeah linear systems we can do entirely. 
But that is a Poincaré inequality, essentially a Poincaré inequality. So you are giving an estimate of the Poincaré constant. No, unfortunately that is not the case. Poincaré would basically concern only the diffusion, but here you always have reaction and diffusion mixing. The constant that appears in the finite dimensional inequality, so it's like a finite dimensional one? No, that is also not true. Even for linear systems you can see there are two kinds of modes: the ones which are diffusion dominated, where it is basically like an ODE system, and the other ones which are not diffusion dominated. And then the critical eigenvalue has nothing to do with the Poincaré inequality; it is a mixture of stoichiometric coefficients plus eigenvalues of the Laplacian, which are the ones in Poincaré. So of course Poincaré is part of it, and diffusion is part of it. No, no, I mean the discrete Poincaré inequality. But okay, we will stop there. I just think that this H1 is usually very hard to estimate. Yeah, it is; I'm not saying anything else here. More questions? After you mentioned it, I had a little look at the reference on the arXiv. Yeah. So it's not that long, it's 41 pages, and it has been around for maybe three years. Yeah, yeah. Do you have a percentage, what is your percentage confidence in its correctness? Okay. I'm basically saying something about a kind of technique I never used myself, so that's the premise. I think there must be something to it; I think the person who did it knows the subject very, very well. And if there were a serious problem, I think people would have made an eager effort to point it out. So I expect that at some point it will be accepted. But this is as a lay person in a field whose methods are really not mine. Are there further questions? Okay, let's thank him again. On to the next speaker.
We prove exponential convergence to equilibrium for renormalised solutions to general complex balanced reaction-diffusion systems without boundary equilibria, and even for systems with boundary equilibria provided a finite dimensional inequality holds along solution trajectories. Our proofs are based on the entropy method and represent the most general results on convergence to equilibrium for complex balanced RD systems currently available.
10.5446/59175 (DOI)
This gives monotonicity. A corollary of this fact is that the positivity set of u is increasing in time. We prove that t^{1/(m-1)} u(t,x) is increasing in time. This function has a limit, and actually this limit is the function S that solves the elliptic problem. And a consequence of this, because t^{1/(m-1)} u(t,x) is monotonically increasing to S(x), is that it is always less than or equal to S(x), which means that u(t,x) is bounded by S(x) t^{-1/(m-1)} for every x and for every t. Now the third fact is about S: how does S behave? S(x) behaves like the distance to the boundary to the power 1/m. So near the boundary, let's say this is Omega, my S looks like a concave power of the distance. And the fourth, the last property, is about the boundary behavior of u.
So, for the moment, this is not telling me anything about the boundary behavior of u, because a priori, just from this, my function u (so if this is S) could a priori look something like this: it is converging, but we don't know what is happening at the boundary. So a priori that could happen. You're doing this on a bounded domain, and you're imposing Dirichlet conditions on the boundary, or what? Yes, thanks: we have a bounded Omega, and the boundary datum is zero. Yes. So then I have: u(t,x) t^{1/(m-1)} divided by S(x), minus 1, in the sup norm, goes to 0. And this last property tells me that, near the boundary, for every fixed time, u divided by S is uniformly bounded away from 0 and infinity, which means that the previous picture is not the right one: near the boundary my solution really looks like this, with the same power. So this is a rather complete picture of the behavior. A corollary is that u(t,x) is comparable to the distance to the boundary to the power 1/m, for every fixed large time, up to constants. So for every fixed large time, u behaves like that power of the distance. OK, so this is the theory in the local case. And essentially the question is what happens if I replace my Laplacian with a nonlocal version of the Laplacian. Now, before doing that, we need to discuss what the fractional Laplacian is in a domain. As we will see, there are three possible definitions, and the answer to this PDE question will depend on the definition I use. So let us go nonlocal now. Essentially what we want to do is to replace minus the Laplacian with minus the Laplacian to the power s: this is the fractional Laplacian, and I am going to give three definitions. The first one is the so-called restricted fractional Laplacian. So, how do we define the fractional Laplacian of a function on Omega? Sorry? I will give three definitions of the fractional Laplacian. Three definitions? Yeah. OK. So, the first definition, the restricted fractional Laplacian, is the following. I have a function u defined in Omega, and let's say for the moment that u on the boundary is zero; it's a nice function which vanishes at the boundary. What I do is extend u to be identically zero in R^n minus Omega. OK, I just set it to zero there. And then I define the fractional Laplacian of u at a point using the standard definition of the fractional Laplacian in R^n with a kernel: (-Delta)^s u(x) is the integral over R^n of u(x) minus u(z), divided by |x - z|^{n+2s}, dz. What I wrote now is the classical definition of the fractional Laplacian, where s is a power in (0,1); this is the classical definition of the fractional Laplacian on R^n. OK. Then there is the so-called censored one. You need a principal value here to give this a precise meaning. Yeah, of course, everything is in the principal value sense. Sorry. Thank you. So the second definition is the censored one, where I define the censored fractional Laplacian (-Delta)^s u(x) by the same integral, but the integral is only over Omega now. So I don't need to extend u outside; I just restrict the integral to Omega.
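As a rough numerical illustration of the first (restricted) definition, here is a crude 1D quadrature; the normalising constant in front of the singular integral is omitted, and the choice of test function, truncation radius and grid size are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def restricted_frac_lap(u, x, s, R=100.0, n=400000):
    """Crude quadrature for the 1D restricted fractional Laplacian, up to the
    normalising constant C(1, s):
        (-Delta)^s u(x) ~ int_0^infty (2u(x) - u(x+r) - u(x-r)) / r^(1+2s) dr.
    The function u must already be extended by zero outside the domain.
    The integral is computed by the midpoint rule on (0, R]; beyond R the
    terms u(x +/- r) vanish for this datum, so the tail is added analytically."""
    r = (np.arange(n) + 0.5) * (R / n)                 # midpoints, avoids r = 0
    integrand = (2 * u(x) - u(x + r) - u(x - r)) / r ** (1 + 2 * s)
    tail = 2 * u(x) * R ** (-2 * s) / (2 * s)          # int_R^infty 2u(x) r^(-1-2s) dr
    return integrand.sum() * (R / n) + tail

# toy datum: a smooth bump supported in Omega = (0, 1), extended by zero outside
u = lambda y: np.where((y > 0) & (y < 1), np.sin(np.pi * np.clip(y, 0, 1)) ** 2, 0.0)

for s in (0.3, 0.5, 0.7):
    print(f"s = {s}:  (-Delta)^s u(0.5) ~ {restricted_frac_lap(u, 0.5, s):.4f}")
```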
So, now we give the third definition. Let me mention that all of these definitions have different probabilistic interpretations in terms of the underlying processes: whether you kill the process when it exits Omega, or when it hits the boundary, and so on. There are several different interpretations of these processes, but they all have a meaning. Of course, all the definitions coincide if you are on the whole of R^n; there is no difference there. And a technical point: for the second definition, to give it a real meaning you need to restrict to s greater than one half; there are technical issues otherwise in defining the censored operator properly. I will not enter into this, but just keep in mind that it is defined only for s greater than one half. And then the third definition is the spectral one, which among them is probably the most natural: if you have an operator and you want to take a power of it, you do it in the spectral sense. How do you do it in the spectral way? You let (phi_k, lambda_k) be an orthonormal basis of L^2 of eigenfunctions of minus the Laplacian. So this means that minus the Laplacian of phi_k equals lambda_k phi_k in Omega, phi_k is equal to zero on the boundary, and these functions are orthogonal in L^2: the integral of phi_k phi_j is zero for every j different from k. OK? We construct a basis of L^2 using the eigenfunctions of the Laplacian, and then you say: every u can be written as a kind of Fourier series. So u(x) can be written as the sum over k of u-hat_k phi_k(x), where u-hat_k is the number given by the integral of u against phi_k; it is the projection of u onto this element. And if you compute the Laplacian of u, the Laplacian falls on phi_k. So (and let me observe that the lambda_k are positive numbers, because minus the Laplacian is positive definite) minus the Laplacian of u(x) is the sum over k of u-hat_k times minus the Laplacian of phi_k(x), which is the sum over k of lambda_k u-hat_k phi_k(x). And then the definition of (-Delta)^s u is simply obtained by replacing each eigenvalue by its power s: the sum over k of lambda_k^s u-hat_k phi_k(x). You take this as the definition. This is what you would do to take the power of, for instance, a matrix: if you have a positive definite matrix, you diagonalize it, take the power of each eigenvalue, and then recombine. OK, so you see that for the censored and the spectral Laplacians I don't need to know how u is extended outside of Omega; they depend only on the values of u inside. For the first definition, I needed to extend u outside by zero. So these definitions are clearly different: these two are clearly different, and this one, of course, must be different as well. And there are also easy ways to see that they are different; you will see that in a second. OK, so we have three possible definitions, and they all coincide if we are on R^n. And now we will look at... The spectral one is the only one which is consistent with the classical one as s tends to 1, right? No, all of them are: if you put the right normalization constant, so a factor 1 minus s here, and you let s go to 1, then in the limit this kernel becomes very, very concentrated at the origin.
So, all of them, as s converges to 1, give you back the classical one. So we have three definitions. It doesn't matter now that you remember exactly the details, I will not use them, but one important observation, which is useful in the proofs and which I mention for completeness, is the following lemma. You see, the first two fractional Laplacians, the restricted and the censored ones, are defined via a kernel; for the spectral one I used the spectral decomposition. The lemma is that for the spectral fractional Laplacian you still have a representation formula: (-Delta)^s u(x) can be written as the integral over Omega of a kernel K(x,z) times u(x) minus u(z), dz, plus a zero order term that you need to add. So it is not only a kernel; there is a zero order correction. And this kernel K(x,z) behaves like 1 over |x-z|^{n+2s} in the interior, but as you approach the boundary the behavior is different: you need to multiply it by the minimum between 1 and the distance from x to the boundary divided by |x-z|, and then by the minimum between 1 and the distance from z to the boundary divided by |x-z|. So if you are very close to the boundary, this minimum is attained by, let's say, the distance term; in particular, if x is on the boundary, the distance from x to the boundary is 0 and the kernel vanishes. So this kernel is, in a sense, compactly supported. And the zero order term behaves like the distance from x to the boundary to the power minus 2s, so it blows up near the boundary. Can I just ask one question? If I look at the Neumann spectral fractional Laplacian, so instead of the Dirichlet problem you consider the Neumann problem and then take the spectral power, right? Yeah. Would there still be a lemma like this? Is there still a kernel representation for that one? If you know; don't worry, I would just be interested. I would say yes. I mean, there is a way to prove this: you can always write the spectral kernel in terms of the heat semigroup, and then you play with the upper and lower bounds for the heat semigroup and integrate the bounds. You know, there is a representation like the integral from 0 to infinity of e^{t Delta} u minus u, times dt over t^{1+s}, up to a constant; that's the formula that we use. And then you use the bounds on the heat semigroup to deduce bounds like this. So you could look, I would think, at the heat semigroup with Neumann boundary conditions and then try to do the same. So I would expect a yes, but I haven't thought about it. That's great. Let me think a second, or I will say something false. I don't see why not. OK, so this is just a lemma that is useful in the proofs, and I thought it was worth mentioning because it may be useful for some of you working on this kind of problems. But OK, so for us, what is the goal now? The goal is to study the porous medium equation: u_t plus L(u^m) equals 0, u at time zero equal to u_0 nonnegative, u equal to zero on the boundary, and we want to understand the behavior of solutions, where L is (-Delta)^s. And which Laplacian? Any of the three. OK, so what are the properties? There are several facts you can prove.
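The first of these facts rests on a classical scaling argument, which can be sketched as follows; this is a sketch assuming a comparison principle for the problem, and the computation is not specific to any of the three operators L.

```latex
\text{For } \lambda \ge 1 \text{ set } u_\lambda(t,x) := \lambda^{\frac{1}{m-1}}\, u(\lambda t, x). \text{ Then}
\qquad
\partial_t u_\lambda(t,x)
= \lambda^{\frac{m}{m-1}}\,\partial_t u(\lambda t,x)
= -\,\lambda^{\frac{m}{m-1}}\, \mathrm{L}\!\left(u^m\right)(\lambda t,x)
= -\,\mathrm{L}\!\left(u_\lambda^m\right)(t,x),
```

so u_lambda solves the same problem, with the same zero boundary (or exterior) data and with initial datum lambda^{1/(m-1)} u_0 >= u_0. Comparison then gives u_lambda >= u, i.e.

```latex
\lambda^{\frac{1}{m-1}}\, u(\lambda t, x) \;\ge\; u(t,x)
\quad\Longleftrightarrow\quad
(\lambda t)^{\frac{1}{m-1}}\, u(\lambda t, x) \;\ge\; t^{\frac{1}{m-1}}\, u(t,x)
\qquad \text{for every } \lambda \ge 1,
```

which is precisely the monotonicity in t of t^{1/(m-1)} u(t,x).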
So, the first thing you can prove is that t^{1/(m-1)} u(t,x) is increasing in t, and the proof is the same as the one I gave before, because that proof was not using the operator; it is just comparison. The second property we can prove concerns the elliptic problem. Let S solve the elliptic problem L(S^m) = S/(m-1), S equal to zero on the boundary, S nonnegative; then u(t,x) defined as S(x) divided by (t + tau)^{1/(m-1)} is a solution. Again, it's a computation. Third property: t^{1/(m-1)} u(t,x) converges, as t goes to infinity, to S(x), which implies, again, that u(t,x) is always bounded pointwise by S(x) t^{-1/(m-1)}. OK, nothing surprising so far: up to here I just recovered the same properties. Now we want to ask ourselves two questions. First of all, in the local case there is finite speed of propagation: the positivity set expands at finite speed, in finite time it invades everything, and then you look at the long-time behavior. What about in this case: do we have finite speed of propagation or not? And second, what can I say about the long-time behavior of solutions? These are the questions we would like to understand. OK, so now I need to start distinguishing between the three operators, because depending on the definition I will get different answers. An important exponent, if you follow me: you see, each operator is different, these are just three different definitions, and in particular, for each operator, the behavior of solutions to the Dirichlet problem near the boundary will depend on the operator. To capture this, we look at how the first eigenfunction of each operator behaves near the boundary. So look at phi_1, the first eigenfunction of L: this means that L phi_1 equals lambda_1 phi_1, where phi_1 is a positive function. So the stationary solution also changes with the operator? Yeah, yeah, they are all different; there are different eigenfunctions. And I want to understand two things: the boundary behavior of S, and the boundary behavior of u. I need to understand both. But to capture the behavior of the operator, I do it through the behavior of its first eigenfunction. So I look just at the nice elliptic problem L phi_1 = lambda_1 phi_1, phi_1 positive. This is phi_1, and I ask myself: what is the boundary behavior of phi_1? So phi_1 will be like the distance from x to the boundary to a power gamma. And now, what is gamma? Gamma is s for the restricted fractional Laplacian: solutions of the restricted fractional Laplacian arrive at the boundary like the distance to the power s. If you take the censored fractional Laplacian, you get s minus one half, and now you see the importance of s being above one half, otherwise I would get a negative power. And if you look at the spectral fractional Laplacian, well, the eigenfunctions of the spectral fractional Laplacian are the same as the eigenfunctions of the classical Laplacian, and so I get 1.
So the behaviors of the eigenfunctions of the three different operators are completely different, and this power is, of course, going to enter the behavior of my solutions. So this is the first thing. But now the interesting point: this gamma is just a linear property, it comes only from the operator. Here we are looking at something nonlinear, because the operator is composed with a power inside, and once you start to mix the nonlinearity of the equation, so L(S^m) = c S, together with these different behaviors of the operators, something even more complicated happens. The key number is an exponent sigma defined in the following way: the minimum between 1 and 2sm divided by gamma times (m-1). So this encodes the s from the Laplacian, the m from the power, and the gamma from the eigenfunction; it encodes three different numbers. You may wonder why such a strange exponent appears. Well, the point is that nonlocal operators are kind of subtle; it's not like working with local operators, because if you try to understand the behavior of functions under the operator, look at this expression: of course you have the effect at the origin, this operator is very singular at the origin, but if your function u grows too much at infinity you also have problems at infinity, because maybe the integral is not convergent. So when you work in the nonlocal case, you always have to understand the local effects, but also the nonlocal effects coming from infinity. And this is where the m enters: if u grows in a certain way at infinity, u^m grows even faster at infinity. That's why the m and the operator start to interact: the fact that you are raising u to a power changes the behavior at infinity, u becomes u^m, and then you have to understand whether this kernel is integrable or not. And so you start to get problems in some regimes. So you look at this number sigma, defined this way, and you notice that it is actually very simple in one case: if gamma is s, then s over s simplifies, and 2m over (m-1) is always greater than 1; so sigma is equal to 1 in the case of the restricted fractional Laplacian, and the same happens for the censored fractional Laplacian. So the number sigma is not very interesting there; in those two cases you never see it. But for the spectral fractional Laplacian it matters, because there it is the minimum between 1 and 2sm over (m-1), and you see a number less than 1 only if s is small enough compared to m; otherwise you don't see it. OK, so this is the number sigma. And now let's first try to understand the behavior of S. So, the behavior of S: proposition. Let S solve this elliptic problem. Then how does S behave? S is like phi_1 to the sigma over m, which is the same, since phi_1 was the distance to the power gamma, as the distance from x to the boundary to the power gamma sigma over m, if... and so, OK, if what? You would think that the exponent sigma is the critical exponent that gives you the behavior, but there is a caveat: you see, this number equals 1 when this quantity is greater than 1, and then it jumps to this other expression when this quantity is less than 1.
At the moment where this quantity is equal to 1, where both expressions are equal, you would like to say that, OK, this is 1, but it is 1 in a very degenerate sense, because both this one and this one are equal to 1, and there you need a logarithmic correction too: a logarithmic correction appears. So, if you are in the situation where the two numbers do not coincide, if 2sm is different from gamma times (m-1), then you have this behavior. Otherwise, in the equality case 2sm = gamma(m-1), you get phi_1 to the 1 over m, sigma being 1, but with a correction (1 + |log phi_1|) to the power 1 over (m-1), and this is sharp: it holds from above and from below, not just from above. So this is the boundary behavior. So sigma is the exponent you need to use, and when sigma is 1 because both these numbers are 1, a logarithmic correction appears, and there is nothing you can do about it. You will never see this if you work with the restricted fractional Laplacian or with the censored fractional Laplacian, because there sigma is always 1 and in fact you never see these powers; but the moment you work with the spectral fractional Laplacian, something very weird happens. In fact, in previous work with Bonforte and Ros-Oton we were working with these fractional Laplacians, and we never saw this weird behavior. It is only in the spectral case that you see such things. OK, but then you say: no matter what, I found S. S has a weird behavior, fine, but at least I know the power. So now I would like to prove that my solution to the parabolic problem behaves, for large times, like the solution to the elliptic problem. Theorem: this is true for the restricted and the censored operators. In those cases, for every positive time, u(t,x) is bounded above by a constant times S and bounded below by a small constant times S. This implies infinite speed of propagation: instantaneously the solution behaves like S, in no time, from above and from below, and you also get the convergence: u(t,x) t^{1/(m-1)} divided by S(x), minus 1, goes to zero as t goes to infinity. So things work super nicely for the restricted and the censored. Now, what happens for the spectral fractional Laplacian? There you can prove that for every t positive, u(t,x) is bounded above by C(t) times S, so from above you can always put S; from below you get c(t) times phi_1 to the power 1, the first eigenfunction. So, you see, S is like the first eigenfunction to a power, 1 over m or sigma over m, while here you have power 1. This implies, again, infinite speed of propagation, because instantaneously u is positive, but the powers do not match, at least for short times. But you would say: OK, this is only for short times, and you care about large times; is anything better possible? Does that constant, the one from below, go to 0 as s goes to 1? This constant goes to 0 as s goes to 1, of course, because for s equal to 1 the speed of propagation is finite. OK. Now, what happens if I look at longer times? Spectral fractional Laplacian, theorem 2. If you are in the regime where sigma is equal to 1, so 2sm greater than m minus 1, you are in good shape and you can recover the usual result: for every t large enough, u(t,x) t^{1/(m-1)} divided by S(x), minus 1, goes to 0. So you recover the usual thing; it works as in the local case.
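To keep track of the exponents appearing in these statements, here is a small helper that simply restates the formulas above; the function name, the operator labels and the numerical examples are mine.

```python
def boundary_exponents(s, m, operator):
    """Bookkeeping for the exponents discussed above, for the three Dirichlet-type
    fractional Laplacians: 'RFL' (restricted, gamma = s), 'CFL' (censored,
    gamma = s - 1/2, only meaningful for s > 1/2), 'SFL' (spectral, gamma = 1).
    Returns (gamma, sigma, power of the distance in S); in the critical case
    2*s*m == gamma*(m - 1) a logarithmic correction appears on top of gamma/m."""
    gamma = {"RFL": s, "CFL": s - 0.5, "SFL": 1.0}[operator]
    sigma = min(1.0, 2 * s * m / (gamma * (m - 1)))
    return gamma, sigma, gamma * sigma / m

for op in ("RFL", "CFL", "SFL"):
    print(op, boundary_exponents(s=0.7, m=2.0, operator=op))
# sigma < 1 (the anomalous regime) only ever occurs for the spectral operator,
# and only when s is small compared to m, e.g.:
print("SFL, low regime:", boundary_exponents(s=0.2, m=3.0, operator="SFL"))
```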
So for s large enough, or for m close to 1, things are fine. Now, what happens when I am not in that regime? Another theorem: spectral fractional Laplacian, theorem 3. If 2sm is less than or equal to m minus 1, there are a few cases. If the initial datum is assumed to be larger than S, so at the beginning I am above the stationary profile, then everything is fine: again, u(t,x) t^{1/(m-1)} over S(x), minus 1, converges to zero. But if you don't put such assumptions on the initial data, if for instance you assume that u_0 is bounded at time 0 by a multiple of the first eigenfunction, then u(t,x) is bounded for every time by a constant, depending on t, times the first eigenfunction to the power 1 over m. But this is much smaller than S(x): the power here is 1 over m, while S behaves like phi_1 to the sigma over m, and this sigma is less than 1. So you don't get the boundary behavior of S: the matching of the boundary behavior from above and below fails. You are behaving like a smaller power; you are not getting S. And this holds for every time. Moreover, if you assume that u_0 is bounded by c_0 phi_1 to the power 1 minus 2s, you get that for short times u(t,x) is bounded by the same power, c_1 phi_1 to the 1 minus 2s, which again is much smaller than S(x). And even more: looking at these bounds you can say, OK, u(t,x) is always bounded above by S and below by phi_1 to the power 1. Is this power 1 optimal? Once you see these two statements you might think, OK, maybe I can put 1 over m here, which would be compatible with the behavior for large times. But the statement that u(t,x) is greater than or equal to phi_1 to the 1 over m for t large is false in general. So the reality is that there is no clear power you can put there. If you start with data that are large enough, you can put S both above and below. If you start with data that are flat enough, you know that you stay below this, so the upper bound given by S is far from being attained. There is no clear upper and lower power in general, so the behavior is not clear at all. Well, we didn't do it ourselves, but a student of one of us ran some numerical simulations for this problem, and essentially, depending on the values of s and m, you get completely different boundary behaviors. So it looks to us that this is sharp, actually: that is the best you can do from below in general, and this is the best you can do from above, and there is no real improvement to be made. There is no clear matching of powers, and no better power you can put there. I mean, I don't have a proof; maybe one could attack it. At the moment we have just identified one regime: we said, OK, if you are above this threshold things are nice, if you are below things are bad. Maybe one would want to identify more subregimes and get better information, but this is completely unclear. But already, at least, this gives a rather clear picture. And what I like most about this problem is that, at least to my knowledge, it is probably the first case where you take the Laplacian, replace it by the fractional Laplacian, and things do not go as they should. Which I think is a positive feature.
At least there is something interesting to do, rather than just repeating all the proofs mechanically. So, OK, I think that's enough. Thank you very much. Any questions? Yeah. You've given three versions of the definition of the fractional Laplacian. Do you know of applications where one or the other is favored, or more appropriate, or correct? Yeah, I don't know. I never really looked at it from the modeling point of view. I mean, all of them have several meanings, and there are very natural probabilistic interpretations. Now, from the modeling point of view, I don't know which one is the more natural. In some sense you would think that the restricted and the censored ones, the fact that they work so well, makes them the most effective: they just do immediately what you want, they spontaneously regularize, and you immediately get the answer. On the other hand, the fact that the spectral one is so rich is more, I don't know, a matter of mathematical taste; therefore for applications maybe it is not the right one, precisely because it is so delicate. I don't know. But that's a good point; I never looked much at the modeling of this. It was for us, actually, a big surprise that things didn't work. When we started there had been a lot of interest in trying to understand how nonlocal effects interplay with nonlinear effects, and the porous medium equation is natural: nonlocal, nonlinear. The first operator we looked at was the restricted one, just because it was the most natural, and that's what we did first with Ros-Oton and Bonforte. So it was particularly interesting that, by changing the operator, the whole machinery changes. But, yeah, on the modeling I don't know. So, you use the comparison principle, and for nonlocal operators it is not so trivial, basically, to have a comparison principle, right? So could it be that the different behaviors come from different kinds of comparison principles for the three operators? No. The comparison principle is actually easier in the nonlocal case than in the local case, and you get the strong maximum principle for free. You know, you want to prove that if two functions are ordered and at some moment they touch, you get a contradiction. How do you usually do this in the local case? You say: if two functions touch and they are both solutions of the same equation, then at the contact point they have the same gradient and the Laplacians are ordered, and from this you get a contradiction with the equations they satisfy. And you always have the problem that, when you try to prove a comparison principle, you never get a strict inequality, so you always have to add a small epsilon somewhere to do the trick; that's how you do it for the elliptic problem, or for the heat equation. In the nonlocal case, if I have two ordered functions which touch at a point, I compute the fractional Laplacians at this point: the fractional Laplacian is the integral of u_i(x) minus u_i(z) over |x - z|^{n+2s}. The two functions coincide at x, so the first terms are equal, and they are ordered everywhere else; so the difference of the two integrals has a clear sign. It is strictly positive if the two functions are different.
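The sign computation just described can be written out as follows; this is a sketch with the normalising constant omitted. If u_1 <= u_2 on R^n and u_1(x_0) = u_2(x_0), then

```latex
(-\Delta)^s u_1(x_0) - (-\Delta)^s u_2(x_0)
= \mathrm{P.V.}\!\int_{\mathbb{R}^n}
\frac{\big(u_1(x_0)-u_1(z)\big)-\big(u_2(x_0)-u_2(z)\big)}{|x_0-z|^{\,n+2s}}\,dz
= \int_{\mathbb{R}^n} \frac{u_2(z)-u_1(z)}{|x_0-z|^{\,n+2s}}\,dz \;\ge\; 0,
```

with strict inequality unless u_1 and u_2 coincide.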
So it's much easier to do comparison principle and let's say strong matching principle for non-local operator. What really the boundary behavior is capturing, that's the present of sigma, is because this, as you... I mean, how do you prove boundary behavior for equations used? You take a point on a boundary, you start to zoom in, and you try to understand the behavior, so there is a rescaling that you do. You start to look at your function, there is the boundary, you look at a point, and you look at the local infinitesimal behavior function, like a distance. And now you try to zoom and understand the property but as you zoom your function, your function gets rescaled, and as you zoom in, so you have a scaling, like u over rx, rescaling, rescaling, you have to renormalize to get something, so let's say that u near the boundary is like distance to the gamma, so u is like distance to the gamma, then when you rescale, you divide by up to the gamma. And as you do this rescaling, whatever was the berov u is pushed at infinity, and you see these in details. So it's the berov, the power gamma appears here, and as you rescale with this power gamma, this function will be like x to the gamma at infinity, and so you see the gamma appearing at infinity. Whatever is the power here, as you rescale, it becomes a berov at infinity, but berov at infinity creates mass of infinity, which creates problem of non-integral ability at infinity, because you are doing z to the gamma divided by this, and you have a problem with gamma, I mean this power is not integrated with infinity. So that's why the gamma is different in the behavior, and because there is an m round, there is also an m up here, because you compute the operator on u to the m, so you do u, u to the m, and you have an m, and then you look when this is integrable, and so on. So it's about a boundary behavior, it's not compiled from principle. How is it from principle? It's easy for the volunteers. One for the short question. What's the question of k, to get compute in time limit for the two longer, you have an attention bigger approach. It gives you a nice framework to do some, as if the expansion power time, if you're something. Does it extend in the functional field? That's a very good question. I don't know, I didn't try. For the street and the sensor, it looks promising, because you go in the right regime. Here, the fact that there is no clear regime, that actually is false, and that it really depends on the initial data, on the behavior, it means that you cannot have an expansion, so you don't know where to linearize. Actually, there is only one guy you can linearize to, because you already know that u of tx t1 over m minus 1 is like s of x. That's true. The problem is the ratio. So you always have s, and you already know that your solution, let's say in the interior, builds like s. The problem is that you have the boundary. So one could try to linearize, but then it means that when you linearize, near the boundary you will have some very non-trivial boundary effects, because in linearization you will have something unbounded, if you do a ratio. We didn't try. What, yeah, maybe there is a way out of this, it could be very interesting, because maybe out of this linear analysis one could try to understand which these subregins that I was talking about, that would tell which one is the right boundary behavior in the bending on the solution. 
Maybe one could prove that there are only, I don't know, three possible boundary behaviors, something like that. That's something we didn't look at. OK, so let's continue the discussion. Thank you.
The behavior of solutions to the classical porous medium equation is by now well understood: the support of the solution expands at finite speed, and for large times it behaves like the separate-variables solution. When the Laplacian is replaced by a nonlocal diffusion, completely new and surprising phenomena arise, depending on the power of the nonlinearity and that of the diffusion. The aim of the talk is to give an overview of this theory.
10.5446/59181 (DOI)
So thank you very much to the organizers for the kind invitation. It's the first time for me here and I'm really pleased to be in such a beautiful place. The work I will talk about is in collaboration with Giuseppe Savaré. Let me start with some preliminaries. I will consider a complete metric space and lower semicontinuous functionals on this space. These functionals are supposed to take values between minus infinity and plus infinity, with minus infinity excluded and plus infinity allowed. We don't want to deal with trivial functionals, so the domain, the set of points x for which the functional is finite, is non-empty. We also deal with lambda-convexity. We say that the functional is lambda-convex if between any two points in the domain there exists a geodesic, and along the geodesic this convexity inequality holds with respect to theta, the parameter along the geodesic. So this means that you also make, in some sense, an assumption on the domain itself, because you are requiring that between any two points there exists at least one geodesic. This assumption can be relaxed to some extent; I will talk about this at the end of my talk if I have time. Okay, lambda-convexity has some nice properties, like quadratic boundedness from below: you can prove that the functional phi minus lambda over 2 times the squared distance to a fixed point is bounded from below by an affine function of the distance, and the fixed point can actually be any point. I should recall what the metric slope is: it is the limsup of this incremental quotient as y goes to x. And if the functional is lambda-convex, then the local slope coincides with the global slope, so you can replace the limsup with a sup. This is a nice property because, for instance, global slopes are always lower semicontinuous. Okay, I want to give a meaning to this identity: u dot, the time derivative of the curve, equals minus the subgradient. But since we are in a metric setting, we don't have a subgradient, so we resort to this definition of Evolution Variational Inequality, introduced, for example, by Ambrosio, Gigli and Savaré. You say that your curve solves the equation in this sense if this differential inequality holds for any v in the domain of phi. The inequality involves the squared distance from a fixed point v. Note that it always makes sense, because here we have a limsup, the so-called Dini derivative, so this definition always makes sense. And what is a gradient flow? A gradient flow is a family of curves such that, for any initial datum, the curve is continuous down to zero, the curve satisfies the EVI in this sense, and the semigroup property holds: the semigroup at time t plus h coincides with the semigroup at time h applied to the point you get by letting the semigroup evolve up to time t, starting from the initial datum. This is just the standard semigroup property. Okay, some examples. In the Hilbert setting these notions are quite classical, by the theory set up by Brézis, if you want. You can prove that a curve is a solution of the EVI if and only if it is locally Lipschitz and it solves this differential inclusion: the derivative, for almost every t, belongs to minus the subgradient of the functional. Assume that the functional is just convex, for instance.
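For reference, the EVI_lambda inequality alluded to above is usually written as follows; this is the standard formulation, not a reproduction of the slide.

```latex
\frac12\,\frac{\mathrm d^+}{\mathrm dt}\, d^2\big(u(t),v\big)
\;+\; \frac{\lambda}{2}\, d^2\big(u(t),v\big)
\;+\; \phi\big(u(t)\big)
\;\le\; \phi(v)
\qquad \text{for every } v \in \operatorname{dom}(\phi) \text{ and (at least) a.e. } t>0,
```

where d^+/dt denotes the upper right Dini derivative, so that the left-hand side is always defined.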
And once you know that the curve is locally Lipschitz, in the Hilbert setting this equivalence is quite easy to prove, actually: you just use the definition I wrote down of the subgradient, and then you integrate; you just perform the chain rule for the squared distance. Okay. Another example; actually, Simon already talked about this in the morning, but I didn't know he would, so let me just repeat it. I take the space of probability measures on R^d with finite second moment, endowed with the Wasserstein distance, and we consider this functional, which is constructed by summing an internal energy, which I just take to be the entropy for simplicity, a potential energy, and an interaction energy. My functions V and W here are convex, because I want to deal with the EVI formulation, and the EVI formulation of gradient flows is strictly connected with convexity. It was proved by some people who are present here that this functional admits a gradient flow in the sense of EVI, and the solutions of this flow are given by solutions of this diffusion equation with drift and with interaction given by this kernel. In Simon's talk he dealt with some power here; we can also consider the porous medium or fast diffusion equations, just by changing the entropy. For simplicity, I wrote down the equation in the heat case. All right. Let me now start with a preliminary result, which involves properties of solutions of the EVI. Our setting is quite abstract: we just consider a lower semicontinuous functional, which is not even lambda-convex a priori, but which admits a flow. We don't know if it is lambda-convex; maybe the reference space is not even a length space, but suppose that you have a flow. Then we have several properties, which to some extent remind us of the Hilbert setting. First of all, lambda-contraction and uniqueness: if you consider two solutions of the EVI, then the distance between them does not grow, up to an exponential factor, with respect to the distance between the two initial points. Then you have regularizing effects, in the spirit of the classical results: the curve is locally Lipschitz in the metric sense, and it belongs instantaneously to the domain of the subgradient; sorry, to the domain of the slope, I should say, in the metric sense. Then the functional phi is non-increasing along the flow, no matter what lambda is, and also the slope is non-increasing, up to an exponential term which depends on lambda. Then we have a priori estimates. I wrote down this estimate because it shows a quantified regularization effect: here you have the slope times, of course, a function which goes to zero as t goes to zero. And if you forget about this term, you actually have the integral formulation of the EVI, which is more convenient to work with than the differential formulation: if you just forget this, you have the integral EVI. And the last property I wanted to mention is the energy identity. You always have right limits, of the metric derivative and of the right derivative of the functional phi along the flow, and they satisfy this nice identity: the derivative of the functional along the flow coincides with minus the metric derivative squared, which coincides with minus the slope squared, which coincides with minus the global slope squared. This last identity is not granted, because we do not assume convexity: we just assume the existence of the flow.
So we get the same equality we would have if we assumed convexity a priori. So what is the global slope? The global slope is just the slope where you replace the limsup with a sup; I wrote it down at the beginning. Yeah, I think I've got it. That's fine. If phi is convex, they coincide. Okay. Now I have to talk a little about minimizing movements. Some people have already talked about that, so let's just refresh the ideas. We perturb our functional quadratically. Sorry? This one? Yeah, the first one. Yes. Okay. We perturb the functional quadratically, with a small time step tau, and then we generate a sequence of minimizers of the quadratic perturbation of the functional; we call this the discrete minimizing sequence. Then we just interpolate in the simplest way, piecewise constant interpolation, and we call this the discrete minimizing movement (a toy numerical version of this scheme is sketched below). Many people worked, of course, on this kind of approximation: De Giorgi, and Almgren, Taylor and Wang for mean curvature flow, I guess, and of course the famous paper by Jordan, Kinderlehrer and Otto approximating the Fokker-Planck equation by this discrete scheme. The point is that, to have a good minimizing movement, you need coercivity, and here, as I want to stress and will stress again later, we don't want to assume any coercivity on the functional phi: sublevels of phi are not compact in our framework. The idea is to modify the minimizing movement by resorting to Ekeland's variational principle. I recall the principle here, in a simplified version which is sufficient for our purposes; it is very powerful. What is the idea behind this principle? You take a lower semicontinuous functional which is bounded from below on a complete space, and this is crucial. You take a point which is close enough to the infimum, and such a point exists because the functional is bounded from below. Then you can always find another point, which I call u_eta, whose value is at least as close to the infimum as that of the previous point, and whose slope is small, controlled by eta. So our idea is just to replace the minimizing movements by a sequence of Ekeland movements, which you can always generate; you don't even need any compactness at this stage. We call this an eta-Ekeland sequence. It is exactly the same sequence you would have in the minimizing movement case, up to this last term on the right-hand side. Let me just mention that this eta is not exactly the one in the principle as stated, because here we take into account the distance between the next point and the previous one; but this can be obtained just by iterating Ekeland's principle. I will not enter into the details, but it is very convenient for slope estimates. Okay, so let me mention that, in order to have a good Ekeland minimizing movement, you just need your functional to be quadratically bounded from below: to apply Ekeland's principle you need boundedness from below, but here we are applying Ekeland's principle to the quadratic perturbation of our functional. And if the functional is convex, we have quadratic boundedness from below for free, as I said above. So if we have convexity, and that's where convexity enters, we have good energy estimates for these movements: a first estimate and a second one. The first one is a slope estimate, and the second one is an estimate which involves the discrete time derivative of the functional.
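Here is the toy numerical version of the minimizing movement (implicit Euler) scheme announced above: one space dimension, a smooth convex functional chosen purely for illustration, and each step solved by a few Newton iterations. None of this is from the talk; it is only meant to show the scheme and its convergence as the time step shrinks.

```python
import numpy as np

phi   = lambda x: 0.5 * x**2 + 0.25 * x**4      # a convex toy functional on R
dphi  = lambda x: x + x**3
d2phi = lambda x: 1.0 + 3.0 * x**2

def minimizing_movement(x0, tau, n_steps):
    """One step: x_{k+1} = argmin_v  phi(v) + |v - x_k|^2 / (2*tau),
    i.e. the implicit Euler condition  dphi(x_{k+1}) + (x_{k+1} - x_k)/tau = 0,
    solved here by a few Newton iterations."""
    xs = [x0]
    for _ in range(n_steps):
        x_prev, v = xs[-1], xs[-1]
        for _ in range(30):                      # Newton for the optimality condition
            F = dphi(v) + (v - x_prev) / tau
            v -= F / (d2phi(v) + 1.0 / tau)
        xs.append(v)
    return np.array(xs)

T = 2.0
for tau in (0.5, 0.1, 0.02):
    xs = minimizing_movement(x0=2.0, tau=tau, n_steps=int(T / tau))
    print(f"tau = {tau:5.2f}   x(T) ~ {xs[-1]:.6f}")
# as tau -> 0 the discrete curves converge to the gradient flow x' = -x - x^3
```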
For the first one, actually, we don't need convexity, but for the second one we do. What is the purpose of these estimates? They allow us to prove a discrete approximation theorem. Suppose now that your functional admits a flow and that it is lambda-convex. Then we have this estimate: the flow is close to the Ekeland movement, to any Ekeland movement, with this rate, so square root of tau. Probably one could do better in more specific settings, but in the general setting we are considering we are happy with this square-root-of-tau estimate. In particular, when you let tau go to zero, you can prove that the continuous minimizing movement exists, that is, the limit as the time step goes to zero of the discrete movements, and it coincides with the flow. Here we are assuming that the flow exists; I will not discuss existence results for the flow, that is a problem for the future, which is work in progress. Okay. This is the second part of the talk, which concerns stability, stability with respect to the functional. Now we consider a sequence of functionals phi_h which converge, in a sense that I will make precise, to a limit functional phi, and suppose that the functionals phi_h admit flows. Two questions: does the limit functional admit a flow? And if it admits a flow, can we prove convergence of the h-flows, the ones associated with the functionals phi_h, to the limit flow? Well, first I have to recall some definitions of Gamma and Mosco convergence, because it turns out, thinking back to the Hilbert case, that the right notion involves Gamma convergence. I wrote down the topological definition of Gamma-liminf and Gamma-limsup; even if we are in a metric setting, the topological definition is sometimes convenient. We say that the sequence Gamma-converges to a limit functional if the Gamma-liminf and the Gamma-limsup coincide, and in that case the limit functional is the Gamma-limit. It turns out that this definition, in the metric setting, is equivalent to this one, which involves sequences: for any sequence x_h converging to x you have this liminf inequality, which reminds one of lower semicontinuity, in some sense, and then you have a recovery sequence. Well, in the Hilbert case you of course also have the weak topology, and by replacing the convergence that I labeled with a star with weak convergence you get Mosco convergence. So Mosco convergence is a bit stronger: you require the liminf inequality along weakly convergent sequences. This is the stability result, which is due to many people, and of course I'm forgetting someone, in the Hilbert case. Take a sequence of convex, lower semicontinuous functionals and consider the flows generated by their subgradients. What happens at the limit? Well, we don't suppose anything for the moment on the functionals; we just state that all these assertions are equivalent. So: convergence of the flows, meaning that if you take a sequence of well-prepared initial data, then the corresponding flows converge to the limit flow.
Then: convergence of the resolvents, and I will come back to this in a more precise way; convergence of the Moreau-Yosida regularizations, which are just the quadratic regularizations of the functionals phi_h and phi, meaning pointwise convergence of the Moreau-Yosida regularizations; Mosco convergence of the functionals, so phi_h Mosco-converges to the limit functional phi; and graph convergence of the subgradients, meaning that for any element in the subgradient of the limit functional you can always find a sequence u_h of elements in the domains of the subgradients, and a sequence of subgradients which converges to the limit element of the subgradient at u, with u_h converging to u. Some remarks about this classical theorem. You have the existence of the limit flow for free, basically, because the Mosco limit of convex functionals is convex, and then you are in a Hilbert setting, so in this case it is not difficult to prove that you really have a flow at the limit. Then you have a recovery sequence which recovers both the subgradient and the functional at once. We don't assume, and in the theorem there is no, coercivity assumption; but let me just mention that if you have strong coercivity of the functionals, then Mosco convergence actually coincides with Gamma convergence. But if you don't have coercivity, well, in a Hilbert setting you always have weakly convergent subsequences, and this is why you have to deal with the weak liminf inequality, from which you get Mosco convergence. Okay. The resolvent convergence is in fact strictly related to the minimizing movements, because in this case you can set up the minimizing movement scheme just by applying the resolvent iteratively. So you have convergence of the resolvents, and therefore you have convergence of the minimizing movements. So the idea to prove convergence of the flows in this case is just to apply the convergence of the minimizing movements together with the triangle inequality, and then use the error estimates (the chain of inequalities is written out schematically below). If you let h go to infinity with the time step fixed, the central term goes to zero just because of the convergence of the resolvents, or of the minimizing movements, and you are left with the first and the last term, which just involve the error between the flow and the movement. But that error is small with respect to tau, square root of tau at worst, uniformly with respect to h. So in the end you let tau go to zero and you recover that the limsup of this quantity is zero; hence you have convergence of the flows to the limit flow. We wanted to reproduce a similar result in a very general metric setting: no assumption on the metric, no assumption on the functional, no coercivity assumption on the functional. Where are the difficulties? A priori we don't know if the limit flow exists, so the real issue is to prove the existence of the limit flow. Resolvents in our case are not well defined; we can just use Ekeland movements. We don't even have minimizers. We don't have a weak topology; of course we could introduce one, but that would sound artificial. As I said, we want to completely drop the coercivity assumptions, and if you don't have coercivity, then minimizing movements themselves are not available: hence, Ekeland movements.
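The triangle-inequality argument for the Hilbert case just described can be written schematically as follows; the notation is mine: S_t and S_t^h denote the limit and approximating flows, U_tau and U_tau^h the corresponding discrete (minimizing-movement) curves at time t with step tau.

```latex
d\big(S_t u_0,\,S^h_t u_0^h\big)
\;\le\;
d\big(S_t u_0,\,U_\tau(t)\big)
\;+\;
d\big(U_\tau(t),\,U^h_\tau(t)\big)
\;+\;
d\big(U^h_\tau(t),\,S^h_t u_0^h\big)
\;\le\;
C\sqrt{\tau}\;+\;\omega_h(\tau)\;+\;C\sqrt{\tau},
```

where the middle term omega_h(tau) tends to 0 as h tends to infinity for each fixed tau (convergence of the resolvents, hence of the discrete movements), while the two outer terms are controlled by the discrete approximation estimate uniformly in h; letting first h tend to infinity and then tau tend to 0 gives the convergence of the flows.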
In any case, if you had coercivity, then the passage to the limit would be quite easy: you just use the integral formulation of the EVI that I mentioned before for the functionals phi_h, and then you let h go to infinity. You know that u_h(t) converges to something, by coercivity, and this something satisfies the EVI, so it is the flow. Okay, this is the result we have. Take a sequence of functionals phi_h, not necessarily convex; the domains of the phi_h need not even be length spaces, if you want. But the limit functional is lambda-convex; we need to assume that. Then the following claims are equivalent. One: convergence of the flows, meaning the existence of the limit flow and the convergence of the h-flows to the limit flow. Two: for each element in the domain of the slope, there is a recovery sequence which recovers both the functional and the slope. Three: simultaneous Gamma convergence of the functionals and of the slopes, together with a qualified Gamma convergence of the functionals, namely: you have the Gamma-liminf inequality, but also this one, and if you let tau go to zero here you just get the Gamma-liminf inequality; we, however, have this inequality prior to letting tau go to zero, so we can estimate the infimum of the functionals phi_h on the ball of this radius by the infimum of the limit functional, up to a small error depending on tau. And we have a similar estimate for the Moreau-Yosida regularizations: we do not have pointwise convergence of the Moreau-Yosida regularizations, but we have this estimate from below; we can estimate the liminf of the Moreau-Yosida regularizations by the Moreau-Yosida regularization of the limit, again up to a small error depending on tau. This is of course not pointwise convergence, but it is nice that it is in fact sufficient, together with the Gamma-limsup inequality, to prove convergence of the flows, as in the Hilbert case. Some words about the strategy of proof. The idea is to start from an Ekeland movement associated with the limit functional; that is why we need the lambda-convexity of the limit functional, to have good estimates for the limit Ekeland movement. Then we have the energy estimates. By using the Gamma convergence of the functionals and of the slopes simultaneously, we approximate this Ekeland movement by sequences of Ekeland movements, which we call u_{h,tau,eta}. They are not necessarily Ekeland movements with respect to the functionals phi_h, but they are close to being so; the only thing that matters is that they satisfy the energy estimates up to an error epsilon, because they approximate a sequence that satisfies these estimates without the epsilon. Then we use the discrete approximation error estimates, which tell us that these approximate Ekeland movements are close to the semigroups, with an error that does not depend on h. And finally we use the triangle inequality, together with the convergence of the Ekeland movements we have constructed, and we end up with this inequality. Since tau, namely the time step, and epsilon, the error you pay because your movements are only approximately Ekeland, are free parameters, as small as you like, you can take for instance epsilon equal to tau squared and let tau go to zero. Finally, you have proved that your sequence is Cauchy; since we are in a complete space, the sequence of flows converges to something at the limit, and then you pass to the limit in the EVI and you get the limit flow. So let me mention an application to RCD spaces.
So take a lambda RCD space and continuous, for simplicity, geodesically lambda convex is functional. Then the student proved in 2014 that if the space is locally compact, then this function, which I call psi, admits lambda gradient flow. So in this case, as long as you have compactness, convexity is sufficient to have the flow. As a corollary of our result, we have that actually you can remove the local compactness assumption. The idea behind the student's proof is to resort to a flow in P2. So to consider the flow generated by this integral function on P2. And then to prove that this flow is well defined, he uses this approximation. It just starts to feed the entropy, epsilon times the entropy, which gives rise to a well defined flow and then he lets h go to infinity, epsilon go to zero if you want. He needs local compactness to be believed. By our theorem, we can just drop this assumption because we have a sequence of, well, of course, up to verify that this sequence of functionals verifies, satisfies our hypothesis. So it comes from the, it is phi phi and as well as slopes. So we don't need compactness. Finally, let me mention some, some extensions. So I assumed in the beginning that the space is complete. Actually, which you just need that the sub levels of your functionals, of your functionals are complete. We don't need the whole space to be complete. Also convexity can to some extent, when I refer to the stability result. Also convexity can be relaxed to some extent. Namely, you just need for the stability result the domain of the final function, the limit function to be geodesic. Or alternatively, you could assume that the limit function is approximately lambda convex. This roughly speaking means that maybe you don't have geodesic between two end points, but you can, from any theta, you can always find a teta midpoint which satisfies up to epsilon, epsilon is arbitrary, the convexity inequality as long as the geodesic identity. So epsilon close to satisfying both. But maybe as epsilon goes to zero, it may not converge to anything. And this assumption is, I would say, necessary in the sense that you can prove that if you are in a length space and you have a flow, then the function for which you have the flow must be approximately lambda convex. But maybe it could be difficult to verify it. Another application would be the case where you also let the distance vary. You also have a sequence of distances, dh, which varies. And in that case, we believe that by means of a more of a convergence, we can have a sort of similar stability result. And that's it. Thank you. Thank you very much. Thank you. Thank you very much. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you very much. Thank you. Thank you. Thank you. Thank you very much. before you mentioned some, where you say that that you want to discuss with the research and research. So, what just happened to me is that you say you need the existence of the flow. To prove this estimate you need the existence of the flow. But the posteriority is also, I just invite that the sequence was in the question. So, posteriority is not. Yeah, in fact it is question. It is question. I mean to prove the estimate you need the flow. Yeah, I can just say that you need, but the posteriority doesn't mean, that also standing that if you take two different paths, yes, the discrete solutions are close up to one plus one plus one to each other. 
Yeah, so if I do a time-worn inequality with a Taiwan and a Seoul two, it's telling me that discrete solution with Taiwan dot this. Yeah, I see. So, a posterior is telling me that that were for Xi. A posterior you know that they were for Xi. Yes. So, suggest strongly that you should need existence of the flow. I agree, right? To apply the triangular inequality, you apply the triangular inequality and then apply the estimate. Yeah. But, prior to that. I know. I mean it's a work in progress. I think we need some additional assumption in the distance. Some concavity assumption on the distance. So, general and very general setting, I don't know if we are able to prove the existence of the flow. At least in the context case. I'm not sure. With no further assumption on the distance. So, there are more questions. Let's thank all the speakers of the conference.
We study the main consequences of the existence of a Gradient Flow (GF for short), in the form of Evolution Variational Inequalities (EVI), in the very general framework of an abstract metric space. In particular, no volume measure is needed. The hypotheses on the functional associated with the GF are also very mild: we shall require at most completeness of the sublevels (no compactness assumption is made) and, for some convergence and stability results, approximate λ-convexity. The main results include: quantitative regularization properties of the flow (in terms e.g. of slope estimates and energy identities), discrete-approximation estimates of a minimizing-movement scheme and a stability theorem for the GF under suitable gamma-convergence-type hypotheses on a sequence of functionals approaching the limit functional. Existence of the GF itself is a quite delicate issue which requires some concavity-type assumptions on the metric, and will be addressed in a future project.
10.5446/59184 (DOI)
So, okay, so I'm going to talk about this result we recently finished with the present time video and Mathias de Galina in Pure College. So we look at PDEs of this type, okay, where you have aggregation and diffusion. And so this contains somehow the case where Bruno was talking about just before, okay. So in his case he had the diffusion was a power law, okay. So the main question is here is, is there a relation between the existence of stationary states, okay. Is there a relationship between W, this parameter here epsilon, so it can see the temperature as a noise, okay. And the diffusion function which will give me a kind of threshold separating existence and non-existence of stationary states, okay. Okay, so how do we do this? I'm going to forget for a bit, I'm going to forget now about the PDE, okay. And I'm going to look at the energy which is associated to this PDE. So we have here the interaction part, okay, which we have seen already. And the diffusion part which will depend on this diffusion function, okay. This diffusion function I'm going to assume, so I'm writing the energy, the entropy in this way, okay. Here I haven't given you any assumption on U, right. So if U doesn't have a super linear growth at infinity, I should actually be more careful before writing this. Because I should also take into account the singular parts of the density, okay, using this recession function. But just keep in mind this form of the energy for now, okay. All right, so we already know from several works that when epsilon is zero, so when you only have interaction, okay. You have minimizers for a very general class of interaction potentials, okay. And the idea is, okay, so what happens if now I switch on some bit of diffusion, okay. Do I still keep the distance of minimizers in a general setting of diffusion, okay. So there is already been, let's say, something, a word has been here where I switch on a little bit of diffusion, okay. And for this range of the force-minimum equation, okay, where m is the power associated to my diffusion function, I have the existence of minimizers if I consider a fully attractive interaction potential, okay. Fine, so let's see a bit what kind of potential I'm going to look at. So this is the linear diffusion as we've already seen before, yeah. So that is the linear diffusion is excluded? No, no, yes. This is also on the previous result. Oh, yeah, yeah, yeah, yeah, yeah, in this one, yes. Well, ideally we would like to take into consideration any diffusion, but yes, in this result, very cold, well, this, the linear diffusion is not good. Okay, so we look at, basically we look at the power of the potentials, okay, where you have a parameter beta, here which characterizes the strength of the repulsion at zero, okay. When we take all zero, I'm just going to write that this is the low. And then I consider diffusion, power diffusion, so you have parameter m, okay, where m could be between zero and one, okay, and all greater than one. And the linear case, I just, I'm going to refer to the, I'm going to refer to the linear case as being the case m goes one, okay. So you just look at the diffusion, right. Okay, but what are, okay, typically we look at these potentials here, and we have seen just that, we've grown just before, there are many things that have been done here, and the idea is, okay, what can we say if we don't assume powers here, what can we say if we don't assume powers here as well. 
And this is a, let's say, minimal set of assumptions that we needed to show things, okay. Fine. Okay, so the first question here is, why do we look at these things? One is how stable, so we see that when epsilon equals zero, we have minimizers, okay, as long as the interaction potential is nice enough. So what happens if I take one minimizer and I put some epsilon in there, okay, will I still, will I have some kind of stability regarding this minimizer, okay, will I still keep minimizers, okay. So it could be that if I switch on a bit of diffusion, then minimizers disappear completely, right. However small this diffusion is. Okay, and then, okay, so numerically we have seen many times this phenomenon of metastability, okay. So if you look at your PDE, okay, and you let it evolve, okay, you could be numerically, you could be tempted to say, okay, I've reached a steady state, I've reached a minimizer, okay, but in fact, if you wait long enough, this thing could change shape or actually disappear completely, okay. So can we say, for example, that if we put a little bit of diffusion, this metastable states actually don't exist. Okay. And okay, and as I just said before, that was the original question, can we find some sharp, okay, and then we can look at some of the solutions on temperature epsilon, the diffusion function and interaction function, okay, which would give me some kind of distinction between existence and non-existence of minimizers. Okay, and I mean, this is one work, I will come back to this, where this has been done for power diffusion potentials, which I showed you before. Okay, so just quickly, what's the answer that we got, and I'm going to show you the actual theorems, but so if W is, so the interaction potential is parallel to infinity, okay, I don't care about what happens at the origin, if it's parallel to infinity, then when M is less than one, okay, less than or equal to one, so this includes diffusion, we actually don't have any minimizers, okay, as soon as I switch on, not even local minimizers, as soon as I switch on the temperature, the minimizer is disappeared, okay. And what happens if now I want to find this general condition here, general relation, so we find the general relation between W, U and X, okay, but this general, this relation is not sharp in every regime, okay, so it's going to be sharp if M is greater than or equal to one, so including linear diffusion, but it's not sharp when M is less than one. So let me give you this, okay, so, okay, the theorem goes like this, so let's for now consider the linear diffusion, okay, so if I have now a potential that is bounded away from zero, okay, so I write that as being infinity rd minus some ball with radius delta, okay, for any delta, then the energy E epsilon does not admit any minimizers, okay, and this is for any Watterstein metric, okay. And on top of that, if I look at the critical points of my energy, so actually there are not even those, okay, those actually disappear as well, okay, under the additional condition that W be reached. 
So this asserts, okay, that there are no stationary states, to the continuity equation, but this, okay, this is for the whole space rd, okay, so what if I constrain myself to be on a compact set, okay, omega, well in fact in this case you can show that there are minimizers and you can show that there are infinity norm is bounded from a, it can be estimated this way, okay, so you see where this is the volume of my set omega, okay, if omega goes to zero, my infinity norm gets smaller and smaller, my minimizers actually getting flatter and flatter, okay, and so eventually you see some consistency with the fact that when you look at the whole space rd, you don't have minimizers, okay, so when you increase the size of your set omega, you actually are vanishing, you're making your minimizers vanish, okay. And, okay, so I told you here that u is the linear diffusion, but actually we can extend things to diffusion potential of this one here, okay, I want the first derivative of u to be unfounded at zero, okay, so in the case of the power law I would have m less than one, or equal to one. Okay, so how do I prove this, the proof of this is actually quite simple, okay, it just relies on the Euler Lagrange conditions, okay, that we have just seen before, okay, so we have that, okay, so just let me tell you that I'm going to assume here that w is at infinity everywhere, not just at infinity, okay, so it's bounded everywhere, and I don't, the proof will not change much, okay, and, okay, the idea is exactly the same, okay, so I think the Euler Lagrange equation, I know that on the support, on any close connected component of the support, by minimizer I have this equality here, okay, just for the Euler Lagrange equation here, okay, and anywhere else I have that in fact is greater than that constant, okay, so how is it that, how is it that we prove the reason, so the idea is to prove that in fact the support is the whole set, alright, so you should see by contradiction you take a minimizer, okay, and I know that w is in infinity, okay, so the inequality here, okay, tells me that rho cannot vanish, cannot touch zero, otherwise I would have that minus infinity is greater than minus infinity, okay, you get to a contradiction, okay, so the support of rho has to be the whole set, alright, and therefore I am in this situation here, I have the Euler Lagrange equation is true everywhere on R and D, okay, so then I can write what rho is, I can write that rho everywhere in R and D equals the exponential of, I take any constant C, okay, the constant C that is here, and I say that rho is the exponential of this, now I know that w is in infinity, so I can bound this from below, and I can write this constant here which is not zero, okay, and so rho cannot be probability measure, okay, if I integrate I get something greater than one, okay, the mass is not one, okay, so if I go back to what I assumed there is no minimizer, okay, fine, the second part of the argument was, the theorem was true, at least rho is a critical point of my energy, then I have still no, I don't have any critical points, excuse me, so if I assume again by contradiction that rho is a critical point, I will get to a similar argument that we already described before, we were doing some, you needed some holder regularity, okay, on your density, and you will get it by some bootstrap argument, you get to that C alpha regularity, and then that will tell you that again you will go back and say that the critical point satisfies this equality here everywhere 
on R and D, okay, and you get to another, to the same contradiction, in fact, okay, so this is how we prove the first result, I would like to, okay, so this concerns the fact that, okay, I don't have minimizers as soon as I switch on the diffusion, okay, and this is, this tells me that if numerically I see something that I call a minimizer or a steady state, in fact it's not, okay, and that's probably a metastable state, okay, so the second part of the result here, the second theorem was, what's the relation between W and U, okay, which give me some threshold between existence and nonexistence, so if I call V, V is scaling function, okay, which is right to my counter-stating function, okay, by this relation here, okay, so this encodes the information of U in the diffusion, okay, so if, okay, use differential away from the origin, and I have either this or this behavior, so either this behavior at infinity or this behavior at zero, and I don't have a boundness from above the energy, okay, so can we recover the results we have already seen, which tell me, okay, if M, you have these different regimes between M and M being the power, the power law, the power and the diffusion, and the mass, okay, the total mass, do we recover those results from this, okay, by just taking W the power, okay, and U the power, okay, so if you do this, you actually get that your conditions become this, okay, so omega D is just the volume of the dimensional ball, okay, you cover this, and you show that you get that, in fact, if you are in these regimes here, which is the aggregation-dominated regime, okay, you do not have boundness from below, okay, so if you take, if you look at, you have these two powers here and here, if you beta is less than this, okay, you will fall in either of these categories, okay, in this case, so what about the sharpness, so if, you can show that if, and this was shown already before, but in fact, if M is greater than 1, meaning, if M is greater than 1, the result is sharp, so I mean that if now beta is, so when M is greater than 1, beta is less than 0, so if now I consider that the beta is greater than this thing here, then the minimized, this is the dominated, the diffusion-dominated regime, okay, this one here, okay, it is not sharp when M is less than 1, in fact, we can find ranges that are greater, like you can find betas that are greater than this, for which there are no minimizations, okay, but this is very well explained in a series of papers here and I want to focus more on the case where M equals 1, okay, so the linear diffusion, so what happens when you need your diffusion when I take M equals 1, it tells me that it is sharp, so when M equals 1, here, beta is less than 0, but if I take beta is strictly greater than 0, minimizes, okay, so if I take linear diffusion and power, okay, power interaction potential, right, then this, if the power is greater than 0, minimizes, just another remark, if now I look at the critical case, which is the ferrican-petition case, where the interaction and the diffusion are, let's say, have same strength, then the existence, non-existent, would depend on the strength, on the parameter x, the temperature x, okay, so if I have M is greater than 1, okay, and epsilon d is less than this constant here, then you don't have boundedness from below, and in fact you can see very well if I take beta equals 1 minus M times d in here, okay, these two powers will be the same, and the non-existence will minimize or the, excuse me, the unboundedness from below, 
the energy will just depend on the sign of 2 to the v times minus 1 minus epsilon d over d to the 1 minus M, which is exactly this constant here. Again, when M is less than 1, we have this condition here, in fact when M is less than 1, this doesn't tell you anything because when epsilon is, you should have unboundedness from below for, regardless of the critical, regardless of the temperature here, okay, fine, so if I now go back again to the case that I'm more interested in here now, it's the case of the linear diffusion, okay, and this tells me something, so if I take linear diffusion here, M equals 1, I get to beta equals 0, right, so that's the log case, so I have log log, which is carrier signal, right, so if I have carrier signal, this result just tells me that it recovers the result that we know, that is that I have a critical temperature, okay, if I'm not at that critical temperature, I don't have boundedness from below, okay, so recover that result. So let's see how the proof is, the proof again is quite simple, it's just based on the directions of the ball, okay, so you take rho r, some direction of the ball, right, this r, okay, and what I do is I plug this into the energy and I want to show that either I send r to infinity and I get that the energy is not bounded from below or I send r to 0, I get the energy is not bounded from below, this would depend on the regime I look at, okay, so I just do a change, I'll just plug rho r into the energy, okay, I get this here and then what I do is I differentiate with respect to r, okay, I do this thing here and if I take the radius one over r in front here, here I recover what I had in the condition, remember in the condition I had this soup of the gradient of w dot, the soup of gradient of wz with z, okay, so here I get exactly this, with z being r x minus y, okay, and I know that then this from the assumption, I mean from this, this is less than the soup of this, okay, the ball of radius 2r d1 d1, and by the condition I know that this soup was going to be, as r was going to infinity, this was actually going to be less than 0, and so I have that for some r0 big enough, I have this ratio in here, okay, I integrate, I just got that when r goes to plus infinity I get minus 0, okay, so the energy is not going to be from here, using the exact same thing with the other conditions, so instead of having less than or equal to this thing here, you can say that it's greater than the same thing with the e of the soup, okay, and if you do this, you get the same condition when r goes to 0, which was the theory, okay, so in this case we have this general condition between w, u and epsilon, okay, which gives us a kind of separation between non-existent, like unboundedness and boundness of the energy, so what can we say more about the case time equals 1, okay, so about the log case, the log case, we said this, why is it short, we can show that, in fact, if you are in this case, where the loom soup of this regularity of w is less than 2d epsilon, then the energy is not bound, now if you pass it the other side, if you get that it's greater, the energy is bound to below, okay, and in fact it's better than this, you can find the Roepsen that reaches this mean, okay, fine, so the proof of this, I won't give you the details, but this is based on log HLS inequality, okay, log HLS inequality, and it's also using a compactness result, it basically tells you that if the interaction energy is bounded, then you have compactness over your sequence, okay, so if you 
take the interaction energy of some sequence, okay, if that one is bounded, then you can extract this sequence, so it's not based on Leo's concentration compactness, principle, the proof is slightly different in that, but you recover this decotomy here, okay, and this is what does it tell you, the first thing to notice that is important is that here I'm assuming that w positive are just bounded from below, okay, so not including the case, the K-R-C-O case where I have w equals the log, okay, so this just tells me that if I have a critical, a critical epsilon, which tells me that if I am below that critical epsilon, then I have, I have me measures if I'm above, I don't, okay, and this is very similar to what happens with the K-R-C-O because what I have in K-R-C-O is that you have this critical value for which you know you have a complete number of new measures, but this tells me another thing, okay, so this is what I said, so if I take the K-R-C-O, this version of the K-R-C-O with the log is the interaction potential in a dimension, okay, I know there is, from these works and others, you know the energy, the critical epsilon is given to me by 1 over 2d, okay, now the similarity with what we get is that the result we get is that if w becomes bounded from below, okay, and then I have that, I have that, I also have a critical epsilon here, and what happens is that I still have minimizers when epsilon is less than epsilon critical, okay, so let me tell you what happened, I think I'm included, so okay, so the first question that I would be interested in is, so look at these meta stable states, so we've seen that there are meta stable states numerically, okay, you can observe them, but what's their behavior actually when, so now we know there are no minimizers, so these meta stable states are actually not steady states, so what happens if I let a t go, the time will go to infinity, does the two meta stable states flatten and vanish, or what do they do, okay, what's their behavior, in general we don't know, okay, and the other question is, what happens when w is bounded from below, I told you there is a critical temperature, when I'm below the critical temperature, my energy is bounded when I'm above, it's not, okay, so what happens when I'm exactly at that critical value of the temperature, this we don't know, okay, and yeah, I will stop here, I think, thanks. Thank you. Questions or comments? I thought you had a confinement potential also. I guess, but in general I didn't look at that, yeah, I said it should help you in getting. Questions or comments? Questions or comments?
We analyze free energy functionals for macroscopic models of multi-agent systems interacting via pairwise attractive forces and localized repulsion. The repulsion at the level of the continuous description is modeled by pressure-related terms in the functional making it energetically favorable to spread, while the attraction is modeled through nonlocal forces. We give conditions on general entropies and interaction potentials for which neither ground states nor local minimizers exist. We show that these results are sharp for homogeneous functionals with entropies leading to degenerate diffusions while they are not sharp for fast diffusions. The particular relevant case of linear diffusion is totally clarified giving a sharp condition on the interaction potential under which the corresponding free energy functional has ground states or not.
10.5446/59187 (DOI)
to be in such a nice place, such a nice meeting. I would like to present some recent joint work with my colleague Eva Kopfer, who is also in one, and which will be concerned with the notion of super-rich flows for discrete spaces, such as weighted graphs or Markov chains on a finite state space, which will depend on time, and which will be a natural way of deforming such a discrete space. And which is also tightly connected to recent developments, trying to understand which flows are continuous, but single as it. A particular challenging feature that we would see is that these flows tend to produce singularities in finite time, with a dimensional topology of the space changes. And this will lead us to, in order to characterize such flows to study the heat equation on such spaces, which change over time, which might be of some independence. So we first try to give you a short introduction to classical richie flows, and then tell you how this notion can be adapted to the discrete set. So let's start with classical richie flows. We have a Riemannian manifold, M, with a metric, G, which depends on time. And such a time-dependent metric will be called a richie flow, if it satisfies the following equation. Minus 1 half the time derivative of this metric should be equal to the richie curvature induced. So this is a richie flow. We will be mostly concerned with super richie flow, which can be thought of as a super solution to this equation. So we have minus 1 half the time derivative being less or equal than the richie curvature. If you are somewhat familiar with the theory of lower richie curvature bounds, then you might think of this as a dynamic variant of a lower richie curvature bound. OK, let's look at some examples. Very simple examples are given by so-called soliton-like behavior. So you start with a space that has some lower bound on the curvature, say kappa. And then you just let the metric depend on time by a scaling factor, 1 minus 2 kappa t. And then you immediately see if you calculate the derivative of this and use this lower curvature bound, that this will be a super richie flow. So if the space is non-negatively curved, this leads to a steady evolution, which does not where there is nothing happening. If kappa is negative, then the space will be expanding, where the metric is growing. And if kappa is positive, the space will be shrinking. In particular, at the finite time 1 over 2 kappa, the metric will have shrunk to 0. This is the first instance of these singularities that appear. Another very popular example appears in dimension 3. So of course, the picture I drew is in two dimensions, for lower capacities, and which is a so-called neck pinch. So you start with some manifold, which has a shape of a dumbbell. And then if this shape is tuned appropriately, then the flow will evolve by shrinking this connection more and more until at the finite time, which you expect some pinching of the second. Probably everybody has heard of the uses of richie flow in the work of Hamilton and Perlman connected to the Bronkari and geometrization connectors. And since then, it has turned out to be a very useful tool in studying geometric questions via natural deformation. And in view of these singularities that appear, there has been a lot of strong interest recently in studying this flow in the presence of singularities and to give a kind of robust description that can handle these things. So let me mention a few of them. 
This work by Bambler and Kleiner and Klot, which kind of define a canonical way of letting the flow evolve through these singularities. So usually, a one-wide ask one, for example, in the case of this neck pinch, one stops the flow shortly before it reaches such a singularity. It does some surgery procedure to take care of this and then starts the flow again. And what they do is they kind of pass to the limit in this surgery procedure and get a canonical evolution, which has a single idea, but one is able to continue this. Then there's different approaches and trying to give a different characterization of this equation, which is more robust. There is an approach by Haslothofe and Nebel, who try to use functionally inequalities on the path space. There's other work by Peter Topping and Robert McKen using optimal transportation to give a characterization. And there is this written work by Theo Stroum and Koffa and Stroum, who expand this approach of optimal transportation and use it to give a synthetic definition of super-rich flow for not for many faults, but for more singular metric measures cases. So next, I would like to try to give you a little bit of a hint of how you could characterize the flow in a different way, which might be more robust, because this will lead us to the approach that we would also take in a discrete setup. So the first way one can characterize is to use the gamma calculus of binary memory and to obtain a characterization as a time-dependent bohnen inequality, where you have the gamma 2 operator associated with the manifold being bounded from below by the time derivative of the gamma operator. This is kind of the dynamic version of the memory per sheet. So let me briefly recall how this works. So the gamma operator, it's given as a gamma 2 operator. It's given as a commutator between the Laplacian and the gradient. And Bohm's formula tells us that this expression is equal to the Ricci curvature plus the Hilbert-Schmidt norm of the Hessian. Now you get a lower bound of this by throwing away this positive term and using the super-rich flow equation to bound this by the time derivative of the squared norm of the gradient, which is nothing but the time derivative of the gamma operator. This is basically a way of rewriting this in terms of gamma calculus. This is intimately related to a different characterization where we derive gradient estimates for the heat semi-group associated to this time-dependent manifold. So you look at the solution to the heat equation starting at some time s, which is now a non-autonomous equation since the Laplace-Bachlan operator will depend on time. And the characterization is that if you look at the gamma at some time t of the heat flow applied to psi, this will always be less than if you apply the heat flow to the gamma at an earlier time s. Another characterization is in terms of optimal transportation using the mass-sustain distance and contractivity properties of this. This works as follows. So we look at the two-vastage line distance, I just recall the definition. It will also depend on time since it's given as the optimal transport problem where you use the distance at time t. And then the statement is we have a super-easy flow if and only if the distance is non-increasing along the dual heat flow that's acting on measures. 
So if you start with two measures and look at the mass-sustain distance at time t, this will always be larger as when you let the measures evolve under the dual heat flow and look at the distance at an earlier time s. And the last property I wanted to mention is the so-called dynamic convexity property of the entropy. This is very much related to lower curvature and the lower bounds where you would see a lambda convexity of the entropy. And here the property is as follows. If you think of a convex factor, it would describe convexity as a lower bound on the second derivative. Or you could describe it by the property that the derivative is increasing over time. And this is the approach that is taken here. We look at the derivative at time 1 and compare it to the derivative at time 0. And the amount, how much this is increasing is given by how much the Vasselstein metric is changing. Time. OK. To see this for those who are familiar with this, you know that if you take a second derivative of the entropy along a Vasselstein geodesic, this will make appear the gamma 2 operator. And we already know that this super-easy flow inequality can be expressed as an inequality for this gamma 2 operator. And this will precisely lead to this rate of change of the Vasselstein distance. Yes? But the inequality is the Vasselstein distance time t. Is that right? Yes. So at any t, you can look at a geodesic in this fixed metric t. And then you will have this design. OK. So these characterizations have the advantage that they also make sense in a more general setting. For example, this gamma calculus, you could make sense of in the setting of a derivative space equipment with a time-dependent derivative form. And the associated gamma operators. Or you could also make sense of this in the setting of a time-dependent metric measure space. This was the approach that was taken by Sturme and Koppfer, who gave a synthetic definition of super-easy flow in this more singular setting. Just to make the connection in particular, if we look at the metric that does that, which does not depend on time, then we just recovered the theory of synthetic curvature balance, which was introduced by Lottan-Bielei in the initial. OK. So now we would like to look at a similar concept for weighted graph, so Markov chains. One motivation for this is, as I said, to be a natural way of deforming a discrete space. Another motivation is to say, is to use it as a kind of sandbox to study concepts and techniques, which might then later also be useful for studying super-easy flows in a continuous and single space. OK. So what is our discrete setting? We look at a finite set x, and we look at a continuous time Markov chain on the set, given by a family of transition rates. The generator of this Markov chain can be thought of as kind of a discrete Laplace operator in a weighted graph. But a problem that we face in this setup is that if we would like to copy these robust characterizations that I just described in terms of optimal transport, that the classical master-stein distance on this discrete graph is degenerate in the sense that it does not admit any geodesics or gradient flows, so it's not suitable for the study. And the idea is to replace this distance by a more suitable distance that we already encountered in Jan's talk yesterday. So let me briefly recall this. The idea is to come up with a discrete analog of optimal transportation by mimicking the Van Ampour-Brenier formula. 
So I recorded here it's an equilibrium over solutions to the continuity equation connecting the two given margins, 1, and we minimize the kinetic energy of this interpolation. And this has a discrete analog where we define the distance between measures on our discrete space by again connecting them by a curve, solving a discrete kind of constant continuity equation where the vector field B will replace it here by a function on the edges. And then the quantity that we minimize is an action which is again kind of a square of this vector field, but now instead of dividing by the density we have to divide by something which depends on the density in the two points constituting this edge, which will make appear the suitable way to do this is to use the logarithmic mean of these two density. And it was shown by Jan and in joint work with Jan's slightly later that this distance defines a geodesic distance, so we have minimizers in this problem and these minimizers are constant speed geodesics in this metric space. And the law of the Markov chain involves the gradient flow of the entropy with respect to this distance. So in this sense it's a natural replacement for one of our suspicions. Okay. Just to briefly recall this one, one can use this approach to give a definition of lower bounds for the Ricci curvature by mimicking the approach of Lordstorm and Wilalit by saying Markov triple is lower Ricci curvature bound if the entropy is kappa convex along this discrete transport geodesics. Okay. Just a bit. And now one can try to do a similar approach to give a definition of super Ricci flows by looking at the dynamic version of this. So we could say that independent Markov triple is a super Ricci flow if the entropy is dynamically convex. So if at each... So now we have a time dependent discrete space where the transition rates depend on time. And we say we call it a super Ricci flow if along each discrete transport geodesic is a dynamic convexity property. So it's easy to come up with the first example of this solution which are of the soliton type. So we look at a discrete space which has a curvature bound in the previous sense. And then we can come up with a super Ricci flow by scaling the transition rates in a suitable way. In particular if we take... If the space is positively curved then the transition rates will explode to infinity in 5th and 6th. You think of a graph where the transition rates become larger and larger. This is the kind of way of thinking of that as a trans-moder. So this graph shrinks to a point in 5th time. Also in this discrete setup we naturally see this singular behavior. So the question is how can we deal with this concept of flow across this singular time. And the idea is to use the heat flow for this. So our goal is to study the heat equation that is naturally given on the graph. So we look at the rate of change of a factor given by the discrete Laplacian operator which depends on time. And similarly we can look at the adjoint heat equation acting on measures which will run in the opposite time. And the challenge is to study this on discrete spaces where also the base space will be penultimate. In particular we would like to allow for call-ups and for spawning of vertices. So what do I mean by this? So here's a simple example. Say you have a sub-finite set with a Markov chain given. And then say on this group of three vertices the transition rate explodes at a finite time. Which we think of this group shrinking to one point. 
So this happens at this singular time T i here. And then we also would like to include the opposite effect where at some singular time one vertex splits into a group of vertices. Meaning that if we look at the transition rates going backward in time then they will again explode. Okay, so let me try to make the setting more precise that we would like to consider here. Now we have a finite space Markov chain depending on time. We also let the state space depend on time and we make the following assumption. That basically we have a partition of times T naught to a T n which will be the singular times where some singular behavior is securing and in between these times the state space is fixed and we have a nice control of the transition rates. So to be more precise we have this sequence of times at we assume that X T at this time T i is some space X i bar. And on the intervals between these times the state space is fixed and let's call it X i. And this collapse and spawning of points we encode by maps C and S. Which will just say which group of vertices collapses to which point and which vertex spawns which group of points. Okay, then I said we want to have a nice control in between these singular times. So we assume that the invariant measure of this Markov chain is Lipschitz in time and that it has a limit not equal to 0 or 1. So we want to have a strictly positive invariant measure all the time. For the transition rates we assume that the logarithm of the transition rate is at least locally Lipschitz on these intervals but might explode at the end points. So more precisely we assume that the limit of this transition rate exists in 0 and one sort of transition rate can go to 0 but it can also explode to infinity. And if in case it explodes to infinity we assume moreover that the integral of this transition rate is also infinite. The idea is that if you want to think of these points as collapsing then we should then the Markov chain by approaching this time should jump infinitely often between these points. And for this we need that the integral of the rate is infinite because this will give us the average number of jumps up to this time. Then as I said this map C encodes the collapse of points sorry, two points but at the same image only if they're connected by a path of exploding rates. And we have some natural compatibility conditions that the transition rates at a singular time they are given in a natural way is the limit of the transition rates at earlier times. But somehow one has to decide if this group of vertices collapses to this point how do you determine the new transition rate from the collapsed point to its neighbor. And the natural condition is to assume that the weight of this edge, the symmetric weight is just the sum of the weights between all the points that correspond between all the points that have collapsed. And I've just described the collapse of points but we have the analog assumptions in the opposite time to a projection for the spawning. Okay then we have the following result that we can in this setting obtain a unique solution to the equation and to the adjoining equation. So to this purpose we define the space type which is just the collection of these of the points r and x where r is some time in S and T and x is a point in the corresponding space x, r. 
And then for every initial time S and every initial datum specified on this space at this time we can find a solution to the heat equation that is on these open intervals we satisfy the heat equation in a sense and we have the initial condition. And across these single atoms we have a compatibility. We have some kind of boundary compatibility condition which is that if we look at the point z then the value of the solution is given by the limit of the solution at each point which is mapped to this point. So if this z is the result of a collapse then the value of the solution there should agree with the limit of the solution on the points which have collapsed to this point. And similarly for spawning. Okay, we have the exact analog for the heat equation on measures running backwards in time. We have some time T, a measure on the space xT and then we solve the adjoin T equation backward in time with this initial datum. And there is a similar compatibility condition that the measure on a collapse point is given just by the sum of the limits of the measure on the points that have collapsed. The nice feature about this is that we have a natural adjoin-ness condition. If we integrate the solution to the heat equation against the measure it's the same as if we integrate a dysfunction against the solution to the adjoin-t equation even across these singular times. Okay, let me briefly award how this works. So it's on these open intervals there is no problem in solving this heat equation. The heat equation is just an ODE with coefficients which depend on the local illiterates' manner of time. The trouble is in ensuring that we have a reasonable limit at these singular times and that we also can, this is ensured by the assumption that the rates explode sufficiently fast. So this will mean that if we approach a collapse then the solution will already have equilibrated on the vertices that collapse so that we have a well-defined limit. And then the other difficulty is to start the solution again from this singular time. And then there one has to play the equation and the adjoin-t equation in a clever way against each other. Okay, so now we have this heat equation in our disposal. I need to introduce two more objects before I can give you our main result about super-reduce flows. And these are natural discrete analogs of integrated gamma operators. So we look at the integrated gamma operator depending on the measure and the function. It's given kind of as the squared gradient of a function but we integrate not against the measure but we make a pure logarithmic mean again. And similarly there is an analog of the gamma-2 operator. I don't want to, you don't need to read these formulas to precisely, they're quite complicated. Just keep in mind that they should be thought of as analogs from the continuous world of just the gamma operators integrated against this measure. And that these objects are the right ones so that they play the same role as the other objects playing the continuous world. Namely, the transport distance can be written in a smooth and a move-in-y fashion as an infimum over an action which features this gamma operator. And if you look at the second derivative of the entropy of the optimal curve this way it will make appear the gamma-2 operator. Okay, so then I can give you the main result which is a statement that four different quantities are precisely the analogs of these four different robust characterizations of super-rich flows that I presented at the beginning. 
The first one is a time-dependent bottleneck inequality which says that this integrated gamma-2 operator can be compared to the time-dependent of this integrated gamma operator. The second one is a gradient estimate where we look at how this gamma operator can be estimated on a solution to the heat equation as compared to the gamma operator of the same function but the edge-on-heat equation acts on the measure. The third one is a transport estimate. We look at this discrete transportation distance and see that it is increasing along the edge-on-heat equation. And the last one is the dynamic convexity property that we already saw as a first equation. What is the nice feature about the equivalence of these things? So we already had this last property as a definition but this is kind of only a condition which makes a statement for almost every time, and in particular not for the singular times. If you somehow need, if you wanted to have the singular evolution, you need some way of saying that the flow also is a supervegetable cross-lead singular times. And this is the nice feature of this gradient estimate which is really a global and time property. For any two times S and T, we have this estimate no matter whether in between S and T there are singular events or not. Okay, so let me give you very briefly two examples of such singular behavior. One is a collapse phenomenon where you can look at, for example, a product mark of chain consisting of two factors. One is a non-negatively curved factor and the other one a positively curved factor. On the non-negatively curved factor, we do nothing. We keep it constant in time. The positive curve factor, as in the previous example, we shrink it to zero. And then what we see is that we have this product space here, it's just a product of two triangular graphs. And then at a singular time, one over two kappa, one factor will have collapsed to zero and we're just left with one triangular graph. Okay, we can also have the opposite phenomenon saying that we're at some singular point, each point of the graph explodes into some negatively curved object. Okay, let me briefly conclude on the next slide. So we've seen that we've studied the existence and uniqueness of the heat equation on discrete spaces with time-dependent rates, where also the state space might depend on time. We have collapsing and then spawning phenomena. This was the crucial tool in giving equivalence of four notions of super-richy flow in this discrete setting. And another nice feature which I didn't mention so far is that this notion is also consistent with continuous super-richy flows in the sense of a storm that I mentioned in the kind of discrete to continue. Okay, next are future questions, this how one can somehow find minimal solutions to this super-richy flow equation and whether one can construct such solutions starting from a given date. Okay, with this. I would like to thank you. Thank you. Thank you for your very nice talk. Mark, you have time for questions. What is the ideal model of how the pressure has also been measured? What about whether there is such a time-dependent pressure to make the evolution of volume in all of those? No, we haven't looked at this kind of example yet, but this would be very nice to see. Also, what would be very nice to see whether this evolution can be used in some sense to smoothen a given complicated route to a simple objective. Concerning this explosion of graphs, we have sort of applications in mind where such situations could arise. 
So this explosion is certainly not a minimal behavior. So this really exploits this slack in having the inequality in comparison to the super-richy flow. So for a minimal evolution, what one would expect of the graph just stays the same and does not explode into a higher dimension. So I think that this study of heating reasons on such objects could be of independent interest in your era. I would be curious to see whether this can be helpful in other applications. Yes. I would like to see if there is any connection with discretization of certain degrees. So if I think about the graph and the procedure that you were responding to in final measures, can you see a convergence of the corresponding market chain to the interconnection on the domain? Yeah, in principle, yes. So this is the part that I skipped for time reasons. So this formulation is very robust in the sense that the only thing that you need to control in order to pass to the limit is the convergence of the entropy and the convergence of these transportation distances. Then the task would be basically if you have a discretization of some space to check that the transport distance is converged. This is not completely clarified yet. I would think we saw the result of Yan yesterday for final volume methods or the results of Trino's for point clouds. But here it would be interesting to have such a convergence result of the transportation distances also for curved spaces. But in principle, the formulation is robust and it adapts well to these limits. I was wondering about relationship with the monotensity formula. Is this by Troman? How is that equivalent to monotensity? The one that seems by Troman and then showing the two. Okay. Do we need to do one of these? So we couldn't explore so far yet but it would be very interesting to see where there are discrete analogs of these monotone quantities that Parabend uses like this W entropy and reduced volume. Yes. Is there a definition of leach flow for macrochains and related to the two leach and super leach? So in the continuous case there is approaches trying to characterize the opposite inequality in terms of say local concavity of the entropy. It's not so clear yet how to make sense of this in a discrete setup. Some characterization of minimality would be very desirable. Okay. Thank you very much again.
I will present a discrete notion of super Ricci flow that applies to time dependent Markov chains or weighted graphs. This notion can be characterized equivalently in terms of a discrete time-dependent Bochner inequality, gradient estimates for the heat propagator on the evolving graph, contraction estimates in discrete transport distances, or dynamic convexity of the entropy. I will also discuss several examples.
10.5446/59188 (DOI)
This application of a well-known techniques in well-known technique in probability to some problems in PD. Okay, so this This technique is known as we serve different names. So it's known for example common name is a Harris theorem, but many people know it as a coupling method or this is a simpler case known as the Dublin method, okay, the Dublin theorem and So it's a collaboration with several people So on the on this new proof, which is more PD based of this theorem. It's in collaboration with Stefan Michel from Parado Fin and regarding some application. So it's in collaboration with the two cheats out from Parado Fin Jo Evans, who's also here and have a Yolda from Granada. So let me just give some some Overall idea of the aim of this. Okay, so the main aim is to apply to some linear models in in PD theory. So the The gap, let's say, the part where it hasn't really been applied to in my I know is for models that involve no local terms. So for example for the Boltzmann equation for some PDs in mathematical biologists that has some kind of integral differential form for no local PD that involves some kind of non-local diffusion or fractional diffusion. Okay, and for kinetic for all these kinds of PD Which may have a genetic version. Okay, so we can easily meaning that we have the term V grad X So all these problems are linear. So this technique in principle applies only to linear problems but so linear even mass preserving and positivity preserving but there are some arguments for which you can extend it by no linear equations mainly with the non-linear and not to say about the equations, but some similar techniques. So it's an interesting question whether you can extend this to other things. Okay, so this is just an idea of how this, where this fits in a overall. So for a general Markov process, there are many ways to improve the equilibrium and with a rate and a financial rate. So this is one of the main problems which everybody here is familiar with and one very well-known technique is to use entropy methods. Okay, so this is one of the main topics of this conference and it's a very wide topic that is usually the main idea is that it's based on estimating the rate of decrease of some quantity. Okay, so we consider some kind of entropy. We take the time derivative, we look at the dissipation or production of this function and we try to relate them in some way. Okay, so this is very wide as a whole theory of a backfiem-Mellin and you have extensions to this for non-linear equations like the post-medium equation. You have an extension to hypercoercive equations for which this method doesn't work directly, but then you can change the function and you look at it. You somehow change it or rotate it in some way and you obtain something similar. So this is a theory developed by David Tanvilani and several many other people. So this, for example, we saw this technique in the Instagram talk, so Ahnogeian was talking about this, Joe Evans was also mentioning a different strategy. Okay, so then there's a non-constructive method, which has mainly been used in the kinetic theory and is known as a vile method. So this is something that if you have some decay, some function or some entropy, that is not explicit. Sometimes you can prove exponential convergence by some compactness arguments. 
Okay, so I mentioned this because it's it's also quite well used in kinetic theory and there's these results, these two-cent results by Weldani Michelin-Lamor, where if you have some estimate in some space, you have some exponential decay in some space, you can extend it to other spaces. Okay, so this is important, I want to mention this because for the results I'm going to talk about, the exponential convergence you get is usually directly in some L1 space. Okay, so some effort is needed here to extend exponential convergence in some L2 space to some L1 space with a weight. And then there's this method, okay, which is called Derbly-Noheris or main 2D or Cauchy method, that is different from this in that it doesn't look at the time derivative or something at each time. So what it does is it looks at the evolution after a certain time t and then you look at some properties of the t-time evolution operator. Okay, so it's very, so it's different, so it applies with, I think with a lot of it's very easy to relate to some problems for which these approaches are a bit harder, but on the other hand, as far as I know, you can only apply to linear problems. So linear, positively conserving and mass-preserving problems. Okay, so then what we'll do is I will give you some short context of the method, then I will give a short proof, which is so the proof of these methods is well known, but it's usually very probabilistic, so we'll use some kind of a more PD based proof. So it seems that the main, where these results come from, it's results by Harris and Main and 2D. Okay, so they are papers by Main and 2D and then there's a book, which is a very well known book in probability that explains this in very different versions. Okay, so this is not only one result, there's a whole family of results like this. And the way I came to know about this result is that there's a paper by Hiram Attingly in 2011 where they gave a proof that's very different in the conceptually different. So what they do is something that looks a bit like this hyper-corrosivity ideas, right? So what they do is you look not at the total variation distance or a one distance between one solution and another, or evolution of a density or another, but you look at a modification of this. Okay, so you take a weighted total variation distance and then you change a bit the coefficient that gives the weights so that you can get a decrease of that in time. So it's the way it's stated, it's using a mass transfer distance. So it's a way of rewriting the weighted L1 norm, weighted total variation norm between two things as a transfer distance. So the proof of what they mentioned is certainly most similar to this and all the others. Okay, so that's the most similar one. Then there are sub-exponential versions, which whose proof is harder and which are, so this only works for the exponential case and it's not clear how to extend it to this sub-exponential version. So this is an exponential version, it's mainly in this paper by Duke for Hamillin, but you can also find some explanation in notes by Haier. So there are some election notes by Haier that gives a slightly simplified version. So I mentioned this usual by Pierre Haier because it's relevant for some models I want to mention later. So this is an application to the renewal equation biology, which is an un-local model. So it's something that has a strong non-locality and so it's an easy application of this method, but I think it's new, okay, so it was done only recently. 
Then, in the same spirit, there is the paper by Dumont and Gabriel for integrate-and-fire neuron models, and we, myself and Havva Yoldaş, have the same kind of result for a different neuron model, the elapsed-time model of Pakdaman, Perthame and Salort, where you can do the same kind of thing. There are also results using this method for the Fokker-Planck and kinetic Fokker-Planck equations; these are some references, and this paper is also recent and gives constructive estimates. There is also a very recent preprint by Jo Evans trying to do something similar; I should disclose that it is work in preparation. And there is some effort to extend this to non-conservative cases, meaning equations that do not preserve the mass. So, I will give some very simple statements of the theorem and then some examples and a proof. Here is the main idea in its simplest form, the case known as Doeblin's theorem. What you need is a Markov semigroup, and when I write Markov semigroup I mean a stochastic semigroup; for me it acts on measures. So S_t mu would be the solution of a PDE that preserves the integral and preserves positivity, and I am interested in whether it goes to equilibrium at an explicit rate. The condition is this: you have to find some time t0 for which you have a nice lower bound on the solution. It means that no matter which initial condition you take, there is some region of your space that you reach with positive probability. Formally it is written like this; the measure nu describes the place you can reach, and in the examples I will give, nu is typically just the characteristic function of some region. The condition says that no matter where you start, no matter what your initial condition is, you reach that region with probability at least alpha. That is the only thing you need: if you have this very strong positivity, saying that no matter where you start, at a given fixed time you can reach the same region, then you already have exponential convergence, a spectral gap in the total variation norm. The norm here, when I do not write anything, is either total variation or L1 if the semigroup is defined on L1, so this is convergence in L1 without any weight. Notice that there is a lambda, the spectral gap, which is completely explicit in terms of alpha, and a constant in front which is larger than 1, so I do not know whether the distance goes down fast right away. The total variation, or L1 distance, is always non-increasing, because that happens for any Markov process, but I do not know that it really decreases; after the time t0, however, it has to. Now I will give you the proof, just because it is so beautiful: it is very short and fits on one slide.
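Written out, the hypothesis and conclusion just described read as follows (a reconstruction from the spoken description, with the convention \(\|\mu\|_{TV}=\int d|\mu|\); the slide itself is not in the transcript):
\[
\exists\, t_0>0,\ \alpha\in(0,1),\ \nu \text{ a probability measure}:\quad
S_{t_0}\mu \ \ge\ \alpha\,\nu \int d\mu \quad \text{for every probability measure } \mu,
\]
\[
\Longrightarrow\quad
\|S_t(\mu_1-\mu_2)\|_{TV}\ \le\ \frac{1}{1-\alpha}\, e^{-\lambda t}\, \|\mu_1-\mu_2\|_{TV},
\qquad \lambda = \frac{-\log(1-\alpha)}{t_0}.
\]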
The way you do it is this. You take two solutions, u1 and u2, and consider their difference; here S is S_{t0}, the evolution at the fixed time t0. What you do is subtract the common part alpha nu from the two evolved measures, and you use the fact that each of these is nonnegative, because after time t0 you are always above alpha nu. Since they are nonnegative, I can just write the integral; it is explicit, because we are dealing with probabilities all the time, and what I get in the end is 2(1 - alpha), assuming the two initial conditions have disjoint supports. There is a very graphical way of seeing this if you think of it as the evolution of a PDE: you have u1 here and u2 there; at time t0 each has spread a bit, and this region here is nu, so each of them lies above alpha nu. So you start with two things with disjoint supports, and after some time they must share some common part, some mass that both of them cover. Initially the total variation of the difference is 2, and afterwards it is smaller by exactly that proportion. And since the problem is linear, it is enough to prove this for two initial conditions with disjoint supports: for any two initial conditions I separate off the common part and apply the argument to the mutually singular parts. That is the shortest proof I know. There is a little work left to do: we have proved a contraction at time t0, and we need exponential decrease for all times, but that is not hard. At integer multiples of t0 I iterate the semigroup and get a decay; then I just take the exponential envelope of that staircase, and the gap between the staircase and the envelope is exactly where the constant larger than 1 comes from. If I can estimate the semigroup at integer multiples of t0, I can do it in general. Now, this proof is not really a PDE proof; it is the same as the probabilistic proof, just written differently. The way this is usually seen in probability, as was explained to me, is as a coupling: you run two copies of the process, and whenever they hit the same region you make them stay together; the marginal distributions stay the same, and you can prove some approach to a common distribution. So the question was whether one can run this kind of argument also for the harder version of the theorem, the Harris version. The condition we had before, that no matter where you start you reach a fixed region with fixed probability at a fixed time, is too strong: simple processes like the Fokker-Planck equation on the whole space do not satisfy it, because if you start very far away it takes a long time to reach the origin with a given probability. So you need to replace the condition by something else, and the replacement is this: a local Doeblin condition.
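As a quick numerical illustration of the contraction just proved (a toy discrete-time example invented for this write-up, not one of the PDE models of the talk; it only needs numpy): for a finite Markov chain whose one-step kernel satisfies a Doeblin minorization P(x,.) >= alpha * nu(.), the total variation distance between any two evolved distributions shrinks by at least a factor (1 - alpha) per step.

import numpy as np

# Toy 3-state Markov chain; rows are P(x, .), and each row sums to 1.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.2, 0.5]])

# Doeblin minorization P(x, y) >= alpha * nu(y), with nu proportional to the
# columnwise minima of P.  For this particular P, alpha = 0.6.
alpha = P.min(axis=0).sum()

def tv(p, q):
    # Total variation with the talk's convention: distance 2 for disjoint supports.
    return np.abs(p - q).sum()

p = np.array([1.0, 0.0, 0.0])   # two mutually singular initial distributions
q = np.array([0.0, 0.0, 1.0])
d0 = tv(p, q)
for n in range(6):
    print(n, tv(p, q), (1 - alpha) ** n * d0)   # observed distance vs. Doeblin bound
    p, q = p @ P, q @ P                          # one step of the chain for each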
It is the same as before, but you are only required to prove it for initial conditions that do not start too far from the region you have chosen, the region you want to reach, and then you have to compensate with something, and you compensate with a Lyapunov condition. This is almost the same condition that appeared in an earlier talk; some people call it the Lyapunov condition, or the Foster-Lyapunov condition, although it has little to do with the Lyapunov functions that are common in dynamical systems, so maybe I should call it Foster-Lyapunov or something like that. I write it this way because it is more appealing to the PDE way of thinking. It says the following: you look at the moment associated to some weight V; typically I think of V as something like |v|^2, or |x|^2 + |v|^2 in a kinetic setting, something that grows at infinity. This is the general way of stating it, because on a general Markov chain there is no "infinity", but at least you need the region you are considering to be a sub-level set of V, and in all the models we consider V does go to infinity. The condition says that the V-moment at time t0 would decrease exponentially if it were not for some multiple of the mass; that is one way to look at it. The way it is usually written in probability is in terms of the dual operator. One way of proving it, for a continuous-time semigroup, is this: if you have a differential inequality for the V-moment, you integrate it up to time t0 and you obtain the condition, with the constants worked out. It is written at the level of the fixed time t0 because the statement then works for any Markov chain: you do not need continuous time; a discrete-time Markov chain works as well, since S_{t0} is just the evolution up to a certain point, perhaps a certain number of iterations of the chain. But if you want to think about continuous-time semigroups, the differential version is an almost equivalent condition. So what is the conclusion? You get the same exponential convergence, but not in the plain L1 or total variation norm: you get it in a weighted norm, a weighted L1 or weighted total variation norm, and again you can decide whether you want to work with L1 functions or with measures. This is the beta-norm, exactly the same norm used in the paper by Hairer and Mattingly, and you get exactly the same exponential convergence. Since these norms are equivalent for any value of beta, you can just take beta equal to 1 if you like and forget about beta; it only appears in the proof.
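For concreteness, here is one standard way of writing the two hypotheses and the Gronwall step just described (a reconstruction consistent with the talk; a, K, R, alpha are generic constants and nu a reference probability measure, not necessarily those on the speaker's slides):
\[
\text{(Lyapunov)}\quad
\frac{d}{dt}\int V\, d S_t\mu \ \le\ -a\int V\, dS_t\mu + K\int d\mu
\ \Longrightarrow\
\int V\, dS_{t_0}\mu \ \le\ e^{-a t_0}\int V\, d\mu + \frac{K}{a}\big(1-e^{-a t_0}\big)\int d\mu,
\]
\[
\text{(local Doeblin)}\quad
S_{t_0}\mu \ \ge\ \alpha\, \nu \int d\mu
\quad \text{for all probability measures } \mu \text{ supported in } \{V\le R\},
\]
and, provided the sub-level set \(\{V\le R\}\) is large enough relative to K and a, the conclusion is exponential decay of \(\|S_t(\mu_1-\mu_2)\|_\beta\) for a suitable \(\beta>0\), with \(\|\mu\|_\beta=\int(1+\beta V)\,d|\mu|\) as above.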
So what you get is a spectral gap, but with a weight. Sometimes there are semigroups for which you cannot lower the weight below a certain power: semigroups that have a spectral gap with the weight x^2 but not with the weight x, for example. The proof of this can be carried out with a method similar to the one I wrote before, which looks very much like a PDE proof; the probabilistic proof is a bit harder, but essentially it also follows a coupling argument, where you use the Lyapunov condition to show that it does not take too long for the two copies to meet. So what I will do now is give an example, which is very easy but new, and then a sketch of the proof. I think I have until quarter past, right? Yes, ten past, something like that. Okay, so this is the example I wanted to comment on. It is ongoing work with Chuqi Cao, Havva Yoldaş and Josephine Evans, and we consider the linear kinetic BGK model. Here f is a function of three variables, time, space x and velocity v; you have the transport term in space, and in velocity you have this operator, the same one as in Anton Arnold's talk: the simplest relaxation to the Maxwellian you can think of. If you look at it probabilistically, there is some positive probability of jumping directly to the Maxwellian distribution. The only difficulty of this model is that it is hypocoercive in the strict sense, that is, it is not coercive: if you try to apply entropy methods directly, in the naive way, you cannot get any inequality, because of the non-exponential initial behaviour, those plateaus that were plotted in Anton Arnold's talk. So this model can be studied by hypocoercivity techniques, changing the norm, and it works; this is just an example of how to do it our way. The idea is that you need to prove the two hypotheses I mentioned before, and if I can prove the first one starting from anywhere, I can just use the Doeblin version; I do not need the Harris version. That is what happens on the torus, which is the case we consider first, so I only need to prove the lower bound. The lower bound is often proved as follows: from a stochastic point of view, you follow one path and estimate that there is some probability of arriving in a certain region; often people do it with a compactness argument, saying that since you are in a compact set you arrive at some point, which gives no estimate, but it is not too hard to get one. Here is one way to do it. I rewrite the equation using the following splitting: I take the transport part together with the minus f term, which I can solve explicitly, and I put the harder part, the gain term L plus, on the right-hand side as a perturbation, and I apply the Duhamel formula. Applying the Duhamel formula twice gives this expression, and the exponential factor is there because of the minus f term.
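For reference, the model and the splitting being described are, in notation introduced here for the write-up (a reconstruction; the slides are not in the transcript; M is the Maxwellian and T_t the free-transport semigroup, (T_t g)(x,v) = g(x - vt, v)):
\[
\partial_t f + v\cdot\nabla_x f \;=\; L^+ f - f, \qquad
(L^+ f)(x,v) = M(v)\,\rho_f(x), \quad \rho_f(x)=\int f(x,v')\, dv',
\]
and Duhamel's formula for this splitting reads
\[
f_t \;=\; e^{-t}\, T_t f_0 \;+\; \int_0^t e^{-(t-s)}\, T_{t-s}\, L^+ f_s\, ds .
\]
Since every term is nonnegative, iterating once and dropping the terms one does not want yields lower bounds such as
\[
f_t \;\ge\; e^{-t}\int_0^t\!\!\int_0^s T_{t-s}\, L^+\, T_{s-r}\, L^+\, T_r f_0\, dr\, ds,
\]
which contains the "jump in velocity, transport, jump again" structure used below.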
So there is some decay coming from the minus f, then there is the transport semigroup, which is explicit, and to get the perturbation term you apply transport, then the gain part, then transport again. This is a direct application of the Duhamel formula, twice, and I simply neglected the first term, the one that directly evolves the initial condition. It is a very standard formula, commonly used for the linear Boltzmann equation, and even in the nonlinear setting. Now, if you can estimate this, you can estimate where the solution is positive, and there is a graphical way of doing it. One property that helps us is the following: at any point (x, v), the velocity scattering operator takes us to any velocity in a given range. The target region I will consider is the set of x in some ball, positions that are not too large, together with the set of v in some ball. The set of velocities I can reach easily. The picture is this: say I want to reach this region in space, with a velocity that is not too large; that is my aim in proving the Doeblin condition. I start somewhere with some velocity, and I have to show I can reach the target. First, when you jump in velocity, you can always reach small velocities with a fixed probability. This is true because the jumps are uniform: they do not depend on where you are; you jump to the Maxwellian distribution. So, written out: starting at x, without moving in x, I can reach any velocity below a given value, and I can reach all of them with a constant that is uniform, because the jump distribution is the Maxwellian; at least I can reach some bounded set of velocities. Now, how do I get to the small-x ball? That is a different estimate, which you can interpret this way: if you start at a point x and you have the possibility of taking any velocity in a bounded set, then there is a time at which you reach the small-x region. The reason is simply that if I can choose any velocity in some ball, I choose the one pointing at the target, and t0 is how long it takes me to reach that ball. So the point is: if you start at x with the choice of a velocity in a certain bounded set, then you can reach anywhere in the target region in x, with some velocity. I do not know a priori with which velocity I arrive; actually here I do, because it is the same one I started with, but in any case I arrive with some bounded velocity. Now, iterating these two facts inside the Duhamel formula I wrote, by a purely algebraic procedure, gives you the lower bound. The intuitive idea goes like this: I start here; maybe I am transported a little, but I am able to jump to a good velocity, so I jump to it; with that velocity I am going to arrive in the target region; when I arrive, maybe I do not like my velocity, it is too large and I have to make it small, but then I use the jump again.
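The "jumps are uniform" point can be written as a one-line minorization (again in the notation introduced above; delta is a generic radius):
\[
(L^+ f)(x,v) \;=\; M(v)\,\rho_f(x) \;\ge\; \Big(\min_{|v'|\le\delta} M(v')\Big)\, \mathbf 1_{\{|v|\le\delta\}}\,\rho_f(x),
\]
so one application of the gain term places a fixed fraction of the local mass onto the ball \(\{|v|\le\delta\}\), with a constant independent of the starting velocity; combined with free transport, which moves positions along x + vt without changing v, this is exactly the two-step geometric argument described above.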
So I use the first estimate, then the second, then the first again: one jump in velocity, one transport step, and one jump again, and I can reach the region I want. Here everything is uniform because we are on the torus, and on a torus you are never too far away. So you can prove this, and you get this kind of result. And you do get a quantitative constant, yes. Someone notes that this exact argument was used in a thesis, with the Doeblin condition and everything, except that compactness was used at the last stage, so the constant there was not quantitative; it may have been the first use of the Doeblin condition in this context. Okay, that is a nice reference, thanks. Always with a non-constructive constant? Yes, that is common in those results, but everything here is constructive. One can complain that the constants are bad, but they are constructive, and you get some dependence on the parameters. So that is the result on the torus. Now, the confined case works in almost the same way. One advantage is that I do not care too much about the shape of the potential Phi: what I need is essentially that it is super-quadratic in the radial direction, and a power; I do not want exponential confinement for the moment. Now I have to use Harris, because I am in the whole space. By the way, there is a mistake on this slide: this is not the torus, it should be R^d times R^d. So I can now be too far away in space: I have replaced the torus by a confining potential. The collision operator is the same, and I still have the same Duhamel expression, and I can do the same first step: I can jump to small velocities exactly as before, because the velocity operator did not change. Jumping to small positions is a bit harder now, because the potential deflects the trajectories: when I want to reach some region near the origin, I do not know exactly where to point. But then there is the idea we developed, which is that if I start with a very large velocity, the potential does not matter too much: I have a lot of inertia, so I go almost in a straight line with the same velocity. So the strategy is: you start at a given point x, not too far from the origin, you point towards the origin and you go really fast; if you do that, you reach the origin, because the potential makes little difference, and there is a condition that lets you quantify that it makes little difference. So you get the same kind of result as before. The main difference is that I cannot start from an arbitrary x0: I have to start with x0 in some ball, where the potential is not too large, so I am restricted to some initial conditions; if I start too far away, it takes me a long time to reach the origin. Since we do not have the bound uniformly in x0, we need the Harris version, and here is a Foster-Lyapunov function you can use; this works if Phi is super-quadratic.
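As an illustration of what such a Foster-Lyapunov function can look like for a confined kinetic equation (a generic sketch with a standard cross-term ansatz, assuming unit temperature in dimension d; the precise functional and constants used in the actual work may differ), one can try
\[
V(x,v) = 1 + \tfrac12 |v|^2 + \Phi(x) + a\, x\cdot v, \qquad 0 < a \ll 1 .
\]
Acting with the dual generator of \(\partial_t f + v\cdot\nabla_x f - \nabla\Phi\cdot\nabla_v f = M\rho_f - f\) gives
\[
\mathcal L^* V \;=\; a|v|^2 - a\, x\cdot\nabla\Phi(x) + \tfrac d2 - \tfrac12 |v|^2 - a\, x\cdot v,
\]
and if \(\Phi\) grows fast enough at infinity, so that \(x\cdot\nabla\Phi\) dominates both \(|x|^2\) and \(\Phi\), this is bounded above by \(-\lambda V + K\) for suitable \(\lambda, K>0\), which is the differential form of the Lyapunov condition written earlier.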
And there is some hypocoercivity flavour here, because the coefficients of this functional are chosen after the usual "ABC" computation: you write a general combination and see which coefficients work. And you get this: the star norm is the same one I wrote before, so you get convergence in, say, a quadratically weighted space, or with a weight adapted to the potential; if Phi is of fourth order, then the weight is of fourth order. And the important point is that everything is constructive. Now, in the ten minutes I have left, I want to show you that there is a simple proof along the same PDE lines we used at the beginning. I like the Doeblin proof a lot, so I want to reproduce it, at whatever price, in this Lyapunov version. So I consider the weighted norm as I wrote before, and we will show the following, where mu is now the difference of two initial conditions, so mu has integral zero; I write it this way only because it is shorter, but mu is always mu1 minus mu2. If I prove that at a certain time t0 the norm is contracted, then I follow the same argument I drew before: I have it at integer multiples of that time, and then I take the exponential envelope. So this is enough, and it is exactly the estimate that is shown in Hairer and Mattingly. We do it as follows. The first case, the problematic one, is when the V-moment is too large, meaning that the mass of the difference of the two solutions is far away, in some sense. Let us measure that in this exact way; the threshold here is your choice. So the condition is that the V-moment is larger than a constant times the mass: most of the difference is far away. Then what happens? If I look at the Lyapunov condition, the part that dominates is the exponentially decreasing part: after time t0 I get this, which is just the Lyapunov condition, and the mass term is not too large, so I bound it by a piece of the moment, it cancels, and I am left with this. So under this condition on the moment, after time t0 the V-moment has decreased by some factor. Next observation, and this calculation is also very similar to one in Hairer and Mattingly: if you contract in the V-norm, you are also contractive in the beta-norm, for any beta. The reason is that the beta-norm is a weighted average of the V-norm and the total variation norm: the total variation norm is non-increasing, and the V-norm decreases, so the average decreases by some factor, depending on beta, but for every beta. So in this case, after time t0, the weighted norm has decreased.
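Here is the first case written out as a two-line computation (a reconstruction with generic constants: gamma and K are the constants in the time-t0 Lyapunov inequality, A is the threshold in the case distinction, and one uses that the total variation mass is non-increasing, \(\int d|S_{t_0}\mu|\le\int d|\mu|\)):
\[
\int V\, d|S_{t_0}\mu| \;\le\; \gamma\int V\, d|\mu| + K\int d|\mu|
\;\le\; \Big(\gamma + \tfrac{K}{A}\Big)\int V\, d|\mu| \;=:\; \gamma_1 \int V\, d|\mu|
\qquad \text{when } \int V\, d|\mu| \ \ge\ A\int d|\mu|,
\]
\[
\|S_{t_0}\mu\|_\beta \;\le\; \int d|\mu| + \beta\,\gamma_1\int V\, d|\mu|
\;\le\; \frac{1+\beta\gamma_1 A}{1+\beta A}\, \|\mu\|_\beta ,
\]
and the last factor is strictly below 1 as soon as \(\gamma_1<1\), i.e. as soon as \(A > K/(1-\gamma)\).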
Now, what happens if I do not have this condition but the complementary one? If the V-moment is smaller than the constant times the mass, then the mass is not too far away: it has to be close to the interesting region, the region where I have the local Doeblin condition, not too far from the origin, say, if you think of the Fokker-Planck problem. So in that case almost all the mass is in that region, and I can almost apply the same Doeblin argument as before. The way you do it is this. A consequence of the condition, and it is not hard, is that both the positive part and the negative part of the difference mu have at least a fixed fraction of their mass in the good region; this is also one reason a factor of two appears later, because I have to treat each of them separately. So I carry out the same Doeblin argument for each of them. I cannot say that S mu-plus is bounded below by alpha nu times the full mass of mu-plus, because mu-plus is not entirely supported in the good region, but the missing part is not too large: what I can say is that S mu-plus is at least alpha nu times the amount of mass of mu-plus that started inside the region C. The part that started outside C, I do not know what it does, but the part that started inside C has to go to the right place. So I get almost what I want, up to a factor of one minus something small: I am just modifying my constant alpha, getting a worse constant alpha-prime, but it is still a constant. You do the same for the negative part, and you get exactly the same contraction in total variation with this modified constant. And this is essentially the end of the programme: once I have the contraction in total variation in this case, I use the Lyapunov condition once more to get contraction in the beta-norm, because the beta-norm contracts up to a term involving the mass. This is where the choice of beta enters: I have to choose beta small enough that this extra term does not spoil the positivity of alpha-prime; that is the only place where beta matters. Here you can see the parallel with the proof of Hairer and Mattingly, which is written with mass-transport distances. You can follow all the constants explicitly; there is no very pleasant way of writing them, but you can track them in every case, and the conditions on the coefficients for this to work are the same as in the Hairer-Mattingly theorem. The last thing I will mention is one more advantage: with the same kind of argument one can prove the non-exponential version, the one from the paper by Douc, Fort and Guillin. It is almost the same, with the difference that here the function phi is concave; also, L is now the generator, so I am really looking at the time derivative of the V-moment, which is different. I do not get the estimate I had before: if I look at the time derivative of the second moment, I do not get the second moment back, I get something like minus the first moment, so you lose a little bit there, and that is why a concave function appears. The conclusion is that you get some decay which is not exponential, and this is not a spectral gap result, because the norm on one side and the norm on the other are different: it is not a spectral gap.
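For orientation, the sub-exponential setting just mentioned replaces the linear Lyapunov inequality by a concave one; schematically (a rough reconstruction of the standard form of such results, in the spirit of Douc, Fort and Guillin, with generic constants):
\[
\mathcal L^* V \;\le\; -\varphi(V) + K, \qquad \varphi \text{ concave and increasing},
\]
and, setting \(H_\varphi(u)=\int_1^u \frac{ds}{\varphi(s)}\), the conclusion is a decay of the type
\[
\|S_t(\mu_1-\mu_2)\|_{TV} \;\lesssim\; \frac{\int (1+V)\, d|\mu_1-\mu_2|}{\big(\varphi\circ H_\varphi^{-1}\big)(t)},
\]
with different norms on the two sides, so no spectral gap; the argument sketched in the talk recovers a rate of this type, up to the power one minus epsilon mentioned below.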
In general you will not have a spectral gap with only this condition. You can follow the same argument I described, and you can really see it as a kind of interpolation: here I cannot do exactly what I did before, but I can say that phi of V is bounded below by a small multiple of the linear term plus something of lower order, with an epsilon I am free to choose, and in the end optimizing over epsilon gives an interpolation argument. It is really the same procedure, and I think it is quite a simplification compared with the other proofs I have seen. But then we do not get exactly the same decay: the decay we obtain is this, where phi is the concave function and this H-phi-inverse is the associated quantity, and we only get it to the power one minus epsilon. So you get almost, but not quite, what I believe should be the optimal rate, namely this one. So I will stop here, and I thank you all. Sorry for going over to the end. Thank you. Thank you very much for your talk; we have time for questions. Yes: for the linear BGK model with confinement, I believe you should be able to get rid of the super-quadraticity condition, working with potentials growing even just linearly, by changing the Lyapunov functional to the exponential of roughly the same thing. Okay, that should probably work, yes; it is probably the same as for the Fokker-Planck case. Right. Between the two steps that you are combining, do you see room to optimize in order to get reasonable constants? The thing is, the constant you get at the end depends on how well you have done the lower-bound estimates, because the moment part is essentially fixed: you compute the derivative and you get what you get. The lower bound is where you have the leeway, so that is where the constant comes from and where one would have to improve. Any further questions? Thank you. So, thank you again. Thank you.
We revisit a result in probability known as the Harris theorem and give a simple proof which is well-suited for some applications in PDE. The proof is not far from the ideas of Hairer \& Mattingly (2011) but avoids the use of mass transport metrics and can be readily extended to cases where there is no spectral gap and exponential relaxation to equilibrium does not hold. We will also discuss some contexts where this result can be useful, particularly in a model for neuron populations structured by the elapsed time since the last discharge.
10.5446/58156 (DOI)
Alright, so I'd like to start by thanking the Beilstein-Institut and Carsten for this opportunity to talk to you today, and also to thank them for their support of the scientific and especially the chemical community over these many years. So thank you to the Beilstein. The work I'm going to talk about today, I gave it that very long title, but really "beyond the active site" is one of the ideas I want to get across: how things beyond just the active site, things like the membrane and protein-protein interactions, can lend specificity. All of the work I'm going to describe is in collaboration with my exceptional collaborator, Barbara Imperiali, and her people at MIT. So, glycosylation is a key post-translational modification, and I think we're all very familiar with this in eukaryotes, where N-linked glycosylation specifically is very important in protein folding and trafficking, in cell-cell interactions, and in the host immune response. But also in prokaryotes, although rarer, N-linked glycosylation is really important: it's important for formation of peptidoglycan, and it's important for host colonization, invasion, and adherence by bacteria. The bacteria I'd like to focus on today are the campylobacters, Campylobacter jejuni and Campylobacter concisus. These are common causes of human intestinal tract disease, and they have glycosylation on quite a number of unique proteins. Bacteria with similar glycosylation pathways include H. pylori and N. gonorrhoeae. These are not bacteria that you want living in and on you, and so, as you might imagine, we view the pathway for assembly of these oligosaccharides as a good target for antibacterial therapeutics. In C. jejuni and C. concisus, which we are focusing on here, we have a pathway that is really a membrane-dedicated pathway: both the proteins themselves, the enzymes that enact oligosaccharide synthesis, and the substrates are membrane-embedded. We have these polyprenol substrates that keep the substrate in the membrane during this assembly-line construction of the oligosaccharide. So what does that look like? In C. jejuni and C. concisus we first have a phosphoglycosyl transferase, which starts with a UDP-sugar and puts that phospho-sugar onto the polyprenyl phosphate substrate, which is membrane-embedded. That's the first committed step of the pathway, and it is enacted by the enzyme PGLC; in jejuni and concisus the unusual sugar that is added there is N,N'-diacetylbacillosamine, which I'll call diNAcBac for the rest of my talk. Once that first sugar is put on, again as a phospho-sugar, giving us a Pren-PP-sugar starting material, the next two enzymes in the pathway again use a UDP-activated sugar, but now to put on a sugar, not a phospho-sugar: so we have a phosphoglycosyl transferase and then two glycosyl transferases. The third glycosyl transferase, PGLH, actually works processively and puts on three sugars in a row. And then last we have a sugar-branching enzyme, PGLI, which gives us the final product; that product is then used by a flippase to flip from the cytoplasm into the periplasm, and that's the end of the pathway. Okay, so the first enzyme that I want to focus on is PGLC.
So PGLC, again, does this very first committed step of the pathway, where we take a UDP-sugar and put it onto a polyprenyl phosphate embedded in the membrane to make our Pren-PP-sugar. In this case the enzyme itself is also membrane-embedded. Now, I said these are targets for therapeutics, so one thing to think about is whether there are already therapeutics available. It turns out that there are nucleoside antibiotics, such as tunicamycin, mureidomycin and liposidomycin, and these do indeed target PGTs, phosphoglycosyl transferases. What do they look like? Taking tunicamycin as an example, we have a Pren-P analog, a nucleoside analog, and a sugar analog, so if you look at the reaction that the PGTs enact, you can think of these as a sort of bisubstrate analog. That seems well and fine, except it turns out that these nucleoside antibiotics are not effective against all bacteria, and in particular not against the C. jejuni and C. concisus that we're talking about. So what's going on? Why wouldn't this seemingly really good nucleoside bisubstrate analog work against those enzymes? It turns out that nature has played rather a dirty trick on us: there are two families that enact this same PGT reaction. There are the polytopic PGTs, which are very, very different from the monotopic PGTs; bacteria have either one or the other, or sometimes both superfamilies enacting this reaction, but in different pathways. The polytopic PGTs are monstrous: they have 10 or 11 transmembrane helices, typified by MraY and WecA, and tunicamycin and the other nucleoside antibiotics are excellent against these enzymes, with low nanomolar affinity. But the monotopic PGTs, found in organisms like the campylobacters, have only a single membrane-embedded helix; as I'll discuss later, this is a re-entrant membrane helix that spans down and then comes back out of the membrane, so it anchors the enzyme to only a single leaflet of the membrane. There are three subfamilies in this superfamily, typified by WbaP, PGLB and PGLC, and tunicamycin and the nucleoside antibiotics are terrible inhibitors of these. Why would that be? It turns out the answer is not the scaffold but the mechanism: these enzymes do the same reaction but use very different mechanisms to enact it, and so the nucleoside antibiotics are no good against this second class of enzymes. In beautiful work by Al-Dabbagh et al., it was shown that the polytopic PGTs, in the presence of magnesium, use a ternary-complex mechanism: the polyprenyl phosphate comes together with the UDP-sugar, UMP is kicked out, and you get the final product. Not so for the monotopic PGT superfamily. Here, instead, a nucleophilic aspartate attacks the phosphoryl group and kicks out UMP, again with the assistance of a magnesium cation cofactor; we get a covalent enzyme phospho-sugar intermediate, which is then acted upon by the polyprenyl phosphate substrate coming in and forming the product. This was shown in beautiful work from Barbara Imperiali's lab by Debasis Das, who showed that you can see UMP release in the absence of Pren-P. So this is more like a ping-pong type mechanism, and indeed kinetic analysis showed ping-pong kinetics.
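To summarize the two half-reactions of the ping-pong mechanism just described (a schematic drawn from the description above, with E-Asp denoting the nucleophilic active-site aspartate and Und-P the polyprenyl (undecaprenyl) phosphate):
\[
\text{E-Asp}^- + \text{UDP-diNAcBac} \;\longrightarrow\; \text{E-Asp-P-diNAcBac} + \text{UMP}
\]
\[
\text{E-Asp-P-diNAcBac} + \text{Und-P} \;\longrightarrow\; \text{E-Asp}^- + \text{Und-PP-diNAcBac}
\]
The UMP leaves before the second substrate binds, which is exactly why a bisubstrate-style nucleoside inhibitor is a poor match for this class of enzymes.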
So the UMP can be exchanged: labeled UMP can be exchanged back into UDP-bacillosamine, again in the absence of the second substrate, Pren-P. And if we look at the covalent intermediate, we can reduce it with borohydride and generate homoserine at the active site, so we can actually identify where the active site is on the enzyme. This was all done in Barbara's lab, beautiful mechanistic work, and it shows in a lovely way why the nucleoside antibiotics are no good: they really are bisubstrate analogs, with both the Pren-P and the UDP parts there, but in our ping-pong type mechanism the UMP has already left the building before the polyprenol comes in, so this is not going to be a good inhibitor. In our case we need to make something that looks more like the phospho-sugar intermediate, or perhaps like the UDP-sugar to begin with. Okay, so that solves that mystery. Again, mechanism is incredibly important for drug discovery, and if anybody tells you otherwise, it's just not true. So when we started our work, we decided to dig into the bioinformatics a bit, and I can show you a couple of things. One is, of course, that we were very happy to find that this D-E, or Asp-Glu motif, which I'll call the Asp-Glu dyad, is completely conserved, and that makes sense because the Asp is the nucleophile. I can also show you a couple of very conserved arginines (remember that, there'll be a quiz later) and also this very, very conserved proline, and we'll see how these are part of the mechanism and part of the structure of the enzyme. Okay, so to start our structural work, taking those bioinformatic analyses, we were able to sketch what the structures might look like, in a very schematic view, for the three subfamilies of the superfamily. What you can see is that there is one very minimal subfamily where you have the catalytic domain and then, again, that re-entrant membrane helix that dips down and pops back out of the membrane. We also have subfamilies that are fused to other enzymes in the pathway, either enzymes before or after that enzyme in the biosynthetic pathway, so a big bifunctional family; and then we also have a family where, in addition to the re-entrant membrane helix, other helices are appended to really nail the protein into the membrane. So, being crystallographers and seeing the high sequence identity in the catalytic domain among all members of this family, we decided to go after the most minimal catalytic unit, in the small PGLC-like family. It turns out, luckily, that this crystallized very nicely in the presence of detergent using standard crystallization, and we can see the following features of the structure. One is, again, that re-entrant membrane helix, and the proline that I told you about, which is absolutely conserved, is right here and forms the kink that allows the helix to go down into the membrane and then turn and pop back out again. You can also see that this enzyme seems rather solvent-accessible; I'll come back to that point as well. And the fold itself is typified by this very long beta hairpin associated tightly with an alpha helix. When we used the PPM server to see how it might interact with the membrane, we got this sort of picture: it almost seemed very much like an enzyme boat floating along in a membrane ocean.
And what you can see here is that the delta G of transfer is very, very favorable. There is actually quite a bit of PGLC that is predicted to be membrane-embedded, a much higher than average fraction compared to other monotopic membrane proteins in the PDB, and this entire helix, rather than being a transmembrane helix, is a re-entrant membrane helix. Okay, so using that structure, what can we learn about the enzyme? If we look at the structure again, it makes sense in terms of the membrane-embedded region: in the re-entrant membrane helix, helices A and B are very hydrophobic (this is colored with red as hydrophobic and white as more polar), so we have very hydrophobic residues embedded in the membrane. But the other helices, which you can see easily by peeling off the re-entrant membrane helix and turning the enzyme toward you, are also very hydrophobic on the bottom, the part that would sit down in the membrane: helices D and I, shown here in helical wheel projection, are very amphipathic, with the hydrophobic face pointing down, and they make up sort of the deck of the ship, to continue the nautical analogy. And then the electrostatics also make sense in terms of our very negatively charged substrate and product: you can see this very positively charged saddle running all along the active site, and here is the view looking down into the active site. So let's take a look at the active site. In our structure we lucked out: there is a phosphate ion bound, and you can see it here. This is very nice because it gives us an idea of where the phospho-sugar intermediate might be; we see that, cunningly, it is docked next to the magnesium in the active site, and next to the nucleophilic aspartate and the glutamate of the Asp-Glu dyad. You can also see this polyethylene glycol molecule, and maybe we can use that as a placeholder for the undecaprenyl phosphate; notice that its top end, where the phosphoryl group would be, is also very close to that conserved arginine I told you to remember. Okay, so now I can at least draw the mechanism inside the active site, and what you see here is the aspartate nucleophile attacking the phosphoryl group and kicking out the UMP; UMP leaves the active site in this first step of the two-step reaction and gives us the phospho-sugar intermediate. The magnesium helps to stabilize both the enzyme-substrate complex and the very charged transition state that this would afford. But there is something else about catalysis that this scaffold lets us do: in the helix that forms part of the active site (here is the nucleophilic aspartate) there is a proline, and that proline marks a break between a canonical alpha helix and what's called a 3-10 helix. If we look down the helix from the end where the aspartate is, you can see that the 3-10 helix is much narrower than the canonical alpha helix seen looking down from the opposite end of the active site. So what does that allow us to do? A couple of things. First of all, the nucleophile is actually the cap of the 3-10 helix, and as the capping residue it breaks the symmetry between the two oxygens of the carboxylate group.
And when you're thinking about making a catalytically active enzyme and you want to exquisitely position the nucleophilic aspartate, you want to break that symmetry and position just one oxygen in the correct place, so this really lets us break that symmetry. The second thing it allows us to do: in a canonical alpha helix, two carboxylates next to each other in sequence, like Asp 93 and 94 would be in MraY, are actually quite splayed apart, but our Asp-Glu dyad, because it sits on a 3-10 helix, allows those two residues to come quite close together, closer than four angstroms. So that lets our catalytic dyad be very close together. Okay, now can we get any insight about the second substrate, the undecaprenyl phosphate? First of all, we are trying to get crystal structures of undecaprenyl phosphate bound to our protein; so far we haven't had any luck, but we're going to keep trying. Meanwhile, we've gone to computational analysis with our excellent collaborators in the Straub lab at BU and his graduate student, Ion. What Ion did was build a large box (I'm not showing the waters here, for clarity) with our PGLC in it, and then put in two molecules of undecaprenyl phosphate. Undecaprenyl phosphate is actually quite rare in the membrane, about 0.1% of the lipid fraction, so this is twice the physiological concentration, but we really wanted to be able to see it bind, so we doubled it. And we saw two things. One is that the state of the enzyme as a monotopic protein really mirrors the state of the undecaprenyl phosphate: the undecaprenyl phosphate by itself, even in the absence of PGLC, is always monotopic, occupying only one leaflet of the membrane; it coils up, which is entropically favorable, and occupies just a single leaflet, just like PGLC itself. The other thing this gives us is that one of the two molecules did indeed bind to PGLC, and that's shown here. We left in the phosphoryl group as a marker for where the phosphoglycosyl intermediate would be, because the undecaprenyl phosphate binds to the form of the enzyme carrying the phosphoglycosyl intermediate. It comes into the active site, and here you see the final docked pose; I'll turn it by 90 degrees so you can see the top view, and when you do, you see that the enzyme kind of throws one arm around the undecaprenyl phosphate, so there is a nice large cavity for the undecaprenyl phosphate to nestle into the active site. Okay, now I can draw the second half-reaction. In the second half-reaction we have the phospho-sugar intermediate, the undecaprenyl phosphate comes in, and we draw it such that the glutamate participates by holding onto the magnesium a bit, sharing the magnesium with aspartate 93; the phosphoryl group of the undecaprenyl phosphate then attacks the phospho-sugar intermediate, and now we have the final product. So we have some structural evidence for both steps: in this case more through computation, and in the first case from our structure with PEG and phosphate bound in the active site. Okay, the other structural piece of information I can show you has to do with the active site and its solvent exclusivity.
As the perceptive among you may have noticed, this is a rather open active site, and we wondered whether it would remain open in the presence of substrate. So far we don't have a UDP-sugar-bound structure; we tried to get one, and these figures show the rather poor density we see in the active site when we try to soak in sugar. But what we did do is break the symmetry of the crystal, and we ended up with eight molecules in the asymmetric unit, all with this loop, which we call the ANGEL loop because that is actually its sequence. With the symmetry broken, the ANGEL loop takes on eight different conformers in these crystals soaked with substrate. If we look at what kind of movement that is by principal component analysis, you can see there is one major component, and that component is a very hinge-like motion, which would allow the ANGEL loop to close over the active site. In our computational analysis we saw a similar movement, shown here: here is the simulation data, and here is the experimental data, now shown as a movie; obviously the movement is smaller in the experiment than in the simulation, but it gives you the sense that this loop might close. Here is the principal component analysis of the simulation, and here is a heat map of the match: principal component one really matches well, and that is this closing, lid-like motion over the active site. What would this let the enzyme do? It would let it make the active site more solvent-exclusive, controlling the pKa of the residues in the active site and the environment that surrounds the substrates. Okay, what else does our scaffold let us do? Well, one thing that's very clever about PGLC is that it has the active site, shown here in cyan, right at the membrane interface. By doing that, the second substrate, the membrane-embedded undecaprenyl phosphate, can just swim up to the active site without being pulled out of the membrane, which would be energetically costly. Not so for many other transmembrane or monotopic enzymes that act on these large lipid substrates, such as carnitine palmitoyltransferase and polyisoprenyl phosphate glycosyltransferase: their active sites sit rather far above the membrane interface, so they have to carry out a very energetically costly translocation of the substrate out of the membrane and into the active site on every catalytic cycle. Our enzyme avoids that by putting the active site right at the membrane interface. The other question that remains is: why? Why would nature maintain two different scaffolds for the same reaction? In other words, why bother having two scaffolds present in metabolism that can do the same chemistry? I have a hypothesis that I'm putting out there, and it is this: polyprenyl phosphate is a very rare substrate, about 0.1% of the lipid fraction in membranes. If you have a very rare substrate that is shared by two different enzymes, and by the two different pathways they serve, then having two different scaffolds gives you a way to control the partitioning of that substrate between the two pathways, rather than being able to tune only kcat, or kcat over KM.
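To make the kinetic point at the end of this passage concrete: for two enzymes competing for the same scarce substrate S at concentrations well below their KM values, the textbook steady-state result is that flux partitioning is set by the specificity constants and the enzyme levels (a generic Michaelis-Menten illustration, not data from this work):
\[
\frac{v_1}{v_2} \;=\; \frac{(k_{cat}/K_M)_1\,[E_1]}{(k_{cat}/K_M)_2\,[E_2]} \qquad \text{when } [S]\ll K_{M,1},\,K_{M,2},
\]
which is one way to read the speaker's point: two distinct scaffolds (and the regulatory domains discussed next) give the cell an independent handle on partitioning beyond tuning kcat or kcat/KM alone.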
That is, we could then control the regulation of these two scaffolds separately, and I'll talk a little more about the possibilities for regulation in a bit. Okay, so that is hypothesis number one: sharing of a common, rare substrate. All right, looking at the superfamily itself: if I make a sequence similarity network, where things that are similar cluster together and things that are different sit further apart, we see the following. Comparing archaea to bacteria, most of our PGTs are in bacteria; there are very few in archaea, although there are some, and those do cluster together, being more alike one another. This is a pretty large family of PGTs, about 63,000 members. Doing a phylogenetic reconstruction of those gives us a couple of hints about how nature has evolved the scaffold: if I zoom in here, you can see a sort of radial burst, with evolution in parallel. Remember, we have three subfamilies, and two of them are much larger, with extra transmembrane helices and extra fusions; but the small PGTs, which are just the catalytic unit and the re-entrant membrane helix, come up all through the tree. So they have evolved in parallel, which really gives you the idea that the smaller enzymes are the more modern innovation and that the ancient enzymes were these larger fusion proteins. This slide just shows what kinds of fusions those might be, and if we take a look, we see the following. Some are fused to sugar-modifying proteins, and these turn out to be the sugar-modifying enzymes that come either before or sometimes after the PGT in these pathways. Looking at the pathway, you also see glycosyl transferases; I told you about PGLA and PGLJ, which come after it, and those are also sometimes fused to our PGTs. So we have two types of fusions: enzymes that come before the PGT in the pathway, and enzymes that come after it. We also see regulatory domains (ha, this goes back to the idea of regulation): there are small domains, such as CheY-like domains found in bacterial signaling pathways, also fused to the protein. But the big shocker was this one: we also see polytopic PGTs fused to monotopic PGTs. What? The two superfamilies that enact the same exact reaction are actually fused together. Why would you fuse together two enzymes that do the same reaction? My hypothesis, and I'm going to stick to it, is that this unusual fusion occurs because of the rare substrate. So again, we are fusing the polytopic PGT to the monotopic PGT, and why would that be? One idea is that if the monotopic PGT can bind the rare substrate, polyprenyl phosphate, it may gather a lot of it together locally and allow the polytopic PGT to bind it and be helped along in catalysis. Do I have any evidence about this at all? Yes. First of all, PGLC, the monotopic PGT, does double the amount of undecaprenyl phosphate recruited when it is in the membrane; this was shown in the Imperiali lab using membrane nanodiscs. So it is able to harvest, or gather up, the rare undecaprenyl phosphate substrate.
The other thing is that we find pseudoenzymes: fusions in which the catalytic Asp-Glu dyad has been mutated away in the monotopic PGT while the catalytic active site is intact in the polytopic PGT. If we think of it that way, these may resemble the ancestors, in which the catalytic machinery had not yet evolved in the monotopic PGT; so that is an evolutionary argument for why we might still see remnants of that fusion. Okay, so we now have our PGLC nicely situated in the pathway, but what about the glycosyl transferases? Again, PGLA and PGLJ each put on a single sugar, and PGLH puts on three sugars successively, in a processive fashion. So the question is: what is going on with those? What are their structure-function relationships like? For PGLH there was already a structure, and that structure shows two Rossmann folds; indeed, all of these glycosyl transferases use the same fold, so we're going to see how this very similar fold has been utilized in all the enzymes. My lab went after PGLA, and Nino Voxanovic in my lab was able to get the structure of PGLA; Jocelyn Klassman had first crystallized the enzyme in my lab. These are two postdocs in the lab. What you can see in that movie is the following: there are two separate Rossmann domains, and the catalytic site is located between the two domains. The N-terminal domain is the one found at the membrane, and it is the most divergent in sequence when we compare PGLA, PGLJ and PGLH. Zooming in on the active site, we see something a little surprising: the sugar is actually bound through polar interactions, not through the usual hydrophobic stacking interactions found in sugar-binding sites, so that was a bit of a surprise; you can see these polar interactions shown as dashed lines in the active site. Now, what about membrane embeddedness? It turns out this enzyme is barely hanging onto the membrane, just dipping its toes in. Again I'm using the PPM server to do the docking into the membrane; I've colored the polar residues white and the nonpolar ones red, and you can see those nonpolar residues dipping down into the membrane, with the active site fairly far above the membrane. Okay, how about mechanism? We're always interested in mechanism, and here is the active site shown as a schematic. What should happen here is a retaining mechanism with an oxocarbenium-ion-like intermediate, and you can see that we would have partial positive charge, so you would think there would be some negatively charged residues nearby to stabilize it. However, in PGLA we are not seeing that: we do not see any negatively charged residues really close by, and so it seems more as though it may be the phosphoryl group of the UDP that provides the negative charge to stabilize the positively charged transition state. That is ongoing work; we still plan to mutate the residues around it, like glutamate 113, to see if they play a role in catalysis.
Using nanoDSF we're able to actually look at binding of our donor sugar, separated from catalysis, and so if we look at that you can see that UDP-GalNAc is much, much better than UDP-galactose, so that N-acetyl group is really important, and we see in our structure just this one beautiful hydrogen bond to that N-acetyl group nitrogen, which kind of explains that specificity right there. But again, interesting, all those polar interactions. Okay, now how about the comparison to PGLH? So again we're doing the comparison between PGLA, which just puts on a single sugar to the initial Pren-P membrane-embedded substrate, and PGLH, which puts on three successive ones to a much longer sugar substrate that already has two sugars appended to the Pren-P. So what you can see here is that the overall structure is incredibly similar. Again, two Rossmann folds making the active site at their interface, and the RMSD between PGLH and PGLA is only 1.92 Å, so their scaffolds are almost identical. The active site really continues to highlight the similarity. The active site for the donor sugar, which is the same for the two enzymes, UDP-GalNAc, okay, is very, very similar again. So there's nothing really surprising here; that's the donor. What about on the acceptor side? Again, the acceptor will be quite different, a much longer sugar for PGLH, and it's processive. So Ramirez et al. had noted that there is this helix at the bottom, very close to the membrane, and it has a series of positively charged residues on it. We have the same helix in PGLA. In PGLH, Ramirez et al. suggested that this could act like a ruler and allow the polyprenol substrate, as each sugar is put on, to kind of march down that helix. So they called this the ruler helix. We do have a similar helix in PGLA (which, again, is not processive), but those charged residues are in slightly different positions. So the ruler helix hypothesis is still in play, but we also want to put forth another hypothesis about the positioning with respect to the membrane itself, and that is that what you see is that in PGLA the active site is much closer to the membrane than in PGLH. So specificity for this longer sugar may really be partially encoded by the separation between the active site and the membrane. So membrane positioning may actually be part of specificity, and this is part of why I called my talk Beyond the Active Site. You know, let's go beyond the active site when thinking a little bit about specificity. Okay, so what I've shown you today is a story about enzymes that are really structured to gather rare substrates. We see that actually we've even fused the enzymes to other proteins that either use the same substrate or that make the sugar that comes previously in the pathway. We've also seen that membrane positioning may be part of specificity, and so when thinking about these proteins distributed like this in the membrane, possibly they would form more of a metabolon where they're sharing these rare substrates, and again where the polyprenol phosphate intermediates can simply diffuse from one to the other in a much more simple fashion. So the idea of bringing these enzymes together in the metabolon. One point here is that PGLC, when in nanodiscs, will recruit and bind to PGLA in solution, and those experiments were done in the Imperiali lab and really point to this idea of protein-protein interactions. 
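As an aside on the scaffold comparison above: the roughly 1.9 Å RMSD quoted between PGLA and PGLH is the kind of number a C-alpha superposition produces. A minimal Biopython sketch of that calculation follows; the file names are hypothetical, and it makes the simplifying assumption that the two chains have already been trimmed to equally long, one-to-one matched residue lists, which a real structural alignment would have to establish first.

from Bio.PDB import PDBParser, Superimposer

parser = PDBParser(QUIET=True)
# Hypothetical file names; real coordinates would come from the PDB.
pgla = parser.get_structure("PGLA", "pgla_model.pdb")
pglh = parser.get_structure("PGLH", "pglh_model.pdb")

# Simplifying assumption: residue i in chain A of one structure corresponds
# to residue i in chain A of the other, so we can pair C-alpha atoms directly.
ca_pgla = [res["CA"] for res in pgla[0]["A"] if "CA" in res]
ca_pglh = [res["CA"] for res in pglh[0]["A"] if "CA" in res]
n = min(len(ca_pgla), len(ca_pglh))

sup = Superimposer()
sup.set_atoms(ca_pgla[:n], ca_pglh[:n])  # fixed atoms, then moving atoms
print(f"C-alpha RMSD after superposition: {sup.rms:.2f} A")

In practice a proper structure alignment (a sequence alignment first, then superposition of equivalent residues only) is needed; the truncation above is only there to keep the sketch runnable.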
So those are the thoughts that I'd like to leave you with, and I just would like to thank my amazing team at BU and also the amazing teams of my collaborators: in the Allen lab, Nino and Jaws, who worked on PGLA, the glycosyl transferase; Katherine, who did the sequence similarity networks; Leah Ray and Andrew Lynch, who worked on the phosphoglycosyl transferase structure; and Hayley, who just joined the team and is working on the PGLI that puts on that branching sugar. In the Imperiali lab, Debasis for his beautiful mechanistic work; Hannah and Sonia, who have worked on the GTs quite a bit; as well as Greg, working on all aspects of this fantastic project, along with Veneto, who was one of the originals doing work on PGLC. Thanks also to the Straub lab for all their help with the various molecular dynamics simulations, to Argonne National Lab for photons, and to NIH for funding and training support to the Allen lab. Thank you very much to them, and thank you all for your kind attention today.
Bacterial glycoconjugates, including N-linked glycoproteins, are a diverse group of macromolecules that provide mechanical stability to microorganisms in challenging environments and mediate interactions among bacteria and between bacterial pathogens and their hosts. These interactions are often critical to bacterial viability and virulence in humans. These intricate pathways for glycoconjugate biosynthesis draw, in early steps, on substrates found in the bacterial cytoplasm to ultimately afford products that are localized to the periplasm or cell surface. Despite their great structural diversity, many glycoconjugates are biosynthesized using a common biosynthetic strategy involving en bloc transfer of glycan to proteins, lipids, or other glycans. The glycan to be transferred is assembled on a polyprenol-linked carrier at the membrane interface. The pathways start with a “commitment to membrane” step catalyzed by a polyprenol phosphate-phosphoglycosyl transferase (PGT). This step is followed by sequential glycan-assembly steps mediated by glycosyl transferases (GTs), each acting on membrane-resident PrenPP-derivatives, to complete glycan assembly on the lipid-linked carrier. The goal of our studies is to uncover the determinants of specificity and mechanisms by which these enzymes catalyze their reactions on membrane-embedded and soluble substrates. Biochemical studies and the X-ray crystal structure of the PGT from Campylobacter concisus, PglC at 2.74 Å resolution, show that the monoPGTs include a reentrant membrane helix that penetrates only one leaflet of the bilayer, then re-emerges. Subsequent molecular dynamics (MD) simulations show the undecaprenol phosphate (UndP) carrier mirrors this occupancy of a single leaflet with frequent transitions between stretched, coiled, and unstructured conformations of the polyprenyl tail. These simulations also allow a first view of UndP binding to PglC, corroborated by bioinformatic and mutagenesis studies. Moreover, a loop closure motion of PglC in the MD simulation matches the motion inferred from X-ray crystallographic data, consistant with an induced-fit model. Sequence-similarity networks and phylogenetic analysis of the monotopic PGT superfamily uncovered extensive numbers of fusions with other pathway enzymes and provide evidence that the enzymes in glycoconjucate synthesis are structured to gather rare substrates. We have recently determined the X-ray crystal structure of the enzyme that carries out the next step in assembly, C. concisus PglA, in complex with the donor-sugar substrate UDP-GalNAc at 2.5 Å resolution. The structure of PglA has remarkable similarity to the GT PglH (rmsd 1.9 Å) which catalyzes the precessive addition of three GalNAc moieties in the penultimate assembly step of the pathway. Comparative analysis of membrane-docked structures highlights significant differences between PglA and PglH in the relative orientation of the active site with the membrane interface. We posit that acceptor-substrate positioning in the membrane may play an integral part in specificity in the GT enzymes. This work is funded by NIH R01GM131627.
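The sequence-similarity networks mentioned in the talk and the abstract are, at bottom, graphs whose nodes are sequences and whose edges connect pairs scoring above a similarity threshold; clusters then fall out as connected components. A minimal sketch of that idea is given below; the pairwise scores, sequence names, and threshold are made up for illustration and are not the actual all-by-all alignment data behind the 63,000-member PGT network.

import networkx as nx

# Hypothetical pairwise similarity scores (e.g., percent identity) from an
# all-by-all sequence comparison; in real work these would come from a tool
# such as BLAST run over the whole superfamily.
pairwise_scores = {
    ("seqA", "seqB"): 72.0,
    ("seqA", "seqC"): 31.0,
    ("seqB", "seqC"): 28.5,
    ("seqC", "seqD"): 88.0,
}

THRESHOLD = 40.0  # illustrative cutoff; raising it splits clusters apart

G = nx.Graph()
for (s1, s2), score in pairwise_scores.items():
    G.add_node(s1)
    G.add_node(s2)
    if score >= THRESHOLD:
        G.add_edge(s1, s2, weight=score)

# Each connected component is one cluster of mutually similar sequences.
for i, component in enumerate(nx.connected_components(G), start=1):
    print(f"cluster {i}: {sorted(component)}")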
10.5446/58069 (DOI)
Good. With that, I'd like to briefly review our speakers. We'll be starting off with Scott Richard St. Louis, then, and hopefully I say these names correctly, Omid Ghezvan, then we'll have Marcus Herklutz and Lars Oberländer doing together, then we'll have Deborah Gebraak followed by Julian Franken, then we'll have Terrence William O'Neill, Paola Corti followed by Tim Tolle, and closing up our session will be Arjun Sanyal. So with that, I'd like to ask our first speaker to come on up. Scott Richard St. Louis, maybe every speaker can just briefly introduce themselves, say where they're currently at physically, and then once they've shared the presentation, they can go ahead and start the three-minute talk. So Scott, the digital floor is yours. Thank you very much. Hello, everyone. It's great to be with you today to begin what I am sure will be a fascinating poster session. My name is Scott St. Louis, and I am the Scholarly Communication and Discovery Services Librarian at the Federal Reserve Bank of St. Louis. My last name is just a happy coincidence, but I do like to say it's not every day you get to apply for a job, which literally has your name on it. So hello from St. Louis, Missouri in the United States. I'll keep my poster pitch nice and short today out of respect for the other presenters. Having completed library school at the University of Michigan in May of 2021, I onboarded into my first ever hybrid work environment at the St. Louis Fed, which I define as a work environment that blends remote requirements with in-person or on-site requirements. And to make a long story short, my poster is simply comprised of questions that came to mind for my supervisor and answers based on my experience as a relatively new employee, having just celebrated my one-year anniversary with the Federal Reserve System earlier this month. These questions relate to various aspects of the hybrid onboarding experience, and I hope that my poster booth will serve as a place for discussion about the opportunities and challenges of remote work as we all continue getting accustomed to this new normal of life in libraries and in many other types of workplaces. Thank you very much. It's great to be here, and I hope to see you in the booth.
The COVID-19 pandemic is transforming organizational cultures across the workforce, with libraries of all kinds being no exception. This poster presentation will focus on the experience of a scholarly communication and discovery services librarian beginning a new job in the United States Federal Reserve System in May 2021, immediately after completing graduate school. The poster will be organized around answering key questions that participants in INCONECSS 2022 might have in mind with regard to onboarding new colleagues successfully, especially in libraries that have experienced major changes in day-to-day working life over the past two years. These key questions will include the following, in no particular order: What has worked well with remote onboarding as a new employee? Where is the in-person component of working life unmatched by remote work? How can a new employee ensure harmony between their own expectations/preferences and those of their colleagues and supervisor? How might the lack of daily spatial proximity to colleagues impact the informal knowledge sharing that orients a new employee to a library/office culture, including the explicit and implicit aspects of that culture? In what ways might new employees compensate for this deficit? In a remote working environment, how might a new employee go about building relationships with important “secondary contacts” in a library organization? (The people you need to know, but don’t necessarily need to see or interact with every day.)
10.5446/58071 (DOI)
Hello. Hi, Scott, how are you? I'm doing well, how are you? Doing well, thank you. Good to see you again. Over here in Germany, it's coming up to 8.30 in the evening. So I guess there in St. Louis, what are we talking about? 1.30, 2.30? 1.30 in the afternoon. 1.30 in the afternoon. Then I hope you had a good lunch. And with that, I would say, could you tell us maybe, I've had the chance to visit beautiful St. Louis and the Arch many years ago. When we come to St. Louis, what's something that you would recommend to see? A touristic place, a museum, a ball game, what are your highlights? Well, of course, the Gateway Arch is probably the most famous recommendation, but I recommend going to see the St. Louis Cardinals if they're in season, or taking the time to see some of the lovely museums or other attractions at Forest Park. At Forest Park, wonderful. And the St. Louis Cardinals, for those that don't know American baseball, is one of the big Major League Baseball teams in the city. Excellent. Okay, Scott, so just real quick, I wanted to give a brief introduction. Scott is the scholarly communication and discovery services librarian at the Federal Reserve Bank of St. Louis. His topic today for this presentation will be Fed in Print: The Past, Present, and Future of Making Federal Reserve System Research Outputs Visible and Easily Searchable. Yes, and with that, I would say you have about 15 minutes for your presentation, and then for our last round of Q&As, about five minutes. So I see the screen is shared, and with that, I'd like to give you the digital stage. Please. Thank you very much. Hello, everybody. It's great to be with all of you today at this wonderful virtual conference. As was previously stated, my name is Scott St. Louis, and I am the scholarly communication and discovery services librarian at the Federal Reserve Bank of St. Louis. My last name is just a happy coincidence. I'm not from the city of St. Louis originally, but I didn't want to let vanity get in the way of a good job opportunity. So after library school, I jumped at the opportunity to come here. And I'm here today to share with you a quick presentation of maybe 12 minutes or so about a web application called Fed in Print. I know it can be difficult to pay attention at the very end of a rich panel, like the one that we've all enjoyed today. So I'll try to make my presentation a bit shorter than the standard 15. And before we begin, I need to inform you that I'm representing only myself with these perspectives. My thoughts don't necessarily reflect official positions of the Federal Reserve Bank of St. Louis or the Federal Reserve System. So as one colleague of mine likes to say, if you don't like my presentation, you can yell at me about it, but you can't yell at my boss about it. So let's dive in. A little bit of background about myself before the main content of the presentation. I accepted this position before finishing up library school at the University of Michigan in May of 2021. So I've been on the job for around a year now. And at the bank, I wear a lot of different hats. For example, I'm currently serving as product owner on an Agile team for Fed in Print, which as previously stated is a web application. Fed in Print indexes research outputs from throughout the Federal Reserve System and then presents metadata about those outputs to major discovery services like Google Scholar and Research Papers in Economics, also known as RePEc, as well as the Social Science Research Network, or SSRN, for select banks. 
I also maintain the publication process for the St. Louis Fed working paper series through our internally developed research information management system. Once a year, I have a leading role in the annual citation analysis project that contributes to assessment of our economists' work. And I also process user support requests for the popular data aggregation service known as FRED. Recently, I also worked on implementation of Ex Libris 360 Link for the benefit of our researchers. And I also provide conference planning assistance and do a number of other things, both within and beyond the traditional scope of the librarian's duties. My main goal for this presentation is really simple. I just want to provide you with a brief, high-level introduction to a resource that can be very helpful for gathering information about the research pursued by the 12 regional banks and Board of Governors that make up the Federal Reserve System. This presentation will provide you with a very brief history of Fed in Print, as well as an exploration of the Agile software development processes used to articulate, refine, and prioritize forthcoming enhancements to Fed in Print. We'll also touch very briefly on the system-wide cooperation required to successfully populate Fed in Print with timely, high-quality item metadata. And then we'll do a quick tour of key Fed in Print functionalities from the user's perspective. And finally, we'll discuss future plans for Fed in Print enhancement and improvement. So very quickly, I'll start by providing you with a history of Fed in Print as it has evolved over the past several decades. As its name suggests, Fed in Print started life in print as an index of Federal Reserve System research publications, first featured as a quarterly in the Federal Reserve Bank of Philadelphia's Business Review. Then from 1962 to 1976, an index was released every two years, with Fed in Print first appearing as a publication all its own in 1972, after receiving its present-day title two years earlier. The scope of Fed in Print expanded in the 1980s to include most textual publications from the various research departments scattered throughout the Federal Reserve System. And then by 2000, publication of the paper copy had ceased with successful online availability of the index. Expansion of scope occurred again in 2002 to include select publications from Federal Reserve departments other than research departments, and streamlined submission of Fed in Print content to Research Papers in Economics, the RePEc database, took place in the 2010s. And today you can read more about the history of Fed in Print in more detail at fedinprint.org slash about. Changing gears now, I want to provide you with a quick look at the agile software development practices used to document, refine, prioritize, and implement feature enhancements, bug fixes, and other improvements to Fed in Print. Fed in Print is one product line on an Agile team including multiple products, and therefore multiple product owners. So in the course of my time at the Federal Reserve Bank of St. Louis thus far, we have used a mixture of Scrum and Kanban methodologies within the Agile development universe to strike a balance between the multiplicity of needs that our product owners bring to the table and the developer capacity we have on hand to address those needs. 
We strive to articulate product needs through the voice of the user by compiling backlogs of user stories that explain the business value of various enhancements and fixes specifically from the user's perspective, so that this perspective remains top of mind throughout the development process. And those stories are then submitted to what are called refinement sessions, in which multiple product owners and developers come together to improve how the user stories are written for greater instructiveness and comprehensibility from the developer perspective. Refinement meetings are also where our amazing developers estimate the relative complexity of various stories through a pointing system. Prioritization meetings then take place in order to sort out the relative importance and urgency of various user stories among the several products for an upcoming two-week sprint. And within those two-week sprints, peer accountability for implementation of enhancements and fixes guided by those user stories takes place in the form of daily stand-ups, in which developers and product owners alike report to one another on what progress they've made the previous day, what progress they plan to make in the day ahead, as well as any blockers to productivity that they are experiencing. Near the end of each two-week sprint, review meetings take place in which multiple development teams come together to report and demonstrate their two weeks' worth of progress to one another. And retrospectives, or retros, also take place in which the product owners and developers comprising an individual team come together to reflect on what went well and what could have gone better in the previous two-week sprint. A quick look at the sheer amount of resources available in Fed in Print might leave you wondering how the index gets updated in a timely fashion, with new publications coming out more or less every day throughout the system. And the answer is that that responsibility is collective in nature. It falls to designated content contributors at all 12 regional banks within the Federal Reserve System, as well as the Board of Governors of the Federal Reserve. For example, I serve as the content contributor for the Federal Reserve Bank of St. Louis. Content contributors are responsible for adding new series-level, item-level, and author-level metadata to Fed in Print as necessary to ensure that the index remains up to date. On a quarterly basis, as overall system maintainer for Fed in Print, I also assemble the content contributors throughout the system to share progress updates and listen for potential future needs that we'll need to tackle collaboratively. One example of recent collaborative work in which content contributors have participated together is some basic keyword quality control for Fed in Print. This is definitely still a work in progress, but as I'll talk about again later, we're hoping to remove typos and consolidate duplicative keywords that have emerged in Fed in Print over the years. So now let's turn to the meat and potatoes of the presentation. We'll take a quick tour of Fed in Print from the user's perspective. I hope that this quick overview will be of interest to you and that you'll keep Fed in Print in mind for yourself and your patrons as a useful point of discovery for a rich variety of resources from the Federal Reserve System. Turning now to the home page at fedinprint.org, you'll see that there are multiple ways to access the latest items added to Fed in Print. 
The home page itself displays a few of these items. You can also visit those recent items by visiting the individual websites of the 12 regional banks, as well as the Board of Governors, down at the bottom of the page. Longer lists of most recent items for the system as a whole, as well as for individual banks and the Board, are available via the Latest tab up at the top of the home page, as you can see here. We also make RSS feeds of latest items available by bank, as well as by a couple of keywords at present. Those keywords are COVID-19 and diversity. For those users interested in browsing for content of interest rather than searching, we make browse functionality available by bank, publication series, publication type, author, Journal of Economic Literature classification, and keyword. So there are many different ways to slice up the content for your particular research needs. For example, Browse by Series will take you to a page displaying all of the publication series available in Fed in Print, organized by bank or Board of Governors. We also make it possible to see which Federal Reserve System publications have been assigned Journal of Economic Literature classifications. For those users interested in a more traditional search experience, you can enter search terms and see what comes up for title, author, abstract, keywords, or all of the above. And once a user has been served with search results, a number of filtering options remain available to make it possible for more refined searches to proceed. Fed in Print also offers Boolean search capability, as well as the ability to sort by relevance, ascending date, descending date, or alphabetical order. Turning now to future plans for the Fed in Print application, an API is currently under development that will enable public users to access series-level, item-level, and author-level data programmatically using tools like Python, for example. The API will also empower our administrative users, also known as our content contributors, to automate records creation for Fed in Print. And as previously stated, our keyword quality control work is ongoing as well. In addition to the manual review that content contributors jointly pursued, I wrote a Python program utilizing web scraping and fuzzy string matching to identify duplicative keywords as good candidates for consolidation. And additionally, I'll be working with our developers in the not so distant future to add dictionary and spell check functionalities to our content contribution admin forms as a way of cutting down a bit on the long-term sprawl of keywords that we've noticed previously. And that's all for me today. I hope you've enjoyed the presentation and will keep Fed in Print in mind for your patron support and research needs, even after the conference adjourns. Please feel free to connect with us at the St. Louis Fed in any of the ways listed here. Please also consider joining us in November for the 2022 Beyond the Numbers conference. Our call for proposals is available at the link provided on this slide with a deadline of June 30. And you can also contact me via email at the address provided at the top of the slide. That's scott.saintluis.shtls.frb.org. Thank you very much to all of you for attending and to the conference organizers for such a wonderful, seamless experience. Thanks very much. And thank you, Scott Richard St. Louis. A name that I cannot forget. Fantastic. Good. We are coming up to our final Q&A session here. And I'm just going to jump right into it. 
And the first question we have here is, let me see here, are your user stories available for interested colleagues? That's a good question. I'd have to follow up with folks on the team to see if that's possible. But I can't provide an answer one way or another right now. If, however, the person who sent that question would like to send a note my way to that email address, we can talk more about the way that the agile development processes work for Fed in Print. Super. OK, thank you. Good. Then the next question we have: do you know about how many papers are added to Fed in Print each year? And how many staff there are to manage the system? Is it just yourself doing this? No, it's not just me doing it. We have content contributors at all 12 regional banks, as well as the Board of Governors of the Federal Reserve System. So those are folks just responsible for content contribution. I serve as the overall system maintainer, coordinating activity from those different content contributors. But I also work with three wonderful developers here at the St. Louis Fed, who make all of this technologically possible in collaboration with what I'm hearing from our stakeholders. Super, thank you. Let me see if we have another question coming in. Yes, the next one we have here. I like the idea to offer RSS feeds for current topics. Do you have any feedback from users, usage numbers? Not off the top of my head, no. But I think we're hoping to add possible additional RSS feeds in the future. So stay tuned. Very good. Good. Our next question. On your website, you differentiate between journal article, conference paper, and speech. Who provides the metadata for this? The Fed? Yes, that would be correct. The metadata is provided by our system content contributors using an admin form. And hopefully in the future, we'll be able to automate some of that using the API that's currently under development. It's an interesting kind of byproduct of the scholarly communication workflows in the economics field specifically. I found in discussion with colleagues that the economics field has a lot of open practices, but not necessarily a lot of open language. So of course, economists very often circulate working papers before publishing their findings in peer-reviewed articles. And working papers have sort of a publication life of their own before the corresponding journal article is published following input and feedback from colleagues. So we make working papers available. And we update versioning as we go along on the St. Louis Fed Working Papers website, for instance. And then we make available in Fed in Print information about when a working paper has been published as a peer-reviewed journal article. OK, very good. Excellent. Yeah. And our final question: can anyone get slash use your metadata in any way? Let's see. So I think that the metadata will be accessible through the API once that project is released at some point within the next few weeks or months. It just depends on how long it takes us to finish up the last batch of user stories that our developers are working through right now. And then I think we'll have to talk with folks about terms of use for metadata to make sure that the information we make available is used in appropriate ways. But there should be certain parameters in place to make sure that public information is available for public consumption in ways that our users see necessary or fit. That's it. Thank you. Yeah. 
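The API discussed in this Q&A was still under development at the time of the talk, so the sketch below is purely an illustration of the kind of programmatic, item-level metadata access described; the base URL, route, parameter names, and response fields are hypothetical placeholders, not documented Fed in Print endpoints.

import requests

# Hypothetical endpoint and parameters, for illustration only.
BASE_URL = "https://example.org/fedinprint-api"

response = requests.get(
    f"{BASE_URL}/items",
    params={"series": "stlouisfed-working-papers", "limit": 10},
    timeout=30,
)
response.raise_for_status()

for item in response.json().get("items", []):
    # Field names here are assumptions about what item-level metadata might include.
    print(item.get("title"), "by", item.get("author"))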
And once again, Scott, thank you for your presentation and also for taking some time to answer questions in our Q&A session. And I'd like to give you a virtual applause, hopefully from all of our participants around the world. Some of them may be sleeping now in Asia, but in the US, it's a good time. Great. Thank you very much. Yeah. Thank you. Yeah.
This presentation will focus on fedinprint.org, a web application that makes research outputs from across the United States Federal Reserve System – including twelve regional banks and the Board of Governors – searchable in one location by title, author, abstract, keyword, series, content type, bank, and Journal of Economic Literature (JEL) classification. Fed in Print also presents metadata about these research outputs to major discovery services including Google Scholar and Research Papers in Economics (RePEc). The presentation will focus on the history of Fed in Print, the System-wide cooperation required to successfully populate Fed in Print with timely, high-quality item metadata, and future plans for Fed in Print. Such future plans relate to API development, automation in content contribution, possible new RSS feeds, and keyword quality control. These future plans will necessitate an exploration on the poster of the agile software development processes used to articulate, refine, and prioritize forthcoming enhancements to Fed in Print, in balance with multiple other digital products maintained by the Research Division at the Federal Reserve Bank of St. Louis.
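The keyword quality control mentioned in the talk and the abstract relied on a Python program using web scraping and fuzzy string matching to flag near-duplicate keywords as candidates for consolidation. A minimal sketch of the fuzzy-matching step follows, using the standard-library difflib; the keyword list and the similarity cutoff are made up for illustration and are not the actual Fed in Print vocabulary or program.

from difflib import SequenceMatcher
from itertools import combinations

# Made-up keywords standing in for terms scraped from the index.
keywords = ["monetary policy", "monetary policiy", "inflation",
            "Inflation", "labor markets", "labour markets"]

CUTOFF = 0.85  # illustrative; a higher value flags only very close matches

def similarity(a: str, b: str) -> float:
    """Case-insensitive similarity ratio between two keyword strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Flag keyword pairs that look like typos or spelling variants of one another.
candidates = [(a, b, round(similarity(a, b), 3))
              for a, b in combinations(keywords, 2)
              if similarity(a, b) >= CUTOFF]

for a, b, score in candidates:
    print(f"possible duplicate: {a!r} ~ {b!r} (score {score})")

A human reviewer (the content contributors described above) would still decide which flagged pairs to merge.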
10.5446/58072 (DOI)
I'd like to welcome to our digital stage Patricia Condon. Patricia, are you there? Can you hear me? Can you see me? Yes, hello. Yes, hi, wonderful. Hi, very good to see you. I see here Patty. Do you prefer Patricia or Patty? What would you prefer I call you? Oh, you can call me Patty. I go by both. Thank you. Okay, I just saw Patty on the screen name. Okay, so Patty, if I may, I wanted to ask you: physically, what town or city are you in right now, and if we ever come to visit, what is one of the tourist attractions or something that you have to check out when you're in that region? Yes, I'm in Durham, New Hampshire, which is the home of the University of New Hampshire main campus. The most common reason tourists come to New Hampshire is usually for outdoor recreation. So it's hiking, camping, swimming, walking on our little bit of the Atlantic coastline. But if you come to visit the UNH campus, I'd recommend staying at the Three Chimneys Inn, which is rumored to be haunted. Oh, okay, now that's interesting. And by the way, a personal note, my aunt lived in Nashua, New Hampshire for many years. So I had the chance to go up there when I was a kid. Beautiful, beautiful state, absolutely gorgeous. Okay, wonderful. So Patty, Patricia, is an assistant professor and research data services librarian at the University of New Hampshire, as you just said. And her topic is What About Data Literacy? Business Librarians and the ACRL Framework for Information Literacy. And just to review, the ACRL is the Association of College and Research Libraries. So Patty, you'll have 15 minutes for your presentation, then afterwards about five minutes for Q&A. And I would say your screen is shared. I'd like to give you the digital stage, please. Thank you very much. And thank you all for attending today. And thank you also to the conference organizers. Today, we're going to explore the connections between data literacy and information literacy, especially as it relates to library instruction, and discuss strategies for integrating data literacy and the ACRL Framework for Information Literacy for Higher Education. I'll refer to that as the framework from now on. So I am Patty Condon, research data services librarian; my co-presenter, Wendy Pothier, is the business and economics librarian, also from the University of New Hampshire. She's unable to join us today, but she sends her best regards as she's on holiday in Iceland. And she's sending me beautiful photos. So I think that she both sends her regards and maybe only small regrets that she's not here, because the pictures she's sending are lovely. The University of New Hampshire is located in a small town called Durham. And it's the flagship research institution for the state of New Hampshire, with an enrollment of about 15,000 students. So in our session, we're going to discuss our proposed business data literacy competencies. I'm going to outline our recent work mapping them to the ACRL framework and also close with a brief summary and future directions of our work. In the professional business literature, there's been an ongoing discussion about talent development in the workforce around data literacy. And many businesses have been focusing on the infrastructure to support data in their organizations, but they've seen a gap in data literacy skills of employees coming into the workforce. So we took this conversation into consideration, along with our roles as information professionals in higher education teaching about data. 
In 2019, we introduced seven business data literacy competencies, which we proposed as baseline competencies for business students to help prepare them for entering the workforce and working with data at various levels on the job. The competencies reflect data literacy requirements for working professionals at an entry or broad level within the organization, recognizing that greater levels of competency would be developed in areas as needed by job type or through experience. The link that's provided on the slide goes to a research guide that has definitions of the competencies that you can review as we discuss them. And there's also a link to the ACRL framework for information literacy that you can take a look at as well. So building on the development of the data literacy competencies, we recently mapped those competencies to the ACRL framework for information literacy. So briefly, the framework is a foundational document adopted by the Association of College and Research Libraries, the ACRL, in 2016 to engage information literacy more fluidly in higher education settings. It's anchored by six core concepts referred to as frames that outline potential learning outcomes. So we mapped our competencies to those six frames. We conducted the mapping because we wanted to more clearly articulate the connections between data literacy and foundational documents in our field. And we wanted to contextualize the conversation around our proposed business data literacy competencies as complementary to information literacy, to help promote implementation. Through this work, we provide examples and strategies to help business information professionals move the competencies into practice. And we felt that aligning them with the frames would help provide a guide for doing this. And by mapping the competencies to the framework, we wanted to normalize data literacy as part of existing library instruction. And we wanted to strengthen the voice of business information professionals as key stakeholders in conversation and instruction around data literacy. I provided the link again, in case you didn't catch it on the earlier slide. Again, it will provide definitions of the competencies and a link to the framework so that you can have a reference while I'm talking. But before we dive into the mapping, I want to review a couple of assumptions that we had. So first, we extended the concept of information need that you find in the framework to data need. So while the framework talks about information and information need, we really focused on data and data need. We approached our interpretation of the framework from the lens of how business students would apply the frames in their professional lives. And we discussed the competencies both looking outward towards professional roles and inward at students learning in the classroom. So the first frame that we talk about is authority is constructed and contextual. This frame highlights that the trustworthiness of an information source depends on many factors such as who created it, why it was created, and how it's being used. The frame acknowledges biases within information sources, biases within the availability of information, and biases in the worldview of both the content creator and the content user. We map this frame to the competencies understanding data in a business context and evaluating the quality of data sources. 
So data used in business environments often vary from data that's used in scholarship. The context around collection and use of data has different motivations. The recognition of authoritative voices is distinct and the systems for establishing trustworthiness, authority, and credibility are also unique. And sources for business data can be challenging to evaluate because the proprietary nature of much of the data leads to a lack of transparency about the collection and analysis of it. So exposing business students to commonly used business databases which are subscribed to by libraries and also corporate entities furthers the conversation about authority and the context of data literacy. Using these sources information professionals can explore the conversation about how authority is determined and how quality is evaluated. Our next frame is information creation as process. This emphasizes that format of information sources, modes of delivery, and the process of creating and publishing an information source. And we can use this frame to look at data as a special format as a particular format. We mapped this frame to data organization and storage. So depending on one's role in an organization, the value of business data may lie in the interpretation and presentation of that data or it might be in that original organization storage of the data. So responsibilities for different stages of the data lifecycle often align with different positions in an organization. So the one who reads and interprets raw data, for example, might be a completely different individual from the person who makes decisions from those interpretations. So professionals can benefit from just understanding the basic concept that data are created, they're organized and stored before the point that they can be analyzed, interpreted, summarized, and presented. Business information professionals can introduce business students to the data lifecycle. Students don't have to be experts in all stages of the data lifecycle. Rather the goal is to help them understand the journey that data takes from being generated to presented to reused in secondary analysis. And this can help students contextualize their future interactions with data as employees. So moving on to our next frame, one of my favorites, information has value, which addresses the power of information by highlighting the legal, societal, and economic value and how that value can impact production, dissemination, and use of information. The value of information is contextual, right? When extrapolated to the often proprietary nature of data for business use, value comes from data contributing to an economic and strategic edge. So we map this frame to understanding data used in business context, data-driven decision making, and data ethics and security. Data is a business asset and contains real currency in terms of profit, operations, logistics, and many more. When a business can access better quality data in higher quantities, the more likely that business can apply those data to make decisions which would ideally increase profit and help companies achieve missions and goals. The value of data is further emphasized in making business decisions. While often the role of management and leadership, decisions actually happen at many levels of the organization and data plays a role along the way. And significant here is the legal value of data, including intellectual property and attribution, and also the social value of data, including the intention and use of data. 
Students can start to see the concept of this play out in the classroom as they commonly rely on access to data through campus library subscriptions. And we can discuss the value, the cost, and the ethics of access to data. Just as different universities may have access to different databases for research, so will different companies have access to different data sources. Next we look at research as inquiry. This frame draws on transferable ideas emphasizing that research is an iterative process of asking questions, exploring findings, and engaging with that information, even though it might look different in a business context than it does in academic research. We've mapped this frame to interpreting data and data-driven decision making. Interpreting data involves engaging with the data and understanding the purpose and the context of that data. The interpretation of data takes on a distinct significance in research for business operations, because those interpretations can help drive decision making, sort of data-driven decisions. And we can look at decision making as iterative and driving business research. Each decision leads to a new question, a new data need, and then additional decisions are made. Information professionals can contextualize data in non-academic scenarios by highlighting business practices that mirror this research-as-inquiry process. So for example, iteration is a foundation of lean startups and is often used in entrepreneurial endeavors, in which professionals will try something, change what they're doing based on the data or experience, and then adapt something new in order to maintain that progress. So our next frame, scholarship as conversation, underscores that the body of academic and professional literature is a product of many voices and different perspectives participating in a back and forth discussion across time and space. Scholarship is an open and participatory exchange of ideas, right? So we've mapped this frame to evaluating the quality of data and communicating and presenting effectively with data. When evaluating data sources for inclusion in the conversation, it's important to recognize that some businesses won't be represented as well. For instance, a public business is required to publish annual reports; a private business can be less transparent and isn't required to do so. And so that means that the conversation can be biased potentially towards public companies, where there's more information available about those companies. And an important piece of the conversation is how to tell a story with data through effective communication and presentation. Business information professionals can explain to students why companies do not contribute evenly to the conversation and how this may skew the record. And the frame scholarship as conversation can be interpreted broadly and has valuable learning outcomes, such as helping students learn to share and demonstrate ideas through skilled presentation. And lastly, we move to searching as strategic exploration. This frame emphasizes that the search process is a complex, systematic journey that's contextualized by the worldview of the searcher, the scope of the investigation, the context of that data need, and also the tasks required to achieve the outcome. We mapped this frame to data organization and storage and understanding data used in business context. Data organization and storage is foundational to beginning the exploration of data. 
To explore data, they must have been collected or generated, organized, and stored. And if the data are not securely stored, cannot be located, or are not well documented, then time and money will be lost having to recreate them, if recreation is possible. And identifying who produces data and what kinds of data they produce helps students and employees understand the kinds of documents and data that are available and can be accessed. Understanding what's accessible within the scope of one's search is essential when looking for available and usable data. Information professionals can provide exposure to databases, exposure to different kinds of tools for finding and creating data, and discuss the structures and formats of data. We can also discuss what data are available, why some data have not been created, and what decisions can be made with the data that are accessible. This can lead to discussions around the kinds of reports and data that exist, conversations about primary and secondary data, equity of data access, and training in the responsible conduct of research. To summarize, we proposed seven baseline business data literacy competencies for students to prepare them for entering the workforce. We mapped them to the ACRL framework for information literacy to create a bridge at the intersection of library canon, business librarianship, and data literacy. We began to introduce strategies to support the integration of data literacy into library instruction for business and economics. And in future research, we plan to explore the relevancy of the proposed business data literacy competencies to the workforce and whether they address the skills gaps that employers have witnessed in professional spaces. So I want to thank you, and I would welcome future conversations and collaborations. Feel free to reach out to either Wendy or me. We will be taking questions now, but we'll also be at the speaker's meetup tomorrow. So thank you so much for attending and listening. Thank you very much, Patricia Condon. Yeah, a really fantastic presentation. We'd like to give a virtual applause from wherever you are watching. We're lighting up here. We have a couple of questions for you. So I'll jump right into our audience questions. The first one is, do you have examples of student aha moments of thinking more about data authority or value? That's a really, that's a great question. I'm trying to think outside of this context as well, outside of the business context. I don't have a specific example that comes to mind off the top of my head. I do love aha moments. So I will say that one of my colleagues often refers to me, the data services librarian, as the librarian you'll thank later. Because a lot of times the librarians will go into a classroom around information literacy when the students need resources. They need to write a literature review. They're looking for something specific. Whereas the data pieces often come a little bit later. There's not often a point of need that I go in and help students with, unless they're coming to me to find specific data. So that's a good question. I hope that I do get one of those. Sorry I didn't have an example. Fair enough, no problem. Great. Let me move on to our next question from our audience. And that says, can anything be done about the lack of transparency of the business data sources? Big issue, of course. Yeah, yeah, that is a big issue. And it's a conversation that Wendy and I have had back and forth, right? 
Because she works very much in the business and economics realm, and I often work more in the open science and the STEM areas, where transparency is extremely important. And that's why we really wanted to highlight that distinction between the public and the private sources of data. Because when we are thinking about business data, it doesn't align with the other scientific data literacy competencies out there, because it's proprietary. We are talking about institutions that are using data for an economic and strategic edge. And so therefore, I don't know if there's a way, because the value is actually a cash value at this point. And so that lack of transparency, the not sharing of data, gives the businesses additional value towards their mission and goals. So while it would be great to see, I'm not sure what the answer to that would be. But it is a complicating factor. And it's one of the reasons why we did think that business data literacy competencies have a unique twist compared to, say, the science data literacy competencies. Super. Okay. Very good. I apologize. My chat box here bounced around up and down. So hopefully, this isn't a repeat of something. The one I have here is, did the students have any completely unexpected results while playing with the data or visualizing it? That was our first question. I remember that. So let me just see. Again, I apologize. But sometimes we bounce here. I think this is it. Could I ask my technical team please to give me the last question that was there? Unfortunately, I'm bouncing up and down. Very good, I have it now here. Excellent. So it says here, and this is a reply to your last answer: I see, very interesting. What implications does this have for business data literacy instruction? Yeah. So I think that part of that is definitely highlighting that we're able to talk about the bias in the data source at this point. So we're able to use it almost as a teaching moment to think about how we identify authority, how we identify trustworthiness in business data, and sort of what we do have access to. Also, we can talk about the cost of data, you know, in terms of, if we're able to purchase data, what does that mean? So I think that the impact or the implications potentially are that we have to focus on public data, but we have to make sure our students know that it's only a piece of the data that's out there, and that when they're in the professional setting they may be generating data or buying data that isn't open and isn't transparent, and they need to think about the quality and the evaluation of that data. So I think it has a huge impact, and we have to be transparent about that lack of transparency. Yeah, very good. I think those are great closing words. And with that, I'd like to thank you for your presentation, Patty, as well as taking the time to answer some of our questions. And again, once again, a virtual applause. Thank you so much. Thank you so much. And yeah, greetings to beautiful New Hampshire. Absolutely gorgeous. Great. Thank you. And yeah, I was
To meet current and future workforce needs, business students entering the job market should be literate in working with and using data for a variety of purposes. Our presentation focuses on addressing business librarians as key stakeholders in the development of services to help improve data literacy in business and economics. While general and discipline-specific data literacy competencies have been identified, our work focuses on the data literacy needs seen in the disciplines of Business & Economics. In previous work (2019), we identified seven baseline business data literacy competencies that filled gaps in student and employee knowledge around data and data literacy. More recently, we mapped those data literacy competencies to the ACRL Framework for Information Literacy for Higher Education (Framework). This mapping helps establish a bridge between foundational library professional documents, business librarianship, and data literacy both in higher education and in the workplace, extending the conversation to how the Framework informs data literacy instruction. In this presentation we summarize the seven baseline business data literacy competencies and outline how they can be mapped to the Framework. From this mapping, we explore how business librarians can incorporate teaching data literacy skills and provide instruction informed by the Framework. This research will provide audience members with both context and foundation to develop strategies that integrate data literacy and the Framework into library instruction specific to the disciplines of Business & Economics.
10.5446/58073 (DOI)
She is the manager of the information management team at Baker Library at Harvard Business School, the HBS. And her theme is HBS Knowledge: a Knowledge Graph and Semantic Search for HBS. So we are now waiting for Erin Wise to join us. She's here now. I've just heard from our direction here that she's coming. As soon as you're in, Erin, go ahead and just give us a quick thumbs up and a hello so that we know you're there. Hello and thumbs up, I'm here. Fantastic, wonderful. And I see and hear you excellently. Okay, great. So Erin, real quick, before we begin, could you just tell us where you're physically at right now, and something that's a tourist attraction or something that should be seen when we visit that area? I am in Boston, Massachusetts. And I recommend visiting the Seaport District when you're visiting Boston. And I guess having some clam chowder or soup as well. Yes, of course. Excellent. Excellent, good. So glad you could join us. As I mentioned before, Erin is the manager of the Information Management Team at Baker Library at Harvard Business School, known as the HBS. Her theme today is HBS Knowledge: a Knowledge Graph and Semantic Search for HBS. Again, to our viewers, there are two ways to ask questions. We're doing great so far. You can either scroll down to your interactive tool, or you'll see the QR code coming up; you can put your phone up and go ahead and use that to put in your question as well. And Erin, we have about 15 minutes for the presentation and then about five minutes for the Q&A. So if you've shared your screen and you're good to go, then I would say the digital stage is yours. Okay, thank you. So I am sharing my screen. Hopefully you all can see it. Yeah. It's great. Looks good here, yeah. Okay, excellent. Thank you. So I am pleased to present about a website that we have been developing at Harvard Business School's Baker Library. We launched it as a proof of concept last June. And the concept that we are proving is that we can use a Knowledge Graph plus Semantic Search as a way for HBS users to uncover connections across our data silos. Our experience shows that we can do a lot with a small handful of librarians and developers, and that librarians have a critical role to play in any semantic data project. So first, what is our product, HBS Knowledge, our website? So the HBS Knowledge website is our solution to a problem that many of us are struggling with: bringing together information about a resource using data across repositories. With our website, we are proposing a Knowledge Graph as a way to integrate data across repositories, a semantic search application for effective entity identification and findability of resources, and vocabularies for consistent and unambiguous language. And when I say vocabularies, I mean any controlled list of terms. I'm defining it very loosely as any controlled list of terms or entities that aids consistent indexing and accurate retrieval of resources. So who we are: we are a small team, just a few librarians and developers, and we were fortunate to be given time and autonomy to act on our ideas to develop a site that we think solves significant problems for the school. So we are a product owner, a taxonomy and ontology specialist, a semantic technology lead, a semantic search specialist, and a UI designer. So some definitions: what is a Knowledge Graph? We are defining it simply as a model of a specific domain or sphere of activity. In our case, we're modeling the domain of business as practiced at HBS. 
The Knowledge Graph gives us a common structure for data and allows us to create relationships across repositories to provide a holistic view of entities. So in this example, and I'm afraid it probably looks quite small to you, so I apologize for that, but hopefully you can at least see that we have Amy Edmondson in the middle here in green, and she is a faculty member. And by combining data from various sources, we can see that she is an alum of HBS and we can link to her alumni profile. We see that she's a faculty member and we can link to her faculty profile. We know that she's the author of multiple cases and publications. For example, she wrote Teaming Up to Win the Rail Deal at GE. We know that that case is about General Electric Company, and we have alums who are currently employed by General Electric Company, and we have alumni stories that are about those alums; those stories have topics of leadership, et cetera, et cetera. So the idea is that we just continue to make these connections, and one can start at any given point in this graph and expand outwards and explore. So a definition for semantic search: we define it simply as search with meaning. Semantic search uses our vocabularies and our ontology to understand the intent of queries. So in this example, in our ontology, Michael E. Porter is identified as a person and as a faculty member. Our faculty vocabulary lets us know that Michael Porter is an alternate name for Michael E. Porter. So for a user entering the combined search term Michael Porter, Michael Porter is recognized as a named entity as opposed to two separate keywords, Michael plus Porter. So search with meaning gives us Michael E. Porter as opposed to literal matches on query strings. We are using our library-managed vocabularies, such as company names and topics, along with entity vocabularies managed at the school and university levels, such as faculty and alumni names. In the case of alumni names, we took the extra step to disambiguate them by appending additional metadata for degrees and class years. So otherwise we would end up with a case where we have about 10 different people with the exact same first name plus last name combination and no way to distinguish between them when you're selecting from a list of terms. So again, I apologize for how small this slide is, but hopefully you can see the main points in blue. So we're providing an experience that probably looks familiar to you from Google. We're returning info boxes or knowledge panels on the right-hand side that focus on information about a particular entity and show that entity's relationships to other resources that are important to HBS. On the left side in this view, we are showing keyword search results from our selected sources. So on the right, we have explicitly related data objects from the graph, and on the left, we have keyword search results. And there's likely some overlap between the two, but the right-hand side is showing very explicitly related things that we know to be true, and the left is relying on keyword search to show you results. So why did we do this? We have multiple data sources, schemes, and vocabularies at HBS. So the situation is that we have metadata that's inconsistent across sources. 
We have metadata fields that even when the names of the fields are consistent across sources, they've been interpreted differently in different ways and applied in different ways from source to source. We have entities that are not uniquely identified across sources. So companies and some data sources are text strings, for example, and sources may be using different vocabularies. So local topic vocabularies, for example, would be used in different sources. One source would use one local vocabulary and another would use another local vocabulary. So we're essentially cataloging HBS assets in a consistent and systematic and controlled way. And our goal is to be able to make any data, take any data source, normalize it, scrub it, process it, and add it to our graph. So basically we're taking all the disparate sources and aggregating them into a single, intelligible, and searchable source. So to go a little deeper into our data source, we're going to go deeper into an example here. Here's an example of different metadata for the same entity. Faculty names are represented in multiple ways in the source data. So using publications in working knowledge, which is the library's online publication that features faculty work. We have a field or a property called WK faculty name. And the values in those fields are used as a byline format. So it's first name, middle initial, last name. And the relationship is not clear. So there's no stated relationship. It just says faculty name. Does that mean that this article is about the faculty? Did the faculty write it? It's not clear from the data. The faculty and research site on the other hand uses a last name, first name format, and identifies that the faculty member contributed to the work. So they have a risk contributor name field, which shows us, picks out the names are consistently identified and consistently formatted, and the relationship to the work is identified. They also have a field called HBS suggestion, which we think is related to the implementation of enterprise search at HBS, but it includes faculty names along with other data. And then again, we have alumni stories, which uses a JSON string that concatenates multiple facts about the faculty number. So we have ID, name, and title. The relationship is identified as featured. So HBS story featured faculty. Is that the same as about possibly, maybe even probably, but we're not sure. And finally, in all three sources, there is a field called HBS faculty, which uses a username format and doesn't identify a specific relationship to the publication at all. So A Brooks, A Moreno, J. McComber, no relationship identified. It's just picking out that a faculty is somehow related to this publication. And then again, another example. So of the need to resolve meaning in the data and the source data, we have different interpretations of one attribute. So HBS content type is an attribute that's common across all sources. The faculty and research data uses a vocabulary of publication types that we recognize, so books, book chapters, articles. External relations who owns the alumni stories content, they call everything all of their stories, news. So there's only one value, it's all called news. Maybe that's an article, maybe not. Working knowledge uses the content type field to describe categories of articles. So everything that they produce, we would consider an article, but they have categories that indicate specific focus, the specific focus of an article. 
So it might be about a podcast, it might be about a working paper that a faculty member wrote, it might be about a book, et cetera, et cetera. So how did we do it? It's a kind of very iterative process, but it's essentially the way this works is that the information management team of librarians analyzes source data. We specified how to translate the source data to our ontology, and we were simultaneously developing the ontology as we went. We translate source data and give those specifications to the technical team, just essentially our semantic technical lead and developer. He converts the source data into a triple structure with URIs for the knowledge graph. So the triple structure is subject predicate object and the knowledge graph is essentially a series of statements that look like this. So Amy Edmondson is author of Teaming Up to Win the Rail Deal. And when we apply URIs to that, we have the little piece that you see here where it's a URL and then a specified relationship and then another URL to identify the publication. So the knowledge graph is just, it's still, I think, relatively small. I think it's over a million triple statements, but this is what the data looks like. So since launching the proof of concept and demoing the site to colleagues at the business school and beyond, the use cases have been coming out of the woodwork at HBS. So some examples include using the graph to give the MBA program a view into the cases being taught in their required curriculum and giving HBS initiatives, which are topic-focused research areas at HBS, giving them a view into the publications, events, faculty and activities of the school related to the topic. They're essentially interested in promoting the activity of the school in relationship to a particular topic or industry. And these are all excellent use cases for an HBS knowledge graph and we plan to implement them. So some of our takeaways, there were many takeaways, but the ones that the highlights are that there are many principles and conventions from the library cataloging world that informed our work on the construction of a knowledge graph. So we borrowed from vocabularies for specifying relationships among entities and roles people have in relation to publications. We relied on our knowledge of library conventions for tracking name changes over time and disambiguating conflicting names. For resource description purposes, we start with the basic principle of clarifying what it is that we are trying to describe and whether and how references to the resource or entity appear in our sources. Scoping our project was key. So we focused on specific use cases and problems that we were trying to solve just a few. And the types of data that address those use cases, as well as the HBS repositories that best represented those types of data. So we chose use cases, specific content types or class types or types of data that address those use cases. And then from there, what are the specific repositories that would best help us address those use cases? And finally, if there is one thing that we learned above all, it is the importance of having unique identifiers for entities across data sources. We knew this going in, of course, but we really knew it by the time we were finished. It was so helpful to have unique identifiers anywhere we could find them in the source data. And I would say in sum that any data-driven project we undertake is highly dependent on clean, trustworthy, uniquely identified data. 
And when undertaking a semantic data project, you would do well to ask your friendly neighborhood librarian for help. Thank you. Thank you very much. And I'd like to give you an applause. That's our tradition here. We know we're all at home and our offices. Yes, and thank you very much for that. We have a few questions waiting. Before we begin, I wanted to ask you a quick question. I find it interesting as more and more information and data goes digital is cybersecurity an issue for you all as well, not only in that someone could come in and steal information, but also change that information that's being stored. Yeah. So it is an issue for us. And one of the reasons we, one of the other considerations for scoping our data was being very careful about what kind of data we were exposing to the outside world. So in security in that sense, we weren't thinking so much about people coming in and changing our data, but we were definitely thinking about exposing internal data to the outside world. So for now, this site is behind login. We aren't using anything that people can't already find if they have an HBS login. So as long as we're behind login, we're sort of bypassing that question. But that's one when we, the goal is to open this up more once we have more permissions and security things in place. OK, great. Thank you. OK, let me get to our audience questions. The first one we have here is, do you have feedback from users? Do they understand the knowledge graph without introduction? To me, it sounds great. Great. We have feedback from users. So the idea is not that people should have to understand the knowledge graph in order to be able to use the source. We just, when we demo the site, we try to explain what it is that we were doing and why it's different from what others may be doing at HBS right now. But really, we've had great feedback from our users. And I didn't include a slide about comments. I could have, and I guess I should have. But some of the feedback is that, while you've solved a really big problem for us, this is what everyone wants to do at HBS. They want to have this view across. And that's helped us demoing this and talking to people about it has brought out all these use cases. And people can really see themselves in it, which is fabulous for us. That's what makes it exciting. Super, that's a big plus. Good. Our next viewer, we have two questions from this person. It says here, an excellent presentation. I can only agree. I would love to understand the project development phase and the proof of concept. Can you speak more about the proof of concepts? How did you decide what data was useful? What data sources did you use for the knowledge graphs? So there were several considerations. One consideration, which was actually pretty big, was what data can we get access to pretty quickly? So some of this, we had to request permission, and it wasn't so easy to get access. But for data that's available publicly on the HBS website, so the faculty and research data, it's the citations and data about their publications and the work that they're doing, that was easy to get. The alumni stories data, which talks about alums and also faculty members. That's also on the public website, so that was easy to get. And the work knowledge data, again, was available on the public website, so that was easy to get. So we were focused primarily on publication data, people data, and organization data. OK, good. That answer was the question. Yeah, that does. 
A follow up to that is, is this knowledge graph available to the public? Sadly, no, not yet. Any plans? It's in the future. We have lots of plans. It's definitely something that we want to do in the future. We have a lot of permissions functionality to put into place before we can do that. But definitely the goal, so this was our proof of concept. And the goal now is to scale it up and to start expanding the knowledge graph and incorporating data sets and making it available to more people, more users, than it's currently available to. OK, great. Good. Next question here. How exactly does this ambiguation work with same name or common name researchers? The viewer says here, I did not get that. Sorry. Oh, yes. OK, so when we were working with the alumni data, we discovered all these. If we were just looking at the mean data, we saw all these duplicates, duplicate values. So we had to look at other metadata related to the alums and start and bring that in to try to disambiguate between different names. So for example, we might have had a lot of John Smiths, but those John Smiths were in different. They took maybe they were not all MBAs. Some of them were either MBA students or exec ed student alums. So we included their degrees. And then we also further disambiguated by including their class years. So if they graduated in 1995 versus 2000, so we included the degree plus class year from other metadata that was already there and just propended it to the names. Wonderful. And Erin, that's all the time we have now. So once again, I'd like to thank you for your excellent presentation and also for taking the time to answer in some of our questions. A round of applause from around the world. We have over 50 countries represented with over 300 participants. So the world will thank you to that and continued success at Harvard. Thank you. Thank you. Bye bye.
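The disambiguation approach described in this Q&A, appending degree and class year to otherwise identical alumni names, can be sketched roughly as follows. The records and label format are invented for illustration, not real alumni data.

```python
# A minimal sketch of disambiguating duplicate names by appending
# degree and class year metadata, as described above.
alumni = [
    {"name": "John Smith", "degree": "MBA", "class_year": 1995},
    {"name": "John Smith", "degree": "MBA", "class_year": 2000},
    {"name": "John Smith", "degree": "Executive Education", "class_year": 2000},
]

def display_label(record: dict) -> str:
    """Build a unique, human-readable label such as 'John Smith (MBA 1995)'."""
    return f"{record['name']} ({record['degree']} {record['class_year']})"

labels = [display_label(r) for r in alumni]
print(labels)
assert len(set(labels)) == len(labels)  # each label now identifies one person
```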
Like many organizations, HBS has overlapping data in multiple repositories. These data are maintained in different ways for different business purposes, making it difficult to have a unified, cross-silo view of any given HBS-related entity. The library sought to address this challenge by creating a Proof of Concept for a Knowledge Graph that identifies unique entities (including people, companies and faculty works) across HBS repositories and defines relationships among them. This graph drives the data connections on our website, HBSKnowledge (HBSK). With this integrated structure we have uncovered many of the multiple and varying relationships among data across repositories. We have also set ourselves up to implement inferencing in future releases of the product to further enable the discovery of strategic information at HBS. In addition to structuring our data in graph form, we are using semantic search technology to enhance the search experience. We have leveraged our topic vocabularies, our company authority data, and our faculty & alumni authority data to steer users toward the most relevant information for them. The result is a product that enables discovery of 360 views of entities important to HBS. The HBSK PoC serves as an excellent product for demonstrating the promise of data integration and semantic search, the value of a Knowledge Graph in delivering on that promise, and the talent that exists in libraries for driving content structure and semantic technology. One of the lessons we (re)learned is that any AI initiative is only as good as its data inputs, and a foundation of well-structured, uniquely identified data is essential. To this end, many of our decisions about ontology and vocabulary development were informed by an understanding of library cataloging principles and practices.
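As a rough illustration of the subject-predicate-object statements with URIs that the talk and abstract describe, here is a small sketch using the open-source rdflib library. The namespace, URIs and property names are assumptions made up for the example, not the actual HBS Knowledge identifiers or ontology.

```python
# A minimal sketch of building triple statements of the kind described above,
# using rdflib. All URIs and property names below are hypothetical.
from rdflib import Graph, Namespace, Literal

HBSK = Namespace("https://example.org/hbsk/")   # hypothetical base namespace

g = Graph()
edmondson = HBSK["person/amy-c-edmondson"]
case_study = HBSK["work/teaming-up-to-win-the-rail-deal"]

# "Amy Edmondson is author of Teaming Up to Win the Rail Deal at GE"
g.add((edmondson, HBSK.isAuthorOf, case_study))
g.add((case_study, HBSK.isAbout, HBSK["company/general-electric"]))
g.add((case_study, HBSK.title, Literal("Teaming Up to Win the Rail Deal at GE")))

# rdflib 6+ returns a string here; print the graph as Turtle.
print(g.serialize(format="turtle"))
```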
10.5446/58075 (DOI)
to see if Arjun Sanyal is there. So can you hear me now? Oh, yes. I can hear you very well. Please go ahead, sir. So, greetings, everyone. I'll just be speaking on the idea of how we can rethink university librarianship in the post-pandemic scenario, where I'll just draw on the experiences from my own university. So basically, the big challenge for Indian universities in the wake of the pandemic, and I believe for universities globally, was getting students back to school after a long hiatus. Because staying at home, away from their friends, away from the physical classroom, was a sort of disturbing experience for them. And they got increasingly disinterested in academia and the university itself. So one of the major things that occurred to us librarians, when we saw the students back on campus, was that the onus of the responsibility lies on the librarians in equal measure, as much as it lies on the faculty members. So for the librarians, it was like going the extra mile in terms of commitment to bring the students back to their university selves. So what exactly was the problem with the students? There are three problems, which are listed over here: an indifference towards the academic curriculum, informational anxiety, and a sense of loss of belief in one's own abilities. So the first one is the indifference towards the academic curriculum, because they were away from face-to-face classroom teaching. They became very disinterested in studies, and there was a sense of academic privation. And the problem was made all the more complicated, or worsened, by the infodemic, the fake news that actually penetrated the social spaces. Second was the idea of informational anxiety. Now, we have all heard about the idea of library anxiety, which was propounded by Constance Mellon, and which was more so about the library itself, the size of it, which actually created a fear in the minds of the students, a sort of inhibition. But when I come to the idea of informational anxiety, it is all the more caused by digital poverty or digital redlining. And this is basically marked by a strong sense of distaste towards all sorts of educational information resources, particularly the digital ones. Lastly, there is a strong sense of negative fatalism about oneself. So there is a sense of fear, a sense of "I can't do it", a sense of "am I fit for higher education?". So basically it was a big challenge for us. What we thought was that the best way was to pique the interest or the curiosity of the students in academia by unlocking their creativity. So we worked on a library roadmap. And in the library roadmap, we actually created a philosophical template, which is basically about rediscovering oneself. So the central idea is rediscovering yourself by bringing you back to your university self. And it has four aspects. The first one is intersectionality. Intersectionality is basically a theoretical idea where a person's social, economic, cultural, and various other identities actually create different levels of advantages and disadvantages. And given the fact that we have students from a myriad of social backgrounds, the idea of intersectionality was all the more relevant. Second was the idea of epistemic justice. As someone who has been entrusted with the task of instructional librarianship, I see that the idea of a student as a knower, as a speaker, is very important.
And one of the things which we must do is to make the students rediscover their love for learning, their love for knowing. So one of the major ideas that comes from this is the idea of radical imagination. We allow, or we motivate, or we embolden students to think out of the box. And in the process, they should disrupt the status quo, which is the final idea. Because only by disrupting the status quo, only by disrupting the established knowledge template, can we allow further research to progress. So basically, the library roadmap for CUHP, my university, was, first of all, an open house session with students regarding how the library services can be improved. And in this, we used mind-mapping tools. Second was a sort of experimental makerspace-cum-third space, which was not consciously about getting on with their studies, but about allowing them to do whatever they love. So somebody could just code, somebody could just write poetry, somebody could just paint. And third was organizing motivational lifestyle sessions for students using lifestyle coaches from outside the university. So after two months, we did a survey. I'm sorry, not to pull the rug out from under you, but could you please wrap up? We've got a lot more to get to. Please, last sentence. OK, OK. So after two months, we saw university students coming back in large numbers. And there was an uptick in demand for the third space and the motivational lifestyle sessions. More importantly, we saw students demanding sessions with both the faculty and the library staff to make learning participatory and enjoyable. So the main motto for this presentation is: keep learning, keep enjoying, stay hungry and keep learning. That's all. Thank you. Fantastic. Thank you very much. I'd like to give a big round of applause, of course, to all of our participants. Really, really fantastic.
Post-pandemic, the problem with Indian universities was bringing students back to their former academic selves. The fact of being away from the universities for so long has led to students developing an indifference towards the academic curriculum. Secondly, they have developed a strange problem that goes beyond simple “library anxiety”, as Constance Mellon termed it. I call it “informational anxiety”. This peculiar malady is more about developing a distaste towards educational information resources, compounded by the fact of ubiquitous digital poverty. Thus, being out of touch with the library actually leads to a sort of anxiety regarding seeking educational information digitally as well as physically, shifting the students’ focus away from academia. The CUHP (Central University of Himachal Pradesh) library team devised a novel strategy to make students discover their love for their studies as an enjoyable pursuit and not as a tiresome obligation. First, we held mind-mapping sessions with students, seeking information on how library services can be improved. After the completion of the sessions, we emailed the mind maps, along with surveys, to the students for any improvements they saw fit. Secondly, we started an experimental makerspace-cum-third space where students could do whatever they felt like: writing poetry, painting pictures and so on. Thirdly, we held some library motivational sessions for students relating to healthy lifestyles. Within two months, we saw students developing an interest in coming to campus, particularly to the libraries. In fact, the makerspace-cum-third space and the motivational sessions appealed a lot to them and helped them rediscover the joy of learning by thinking passionately about the social milieu. This, I believe, will be a beneficial approach for libraries in general, because learning succeeds when the individual thinks creatively, enthusiastically and in a jargon-free manner about things that appeal to him/her.
10.5446/58076 (DOI)
Hi Tim, welcome. Thank you. So please, and your presentation is shared already? Let's see, it is now. Yep. Super. Then go ahead sir, please. All right, my name is Tim Tully. I'm the business librarian at San Diego State University. In my presentation is entitled Trade Deficit Question Mark. Through professional practice, I have realized from allegorical examples, from publications such as Business of Fashion, which is the publication in the fashion industry, Mass Device, which is a reputable publication for the medical device industry that's cited in BCC research, that there's not coverage or indexing or abstracting of a number of trade journals and trade news sources in the tools that we use to discover these articles. And oftentimes they're not even listed in Ulrich's web. So to determine whether my assumption was correct, I designed a study where I went through thousands, over a thousand discrete industry reports from first research owned by Dunn and Bradstreet to compile a list of quality trade journals to see whether or not these valuable tool or valuable publications for market research, for understanding market sizes, consumer trends, competitive analysis and so on, were available in the tools which we used to find them. And the results were shocking after I compiled all of these sources and then ran an analysis on their Ulrich's web records. Out of the 768 publications, I found over 20% of them didn't even have records in Ulrich's web. And then out of the 588 that had Ulrich's web records, if you had a subscription to every major business aggregator from Business Source Complete, ABI, Inform Collection, Factiva, you would only have full text coverage for 52% of those of those titles. And taking that analysis even further, it's roughly 60% if you had every aggregator available from all of those companies that sell those business aggregators. So every product from Gale, ProQuest, EBSCO, Lexis, Nexus and Factiva. So the deficit of trade news sources which give you really that kind of real time insight into market conditions is lacking and libraries tools are not providing proper insight into that kind of research. I also analyzed the drop off dates of when the coverage is ceased for titles that weren't covered currently. And the majority of those dropped off between 2010 and 2018, but 2007 had the greatest instance of coverage drop. Also, there's not a lot of uniqueness among all of these aggregators as well. In my analysis, I found that Factiva had 21 titles, which was roughly 3% of the titles in the Ulrich's Web Analysis that were unique, which was more than any of the other aggregators. And that's a fairly low number when you really start thinking about it. So through all of this, the question is what is we as business librarians can do about it to provide our users with the information and discover articles on industry trends, market sizes and things of this nature for entrepreneurs and business students and faculty who are looking to commercialize their research. So what can we do in the absence of the vendors doing it for us? And please, please wrap up your thought just for time. Okay, yeah. So we the my idea is to work with metadata librarians to create title level discovery and network and community zones or for us to create our own index like the insurance division of the SLA did when their publications dropped out of the HW Wilson indices. Super. Thank you very much. Thank you very much. Appreciate it. Yes. And just one point of clarification. 
I made a small mistake: after we finish here, it's not "meet the speakers," just to be clear. We'll be going to our poster exhibition. So just so we're on the same page, it will be our poster exhibition that we'll be doing after we hear from our last speakers here.
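A rough sketch of the kind of coverage tabulation described in this talk might look as follows in pandas. The three sample rows are made up for illustration, whereas the real study worked from 768 titles checked against UlrichsWeb and the major business aggregators.

```python
# An illustrative sketch of tabulating UlrichsWeb records and aggregator
# full-text coverage across a list of trade titles; the data here is invented.
import pandas as pd

titles = pd.DataFrame([
    {"title": "Business of Fashion",  "in_ulrichs": True,  "full_text_any_aggregator": False},
    {"title": "MassDevice",           "in_ulrichs": False, "full_text_any_aggregator": False},
    {"title": "Example Trade Weekly", "in_ulrichs": True,  "full_text_any_aggregator": True},
])

pct_without_ulrichs = 100 * (~titles["in_ulrichs"]).mean()
with_records = titles[titles["in_ulrichs"]]
pct_full_text = 100 * with_records["full_text_any_aggregator"].mean()

print(f"{pct_without_ulrichs:.0f}% of titles lack an UlrichsWeb record")
print(f"{pct_full_text:.0f}% of titles with records have full-text coverage somewhere")
```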
Trade journals and trade news sources are an invaluable source for business students, entrepreneurs, and job seekers, but there have not been any recent analyses to determine whether these sources are adequately represented in the aggregator databases used by Business Librarians. In this study, the researcher compiled a list of 768 quality trade news sources using the First Research industry reports from ABI/INFORM Collection and compared the full text coverage and the currency of coverage of these sources in Business Source Complete, ABI/INFORM Collection, Business Insights: Global, Nexis Uni, and Factiva using UlrichsWeb. This study identified whether there was full text coverage of these sources in additional aggregator products from EBSCO, ProQuest, Gale, and LexisNexis. The results of this study indicate that there is a significant lack of full text coverage and current full text coverage in the business aggregator packages and other aggregator packages available from library database vendors. Lastly, this study offers a few suggestions for how librarians can collaborate to increase the discoverability of trade sources that are not available in these aggregator packages.
10.5446/58077 (DOI)
Good, let's see here. Paula, are you there? Hello, everyone. Nice to see you there. I'm Paula Corti, and I'm from Italy. Welcome, welcome. And yes, make sure your screen is shared. Great, fantastic. And you may begin whenever you're ready, Paula. OK, thank you very much. So I am the Open Education Community Manager at the European Network of Open Education Librarians, and I've been kindly invited to share with you one of the experiences that we had as a network in developing an open educational resource, let's say, which is particularly related to the benefits of open education. As you can see in the poster, we started working on the benefits of open education because we wanted to produce a toolkit that was meant to support librarians mainly, but everyone else at the university level, in disseminating about the benefits of open education itself for different stakeholders. And we wanted this to be available for everyone and to be reused and adapted. So the toolkit consists of three tools that are Twitter cards, slides, and posters that can be downloaded. And as I said, adapted because the format is very simple. Who worked on it? Librarians in the European Network of Open Education, 31 librarians actually, spread around different countries in Europe so that we could have, in the end, 16 different language versions of this because we wanted it to reach out to people where they are. With their own language and to understand it immediately so that they can join the open education community as fast as they know the benefits. The benefits are for students, teachers, institutions, and the citizens at large. Why did we do this? Because we want to advocate about open education consistently with the UNESCO OER recommendation that was approved unanimously by 193 members of UNESCO in 2019. And we wanted also to offer easy to use advocacy tools for librarians that didn't require them specific graphic skills, but if they have them, they can adapt those tools as far as they want. And also, we are providing guidance to attribute properly to adapt the toolkit to the logos of the institutions and the colors of the institution if needed, but also to enlarge their adaptation farther if they are willing to. What can you do with this toolkit? You can download it from Zenodo and you can add your logos or your photos or your colors to it and then use it to advocate for open education whenever you have a chance to so that people will know better about the benefits and might want to join the open education community. Super. Great. Thank you very much. And just a quick side note, congratulations to Italy for hosting completely different subject, the wonderful Eurovision Song Contest a few minutes ago. Thank you. Or at least most of Europeans were watching that too and stuff. So thank you very much. Look forward to seeing you also later in our speaker's corner. OK. OK.
The ENOEL Toolkit features reusable and adaptable templates for Twitter cards, slides, and leaflets. It can be used at any institution to convey the convincing benefits of Open Education. It results from the work of the European Network of Open Education Librarians in 2021 and the beginning of 2022 and aims to help raise awareness of the importance of Open Education. The Toolkit points out benefits for four stakeholder groups: students, teachers, institutions, and society at large. The ENOEL members have helped translate the Toolkit into 16 language versions to make it more effective and inclusive at the local level in different EU countries. Great attention has been given to making and keeping the ENOEL Toolkit as open as possible. Thanks to its graphic simplicity and its open licence (CC BY), librarians willing to reuse it don’t need advanced skills to adapt it to their specific needs. ENOEL members designed the Toolkit to welcome the addition of institutional logos, changes in colours, and fonts to adapt to local communication guidelines, standards, and tools. Reusers can change the order of the benefits according to the preferences of each local context and specifically identified target groups. At the beginning of each file, instructions guide users to understand each tool’s structure, correctly attribute it when adapting, and find items in the files themselves.
10.5446/58078 (DOI)
Great. Thank you very much. Hi Terrence. Please let me know if you have any issues hearing me or seeing my slides. Looks good and I think yep the screen is shared. Looks good. You're welcome to begin sir. Great. Thank you very much. So today I'll be talking about university technology transfer and university libraries. I am an entrepreneurship librarian at Michigan State University. So you're probably familiar but university technology transfer really speaks to the broad set of operations at a university where they're looking to take scientific innovations and support the process of bringing those innovations to bear in the market place. So could be many things from licensing of patents to actually starting up businesses as well. So one of the motivations for me is because I'm excited by the prospect of seeing those real world impacts play out and the university plays an important part in this burgeoning always growing always becoming more important I think technology landscape, which includes financial capital and startups and investors and governments but also the university and that the library can play an important part in this. So this is a very brief rundown of the technology transfer process, where you're going from research and innovation all the way to business and the university libraries are really already experts in business information literacy, but also making clear to different audiences where there's language differences where things are going to be changing as you're going from academia to business, and really making the process smoother, less stressful process for people, but these coaching and market sizing and market analysis and introductory patent search these are all things that business librarians often already do. So that's one of the first challenges to being more involved in this work. It may be outside the scope of your business libraries current operations, and there may be cultural pushback. It may be outside the scope of your entire libraries workflow. And that also is reflected in the materials licensing that they may not be licensed to support technology transfer work. And of course this work can be time intensive so scoping it is incredibly important. So some of the upside and why that might be worth getting involved in any way. So there's an incredibly high impact potential. And in my experience that these people may find you anyways, if you're involved in entrepreneurship research, so you can be proactive and be better prepared to support their work. If you're planning for it. So some of the best practices are to know what your universities research innovation strengths are. And to better understand the TTO and your institution to know who might be best coached and to know your licenses. But if you're curious, please reach out to me and join us in the discussion and thanks for your attention.
This poster will depict the innovation ecosystem relevant to University Technology Transfer (UTT), particularly focusing on the current and potential roles that academic libraries play in supporting and interacting with these functions. In studying the process of UTT, the audience will gain insight into how UTT functions, the potential value their expertise can provide, while also helping develop an understanding of the terminology and metrics for success used in the UTT community. An important challenge for responding to the needs of this group is the constraints on what is available to them through University Libraries due to licensing and budgetary limitations. The poster will explore the opportunities and resources available that map to UTT needs while navigating those limitations, featuring open access and other available resources. Though most informed by the experience of the home country, differences in home-geography governance will also be reflected in the poster.
10.5446/58079 (DOI)
And with that, let's just see if we're ready for Julian Franken. Julian, hello, how are you? Hi, I'm fine. Thank you. Excellent. Yes, please remember to share your screen. And once you're ready with that, we can begin with your presentation. Yes. Are you seeing my screen now? Yes. Yes, we do. Yes. All right. Hello, everybody. My name is Julian. I'm from the TIB, the Leibniz Information Centre for Science and Technology, in Hanover. My poster is titled, How to Support Early Career Researchers with Identifying Trustworthy Academic Events. I'm working in a DFG-funded project called ConfIDent. And we are about to, or we're supposed to, build a platform that will support researchers with finding the right academic events, like conferences, like this one, actually. And we are supposed to support them in avoiding the wrong ones, so predatory or fraudulent ones. Recently, there was a report published about predatory practices, about predatory journals and predatory conferences in particular, by the InterAcademy Partnership. One of the main core insights, I would say, is that predatory conferences, or conference quality, is best seen as a spectrum, as you can see here on the top. I copied this infographic from the report. And typical markers are also shown here to illustrate what a predatory conference, for example, looks like. And some of those typical markers for predatory ones are, for example, that in extreme cases, they don't take place at all. They can be a complete scam, or they only pretend to have a peer review process but actually don't. So ConfIDent wants to keep those conferences and events out of its database. As you can see here on the top, I marked those types. So in the infographic below here, I tried to illustrate how the rough process looks, how we intend to keep those out. So as you can see here first, a new event is entered. And then we want to conduct some automated checks. For example, if an organizer is on a blacklist, we want to keep this event out. Then, if it is not on a blacklist, some markers that can already be found here and can be identified by a machine are highlighted and then sent into the manual check so that a professional can look into those. Again, if that check is passed, then they enter into the ConfIDent database. And after that, we hope to involve the community as best as we can and give them the opportunity to flag events that they deem predatory or not trustworthy. And those will be reviewed again. But in the end, the most important part of this process is still the researchers' own scrutiny. So we can only be as good as our data probably is. And we still have to rely, or the researchers themselves still have to rely, on their own scrutiny and do their own investigation and research. We hope to inform them as best as we can. And during this process, as you can see on the bottom, I tried to mark that the certainty about the quality increases during this process. But there are always some challenges with this. For example, the first is: what if the entered data is false in the beginning? Somebody simply lied about something about the event. Next, for example, and one example for that is if an organizer is mentioned that is actually not really involved in the organization of an event. That would be a problem too. This is one of the major challenges. In general, this addresses the issue of how to codify these markers at all.
So most of those markers that are mentioned in the report can actually only be checked by a thorough investigation by the researchers themselves, by looking at the website, for example, and really digging deep into the conference. And for us, trying to get this into a database is a challenge. Could you please wrap up your thought, Julian? Last sentence, please. OK. And that's basically all of the challenges we will probably encounter. Thanks for your attention. Super. Thank you very much, Julian. And of course, just a reminder, when we're done here with our short presentations, you'll have a chance to actually go into the virtual rooms, meet the speakers, and continue to talk with them one on one. So we're looking forward to that as well. Good. Thank you very much, Julian. Appreciate it. Thank you.
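The screening flow Julian outlines (an automated blacklist check, then manual review of machine-detectable markers, with community flagging later on) could be sketched roughly as below. The organizer names, markers and threshold are invented for illustration and are not the ConfIDent implementation.

```python
# A simplified, hypothetical sketch of the automated screening step described
# above: reject blacklisted organizers, flag machine-detectable markers for
# manual review by a professional.
ORGANIZER_BLACKLIST = {"Totally Real Conferences Ltd."}  # made-up entry

def automated_checks(event: dict) -> tuple[bool, list[str]]:
    """Return (rejected, markers_for_manual_review) for a submitted event."""
    if event["organizer"] in ORGANIZER_BLACKLIST:
        return True, ["organizer on blacklist"]
    markers = []
    if not event.get("peer_review_described"):
        markers.append("no peer-review information given")
    if event.get("fee_eur", 0) > 1500:  # arbitrary illustrative threshold
        markers.append("unusually high registration fee")
    return False, markers

event = {"name": "Intl. Conf. on Everything", "organizer": "Unknown Org",
         "peer_review_described": False, "fee_eur": 1800}
rejected, markers = automated_checks(event)
print("rejected:", rejected)
print("flag for manual check:", markers)
```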
Early career researchers, especially when lacking good support structures, can have difficulties identifying academic events like conferences that are of questionable integrity („predatory conferences“). In the ConfIDent project we aim to build a digital platform where researchers can inform themselves about academic events and get support with assessing an event’s trustworthiness. During the project we explored different strategies to evaluate an event’s trustworthiness, means of conveying this evaluation to the users of the platform and helping users to make their own judgements. This poster presentation will expand on those different strategies, discuss the currently preferred solution and highlight challenges.
10.5446/58081 (DOI)
Can I ask real quick, where are you physically at right now? Which city? I'm physically in Dresden, Germany, but working for the University of Mannheim. Okay, wonderful. Thank you very much. And is this also going to be that Lars will be joining you? Yeah, he's also already here. Yeah, I'm here and yeah, ready. Super. Hi, Lars. Hi. Are you also in Dresden in the same room? No, I'm in Mannheim actually. In Mannheim. Okay, wonderful. Yeah, that's the fun part about the digital world. We're all beaming in from other places. Wonderful, gentlemen. You have about three minutes. Please, you may begin. Thank you. So hello and thank you for hosting us. I'm Lars Oberlander, and together with my colleague from BERD, we would like to introduce you to our interactive virtual assistant and how we developed it to promote data usage. As our illustration demonstrates, researchers might face the following situation. Processing data may be subject to legal restrictions. At the same time, researchers want to act lawfully. The rules for processing data are complicated and not easy to apply. And therefore, researchers may be nudged not to use or share data. To address this problem, we developed iVA as a legal information tool for researchers to foster data usage. But how did we develop iVA? As a first step, we had to identify the relevant legal regulations, especially the GDPR and, for Germany, federal and state law. Then we had to filter these broad law texts for applicable and relevant regulations, understand their meaning, and structure them from a practical point of view. As a result, we got an examination scheme for assessing research data usage to comply with privacy rules. As this scheme was still way too extensive and packed with legal background information, we stripped the scheme of all this information and extracted the structure into a decision tree. Yes, and as the decision tree on the other hand lacks the necessary background information for users to answer the questions in the tree, we transferred it into an interactive object using open source software. Every step of the underlying decision tree is presented on a single screen in this tool, for which we also developed a consistent structure, which can be seen here on the poster. So on the right-hand side, users see the currently examined criterion representing the step in the underlying decision tree, which is then followed by an explanation of the criterion, which also prepares the actual question the user has to answer to advance in the tree. So the question also comes with answer options, usually three of them: yes, no, and I'm not sure. And on the left-hand side, the users can see a continuous table of criteria to show the structure of the whole process and the whole decision tree, which also highlights the currently examined criterion on the screen itself. What was also important for us was to implement a final screen with an actual outcome, so whether the GDPR is applicable or not in this case of iVA, which then also can be easily saved and copied and then used for further discussion with the responsible data protection officer. Yeah, and I think that is already the three-minute rundown of iVA and its functions, and we're happy to answer any questions at our booth in the subsequent session. Super, thank you very much. Yes, two minutes, 50 seconds. Excellent. Great job to both of you. Good.
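A decision tree of the kind behind iVA can be represented as a small data structure in which each screen asks one question and routes to the next node or to an outcome. The questions and outcomes below are heavily simplified illustrations made up for this sketch; they are not the actual iVA content and certainly not legal advice.

```python
# A minimal sketch of an interactive decision tree with one question per
# screen, three answer options, and final outcome screens.
TREE = {
    "start": {
        "question": "Does your research data contain information about identifiable persons?",
        "answers": {"yes": "node_anonymised", "no": "outcome_not_applicable",
                    "not sure": "outcome_ask_dpo"},
    },
    "node_anonymised": {
        "question": "Is the data fully and irreversibly anonymised?",
        "answers": {"yes": "outcome_not_applicable", "no": "outcome_gdpr_applies",
                    "not sure": "outcome_ask_dpo"},
    },
}
OUTCOMES = {
    "outcome_not_applicable": "GDPR likely not applicable to this data.",
    "outcome_gdpr_applies": "GDPR likely applies; document your legal basis.",
    "outcome_ask_dpo": "Please consult your data protection officer.",
}

def run(tree, outcomes, node="start"):
    # Walk the tree until an outcome node is reached; unknown answers route
    # to the cautious default of asking the data protection officer.
    while node not in outcomes:
        step = tree[node]
        answer = input(step["question"] + " (yes/no/not sure) ").strip().lower()
        node = step["answers"].get(answer, "outcome_ask_dpo")
    print(outcomes[node])

# run(TREE, OUTCOMES)  # uncomment for an interactive walk through the tree
```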
Collaboration on research data may be restricted by legal regulations in the areas of privacy or copyright law. Researchers face questions about whose data can be reused, which data can be shared, and how results can be stored or published. Still, legal knowledge does not belong to the main skillset of most data-oriented researchers and legal use cases regularly demand an individual assessment. Answering such questions of data collaboration is often time-consuming and resource costly. Further, these uncertainties may even nudge researchers not to share, use or reuse data at all. Even with appointed data protection officers and open science agents addressing these problems on an institutional level, a deliberation of each individual situation may not be possible due to time and staff limits. Out of this melange arises a demand for accessible and easily applicable legal information. In the Business, Economic and Related Data initiatives BERD@BW and BERD@NFDI we have been developing an interactive Virtual Assistant (iVA) to address this demand for legal information. iVA helps researchers and data service providers to understand the fundamental data privacy regulations and therefore enables them to evaluate their legal possibilities of data usage. With specific questions and the guidance of well-placed bits of information, iVA leads its users through a decision tree to convey the fundamentals of privacy laws. It enables users to contextualize the remaining uncertainties and provides a basis to facilitate further consultation of experts. iVA connects the theoretical knowledge and the user’s custom interest, which increases the expected learning effects and allows its users to apply the acquired knowledge directly to their own projects. At the INCONECSS Conference, we would like to share how iVA was created as an openly available and self-paced learning module that can be extended to further support data collaboration and FAIR principles.
10.5446/58082 (DOI)
Go ahead, please. Omit Giazvant, your three minutes. Go ahead, please. Hello again. We have started a chatbot at the ZBW. The reason that we are doing that is for user support when we are not in the library and we are out of working hours. And another reason is to support the team in EconDesk reference, so that they do not need to spend time answering simple questions, so they can... Actually, my internet connection just got sort of, you know, disturbed. Actually, I am sorry, but my internet connection got disturbed. No, no problem. If you could wait a moment; Omit is in the middle of his presentation, and we will invite you right back in afterwards. Okay. Okay, so can I present after Omit? Yes, after Omit you'll be the last speaker. Thank you. Thank you very much. I apologize, Omit, please go ahead. It's okay. So, yeah. Right now we have worked on the different use cases. The first use case, which we are almost done with, is around an FAQ chatbot, and there are questions about EconBiz and questions around the library, like working hours and "can I borrow this book from that part?", or simple questions like this. The whole idea behind the chatbot is something like this. There is a user interface, and the middle steps, which are the back end of the chatbot, include pre-processing, and that itself includes techniques like cleaning, correction and phrase detection for detecting the correct intent. And another part, which is the central part of it, is understanding. We use intents and stories and dialogue flows to understand what the user says and what the user requires. And another thing that the chatbot will do is to return an answer, which we call actions, against the intents. And finally we use the same user interface to return the answers to the customers. So, we have labels and intents, and 10 different labels. These labels divide our chat transcripts into rough categories. We call them use cases; each use case has different intents. Okay, and fortunately we have specific and enough examples for each label or each use case. And intents are assigned, on a four-eyes principle, in the previously labeled chat transcripts. For example, for use case one we have, as I said, questions about the library. So we have labeled them and we use them for training our machine learning core that will be used in our chatbot. And the first steps are working on and evaluating the chatbot personality, and evaluating dialogue flows for the chatbot-only use cases with the EconDesk reference team, and improving them. And later on, we will work on evaluating chatbot use cases for the EconDesk team and later with users, and review and eventually enhance use cases, because we still need to work on the use cases to improve our chatbot precision and work on more intents, actions and examples for the use cases, because these are, as I said, training data, and when we have more training data we have a more precise chatbot. Then finally we will develop a high-fidelity vertical prototype for the chatbot-only use cases. Here is how... Could you please wrap up your thoughts? This is the final one, and here is the contact for further questions; we will see you in the booth. Fantastic. Thank you very much. Excellent. Okay.
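The project itself builds on the Rasa framework; purely to illustrate the general idea of training an intent classifier from manually labelled chat transcripts, here is a minimal scikit-learn sketch with made-up example utterances. It is not the EconDesk data or the team's actual pipeline.

```python
# An illustrative sketch: learn to map user utterances to intents from a small
# set of labelled examples, analogous in spirit to the chatbot's NLU training.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

examples = [
    ("what are the library opening hours", "library_hours"),
    ("when does the library close today", "library_hours"),
    ("can I borrow this book from the reading room", "borrowing"),
    ("is interlibrary loan possible for this title", "borrowing"),
    ("how do I search for journal articles in EconBiz", "econbiz_search"),
    ("find literature on monetary policy", "econbiz_search"),
]
texts, intents = zip(*examples)

# TF-IDF features plus a simple linear classifier over intent labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, intents)

# Predicted intent for a new utterance (with so little training data the
# prediction is only indicative; a real system would use far more examples).
print(model.predict(["when do you open on saturday"]))
```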
The Research Guide EconDesk of the ZBW – Leibniz Information Centre for Economics is to be supported by a chatbot in the future. Research Guide EconDesk staff answers questions on literature search and library services and supports users with their individual data searches. On the one hand, the chatbot should support the colleagues from the existing EconDesk chat team in processing common user requests, which have increased due to the pandemic, and on the other hand, it should help to expand the range of services to support users from business and economics on the EconBiz portal. Over the last year, our cross-departmental team has been working with different stakeholders on the use cases and is currently working on our first prototype of the chatbot system. We made use of conversational UX design evaluation methods for designing the chatbot persona and conversation flows for later implementation. Our chatbot has been developed based on NLP (Natural Language Processing) techniques. Machine learning and rule-based strategies are the main components of this approach, and RASA is our main development framework. Another essential part of our project is preparing data for training and testing the machine learning algorithms. We manually labelled real chat logs to make use of this data for our purposes. The data include intents or user inputs, actions or chatbot answers, and stories or conversation flows. Stories structure flows of conversation and are fundamental parts of the chatbot. Intents are the basis for training the NLU (Natural Language Understanding) and are used to indicate users’ purposes. They are created based on users’ intent and librarians’ experience, must be unique, and are then assigned manually in the chat transcripts.
10.5446/58084 (DOI)
Hello everyone and welcome back. I hope everyone enjoyed their time at our social event this afternoon and we're very happy to welcome you back to our next event for our panel discussion this evening. Before we begin, as I mentioned, I try to make it a little bit interactive and fun. So I do have my little trivia quiz here. And I wanted to ask everyone, kind of interesting, does anyone know what the population of the European Union is about? Thinking maybe 200 million, 300 million. It's actually the EU has a population of around 450 million people, which is even larger than the United States. So that's always a fun fact to know when we're talking about the power of the EU. And the second one, I'm an American and I'm always surprised myself how many Americans don't know this answer. And that question there is, what is the capital of Canada? And sometimes Americans will come back and say, it's not Toronto, is it? And that's correct. It's not Toronto. The answer is actually Ottawa. And so, yeah, it's just one of those fun things, but especially for Americans, we only have two neighboring countries. So it's very important that everyone knows that Ottawa is the capital. And hopefully one day I'll have a chance to visit it. Good. Now, as we're coming back, it is time for our panel discussion. And the panel discussion today is titled, Potential of Artificial Intelligence for Libraries, a New Level for Knowledge Organization. And this is really not only a fascinating theme, but it's something that we could talk about for hours. And I'm very excited that we have a distinguished panel with us today. Some of the questions that we've prepared, just to give you an idea of the direction we're gonna go, is we wanna ask, what kind of support do researchers need from libraries? How might support and services benefit from artificial intelligence? How can libraries best support the research process and add value using AI? What might be potential drawbacks? And even, how will the work of librarians and researchers change? Artificial intelligence and humans working together. What will AI excel at? And what will humans excel at? Yeah, and before we begin, one important thing, very important, please feel free anytime to ask your questions. As I mentioned before, there are two ways to do that. One, we have our QR code, which will be blended here underneath me in a second. So you can hold up your phone to the QR code and type that in. Hopefully the technical team can blend in the QR code. That would be great. Ah, there it is. Thank you very much, feeling done. And of course, the second way that you can do that is our interactive tool. That means you just scroll down on your screens to the interactive tool, and then you can type in your questions too. So we look forward to hearing your questions and trying to answer them. Yeah, in addition, we have here several speakers, and the way we're gonna do this is to kind of imagine we're all on a podium together in a room. What I'm gonna do is I'll introduce one speaker, say one or two words about him or her, then that speaker will have about three minutes to have an opening statement. Once that opening statement is finished, then I will introduce the next speaker, and the next speaker, him or her, will have also about three minutes for an opening statement. Once everyone's opening statements are done with, at that point, then we can come together. 
I have a few questions I have prepared already, and we're certainly looking forward to your questions and your comments for our Q&A period. So without further ado, I'd like to introduce our first speaker, and I've said this to our speakers before. If I mispronounce anyone's names, please let me know immediately. I won't be offended. The first speaker is Cecile Christensen. Is that correct? And Cecile, she's the Deputy Director General at the Royal Danish Library, and is responsible for the digital transformation of the organization. She's worked with communication and digitalization of the public sector for a number of years, and she has a law degree, LLM, from the University of Copenhagen and University of London. So for our first introduction remarks, I'll turn the floor over to Cecile, please. Thank you very much, David, and thank you for having me here. It's a great honor. I am, as you say, I'm the Deputy Director for IT Digital Transformation and Communications at the Royal Danish Library. I've worked there for about a year, and as you say, I've also worked with the digital transformation of the public sector in Denmark in many different roles for many years. I have been working for an agency of digitization, where I used to be responsible for the Danish National Digital Signature. I also worked for the city of Copenhagen, where I did a lot of different digital initiatives. And I also worked with solutions, implementing automation, machine learning, chatbots, speech analytics, speechbots, voicebots, and the different things in different citizen services. But now I'm concentrating on the library and the research library and all the digital transformation in that regard. And at the Royal Danish Library, our strategy states that it is necessary for our specialists to be aware of new technologies and by taking a role as competent data stewards and technology stewards, we will maintain the value as wayfinders for our users. And we hope to embrace new technologies and tools which can make a search, discovery, and mapping of academic literature much better and improve the discovery and review processes and make it faster for both researchers and students. We have already implemented some solutions, but we think there's a great potential for more in the future. And we hope to be able to embrace those new technologies even more. Yes, back to you, David. Thank you very much. I'd like to move on now to our second speaker, and that would be Martin Kuisner. Martin, am I saying that correctly? Kuisner? Not really, but it's okay. No one can pronounce it. Let me try. Please give it a try. Kweissner. Kweissner, ah, sehr gut. Kweissner, Martin Kweissner. Thank you. Excellent. The researcher in our group, and Martin, he has an MSc in management and applied economics from the Johannes Kepler University Lins, and he's currently a PhD student at Rur University Bochum. In his research, he uses data science techniques as well as meta-analysis to investigate entrepreneurial activity, as well as entrepreneurial ecosystems. It's my pleasure to give the digital stage to Martin for his first opening statements. Martin, please. Thank you very much for the warm welcome and also welcome. Hello to the audience from my side. I'm happy to be here and provide a very warm welcome to the researchers' point of view to the discussion. As David has pointed out, my research is about entrepreneurship with a focus on entrepreneurial activity and entrepreneurial ecosystems. 
In this research, I use supervised machine learning techniques to gain data-driven insights about the phenomenon of entrepreneurial activity and entrepreneurial ecosystems. Here in this InConX panel, I try to give a user perspective on the discussion topic with a focus on the support aspects. So basically how libraries, digitalization, machine learning can help researchers to do better and faster systematic literature reviews or in general help with the research process. My insights on this specific topic arise from a meta-analysis. This meta-study is a relatively large one. We searched and screened 100,000 documents, because entrepreneurship has become such a hot topic. We filtered these 100,000 entries with keywords and we ended up with 25,000 documents, which we then had to screen via full text, abstract and so on and so forth. In the end, we had 500 papers, and during this whole screening process, we experienced limits and challenges which might be mitigated through digitalization and machine learning. This whole experience brought me here today and I look forward to discussing how libraries, digitalization and machine learning explicitly can be beneficial for researchers to do better research. In my opinion, machine learning has several advantages and can be helpful, but we should check if other things are more important or relevant to ease the struggle of researchers. So there must be a distinction between must-haves and nice-to-haves to channel our efforts accordingly, and I look forward to discussing this topic with you here. Thank you very much, Martin. Then it's time for our third speaker. I'd like to welcome Osma Suominen. Osma, am I saying your name correctly? Yeah, good enough. You know, I'd like to hear this one time. Okay, my name is Osma Suominen. Aha, Osma Suominen. Osma Suominen. I'm trying my best. Almost, yeah. Okay. I try my best. Yeah, you did your best. Thanks. Osma, welcome. Briefly about you, Osma is working on automated subject indexing tools and processes at the National Library of Finland. He's the main author of the open source Annif toolkit, which is used not only in Finnish libraries through the Finto AI service, but also in other libraries, including the ZBW and the German National Library. Osma, it is a pleasure to give you the digital stage for your opening remarks, please. Thank you, David. Hello, everybody. So yes, my name is Osma and I work as an information system specialist at the National Library of Finland. In my previous life, I was also a researcher. I did my PhD thesis on semantic web technology and semantic web portals. But that was 10 years ago. And now I'm at the National Library, where I started work on the vocabulary service Finto, which is a place where we publish our controlled vocabularies, including our main subject vocabulary, the General Finnish Ontology, YSO, which we use as the main vocabulary for cataloging all kinds of materials. And lately, for the last four or five years, I've been working hands-on on AI technology, because we started creating the Annif tool, which is an open source toolkit for automated subject indexing. It's a modular toolkit where you can plug in all kinds of machine learning algorithms. And then a bit later, we turned that into the Finto AI service, which is an automated subject indexing service that is used by catalogers at the National Library, of course, but also in many other libraries in Finland. And it has been integrated into many digital repositories used by university libraries.
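To make the kind of suggestion service described here a bit more concrete, the following is a minimal sketch of how a repository or cataloging client might request subject suggestions from an Annif-style REST endpoint such as the Finto AI service. The base URL, project identifier and score threshold below are illustrative assumptions, not the actual production configuration.

# Minimal sketch: ask an Annif-style REST endpoint for subject suggestions.
# NOTE: the base URL, project id and threshold are illustrative assumptions.
import requests

ANNIF_API = "https://ai.finto.fi/v1"   # assumed base URL of an Annif instance
PROJECT_ID = "yso-en"                  # assumed project id (YSO vocabulary, English)

def suggest_subjects(text, limit=10, threshold=0.2):
    """Return (label, score) pairs the backend considers the most likely subjects."""
    response = requests.post(
        f"{ANNIF_API}/projects/{PROJECT_ID}/suggest",
        data={"text": text, "limit": limit, "threshold": threshold},
        timeout=30,
    )
    response.raise_for_status()
    return [(r["label"], r["score"]) for r in response.json().get("results", [])]

if __name__ == "__main__":
    abstract = "This thesis studies automated subject indexing of electronic theses."
    for label, score in suggest_subjects(abstract):
        print(f"{score:.2f}  {label}")

In a repository workflow, the returned labels would be shown to the depositor as candidate keywords to accept or reject rather than stored automatically.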
So, and also in museum systems that they use for cataloging museum objects. So it's quite widely used within Finland. And like David mentioned, it's also the same, it's open source tools, so others have started using it as well, including in Germany. And so I have quite of a hands-on perspective. I'm sort of an implementer myself, but I'm also a project leader of the small automated cataloging project at the National Library. And so we provide these tools for automated subject indexing to a community of librarians who use this in their cataloging and indexing processes, especially for digital or online materials, which are of course easier to process with a computer. It's not so easy when you have a printed book, but yeah, there's less and less of those these days. And our users are telling us that this is a useful service for them. It might save time in some cases to get algorithm, give you suggestions about possible subjects, but that's maybe not the main driver. The main driver is more about the quality or the consistency of indexing. So when they have access to these tools for automated indexing, automated classification, it ends up improving the overall quality of metadata that then is stored in these different library systems, which in turn improves the discoverability of library materials. So I can't say anything directly about how this affects researchers. It's quite an indirect relationship really, but when we are helping the librarians to help the researchers basically, but it's like a long chain of processes. And yes, so I would say that this is, there's a lot of potential in building AI systems. And also I think what I like about this job is that it's let us do also the implementation. So we're not relying on, for example, some company providing a service to do AI for us, but we're actually building it ourselves and also letting others use it. So it gives us more power. It's not like buying a black box, but it's actually building it yourself and improving it for the things that matter to us and our users. But I've talked enough now and we can continue the discussion. Thanks. Fantastic. Thank you very much, Osmo. And that brings me to our fourth and final speaker introduction. And it's my pleasure to welcome back to our digital stage, Aaron Wise. Aaron, I believe I'm saying that correctly. Yes. Okay, just wanted to make sure. And please make sure you turn your mute off as well. Real quick, Aaron is the manager of the information management team in Baker Library at the Harvard Business School. She and her team focus on providing metadata and taxonomy services to HBS. And on developing vocabularies and ontologies to integrate data across the organization. She holds a BA from UCLA, an MA from the University of Virginia, and an MLS from Rutgers University. So it is my pleasure to introduce Aaron Wise for her opening statements. Aaron, please. Thank you, David. So the information management team at Harvard Business School Library practices taxonomy and ontology development and metadata management for HBS and a state administration. And metadata management. For the purposes of this discussion, I am thinking about AI in terms of natural language processing and machine learning. That is interpreting language and leveraging data to aid in the performance of certain tasks. Specifically, I'm thinking in terms of using a machine to help with tasks related to text, content and collection analysis, subject analysis, and entity extraction. 
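As one concrete illustration of the entity-extraction task mentioned here, the sketch below uses a general-purpose named-entity-recognition model to pull candidate company names out of free text. The spaCy model name is the standard small English model and the sample sentence is invented; the mixed accuracy described in the next remarks is exactly what such an off-the-shelf model tends to produce.

# Minimal sketch: extract candidate company names (ORG entities) from text.
# Assumes the small English spaCy model is installed:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_company_names(text):
    """Return de-duplicated ORG entity strings found in the text."""
    doc = nlp(text)
    names = {ent.text.strip() for ent in doc.ents if ent.label_ == "ORG"}
    return sorted(names)

sample = ("Baker Library collects case studies on firms such as "
          "General Electric, Siemens AG and a small startup called Acme Robotics.")
print(extract_company_names(sample))
# A general-purpose model will catch some names and miss or mangle others,
# which is why the extracted list still needs human review.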
At Harvard Business School, we have made a few forays into NLP and machine learning methods for entity extraction and subject analysis with mixed results. For example, we have used entity extraction techniques to help us build our company names vocabulary, which resulted in many accurate extractions of company names from text, as well as many inaccurate, incomplete, or altogether missed company names. We also review the results of automated tagging processes for Harvard Business Publishing, where we see similar rates of accuracy and inaccuracy. Our experiences with NLP show that machines can process natural language, yes, but can they understand it? NLP and machine learning methods appear to be particularly well suited to STEM disciplines, where there is use of more formal language and medical reports, for example. In Baker Libraries Domain of Business, we work with more informal language. We often work with abstract concepts such as innovation, and we find that machine may accurately identify the resources about innovation and tag it accordingly, but it fails to make the leap that these same resources would frequently also be about entrepreneurship, which a human tagger would do. Going beyond the specific areas of taxonomy and metadata, which is my area of expertise, libraries and librarians excel at connecting researchers to information and enabling discovery. We have built up many services around those goals, and these services can be applied in the domain of AI, just as AI can enhance these services. For example, in collection development, digitization, and data licensing, I believe libraries have a lot to offer. Librarians can offer up structured data for training datasets. We are often collecting on the edges of domains and identifying new resources, new ideas, new terminology. AI techniques could potentially aid in the identification of those edges and be applied to training datasets to enhance machine learning, for example. We are also useful curators and licensors of datasets, and we are good at digitizing collections, which can be put to use as datasets. Librarians can apply their roles as sophisticated users and interpreters of information to help researchers be sophisticated consumers of algorithms. Librarians can provide feedback, for example, on the limitations of machine learning results, and we can inform the interpretation of those results. Librarians can also pilot ways to be transparent about machine learning. We do not have the profit motive, usually, for the most part, and we value transparency. So in conclusion, I would say that there are plenty of opportunities for us to experiment and apply our skills and our values to the integration of AI methods into the work we do. There is reason for skepticism as well as optimism, and we must think critically about how best to benefit from AI methods within our areas of activity, with the data and the use cases that we have and for the users we serve. Librarians are well positioned to use, interpret, and influence AI processes, and to help researchers understand how they might benefit from or apply AI techniques to their own work. We have a lot to learn, but then again, perhaps we already know more than we think we do. Back to you, David. Super, thank you very much, Erin. Just turning off my clock here. Everyone is good with the times. And now we're about to begin our discussion. So just a few ground rules. 
As I mentioned before, when we're doing this virtually, it's a little bit different than being in a room together, but basically this is our panel discussion. So I have a few questions prepared, and just for all of our speakers, everyone is allowed to answer them, but more importantly, you're allowed to ask with each other. You don't have to ask for the moderator. You can interrupt, you can debate, you can add something to it. Just think of it as we say in German, a Kaffee Klatsch. That means everyone has a cup of coffee and we have a discussion. Also, of course, to our viewers, I will remind you once again, that you'll have a chance to ask questions. You can do that at any time. The two ways, just as a reminder, is the QR code. Maybe if my tech team can put that up, you can go ahead and use the QR code to ask your questions. And in addition to that QR code, you can scroll down and you can also have the interactive tool or there's a QR code, fantastic. You can also use the interactive tool. And you can do that at any time and we'll try to get to all the questions. I might throw in a few questions for the audience in between, but at the end, we also have an official Q&A time. So with that said, remember to our speakers, you can have your microphones on the entire time or mute them as you wish, but just try to remember to turn them back on when you'd like to say something. So the first question I have, and again, it's to anyone, but I'll just start the conversation to Cecile. And the first question I have here is, usually librarians complain that they do not have enough resources to organize and label all publications. So in your opinion, is artificial intelligence truly a game changer? That's a good question. And I think maybe we should start out by talking about the definition of artificial intelligence. I think both Erin and others were talking a little bit about what do we understand by artificial intelligence because I think it's kind of ambitious to call it that, where machine learning and algorithms might be a more precise description at what we are having right now. So I think it will be a game changer, but right now it's not so mature. So we also made a few, we tried some things and tried some different tools that we bought. I like the idea that Osma mentioned about building it yourself. It's very nice because when you buy it, it takes a lot of time to discover that it's not really so relevant as we hoped it to be and we cannot change it. So we haven't seen the break change yet, but I think it will be a game changer in the future. So, and again, anyone can jump in after that? Osma, you were mentioned, maybe if you'd like to reply. Well, yes. Yeah, I think I have to agree that it's maybe not a game changer in the current situation, but it can be helpful in specific areas. For example, one of our early adopters of our automated subject indexing services has been the digital repositories of university libraries. And there, the one typical workflow is that a student completes his or her thesis, like a master's thesis or maybe a higher degree even. And when they have the thesis ready as usually a PDF file, they are asked to upload it into the repository so that the whole world can read it from there. And in this process, they have to not only upload the file, but also to fill in a form with lots of metadata, like the title and abstract and author, and the keywords or the subjects. 
And this is the hard part because most students are doing this for the first time in their life and quite likely the last time as well. And so, but they would need to know the vocabulary and they would need to know how to, you know, how to express the topic of your 100 page thesis in like eight, seven, five keywords. And that's a hard task. And if they just get an empty form, that's like a pretty difficult situation, then they just improvise something and, you know, but it ends up with being pretty low quality overall. And, but what we have been able to provide is this suggestion service that when they upload the file and maybe enter the abstract, then they get immediate suggestions. Okay, these are the 10 or 20 topics that the machine thinks your thesis is about and then they can select, yes, no, yes, no. And that's a lot easier for them and it also ends up improving the overall quality. So it's maybe not a huge win for libraries in general, but it's a specific improvement that we can provide. Yeah, fascinating. I saw Martin smiling, go ahead, Martin, please. I would also like to add that metadata is especially for systematic literature reviews relevant because we use these metadata as moderators or as control variables for our estimations. So if we enrich the raw data or the original data from these papers or thesis, then we can of course improve also the systematic literature reviews and gain maybe better insights about the research topic under study. So we can enhance really the research process through machine learning. But here I would like already add that the indexing or the analysis, the automated analysis of these documents must be accurate to a certain sense and that also the document at the end becomes available. I mean, if I index a thesis correctly, but at the end, for instance, for my systematic literature review and I have no access to the raw document or it's very difficult to find it, for instance, via Google meta analysis, for instance, you have to search on five different platforms to show the editor, you have everything and you covered everything. So that we basically combine different data sets, data basis and so on and so forth to actually create a platform which enhances the research process. And I think there is AI, or machine learning a very good tool. Excellent, thank you. Aaron, did you wanna jump in for a final thought on that? Yes, I would like to comment just on something that Osmo was saying. So speaking to the point of anything that helps at the point of data entry, so anything that helps, any tool that helps a user assign keywords or metadata at the point of data entry is great. So I agree that that's helpful. I would just say as long as it's the start of the process and not the end of the process, so I guess that's the first step. And as long as there's kind of follow up or analysis beyond that as well for anything that maybe the machine didn't identify, that's good. But I agree that anything that helps at the point of data entry, which can be a painful process for some is a good thing. Great, anyone else would like to comment on this before we move on? I'll just wait a sec. I have a comment, one more comment, David. I think there's also that our data situation is changing where we used to have much less data and much more structured data. We are moving towards having so much more data and so much more unstructured data. For instance, we collect the Danish part of the internet four times a year and keeps it for research. 
And here it will be very, very important to have some type of machine learning and algorithms to work with this data afterwards, otherwise it will be useless, I think. So I think we'll move more and more into a situation where this has to be part of our toolbox. These algorithms that you mentioned, Cecile, are they already being worked on or discussed, or is it still right now just in people's thoughts? For my organization, it's in people's thoughts. I don't know if some of you guys have tools that can work with that already, but we don't have those tools yet. I don't have those. Oh, sorry. Okay. Well. Wait, Osma, then Aaron, please go ahead. Okay, thanks. So we also collect the Finnish part of the internet, basically. So I guess we have similar activities. So we have a web archive of Finnish websites. And yeah, like you said, it's very unstructured and also very big. And I'm not saying that we're making full use of it yet, but one thing that we are currently doing is that we're working together with researchers who are building language models. So these are these huge deep learning models that can be used for many language related tasks. But they are sort of general purpose, like GPT-3 and so on. So we're working with the researchers and we are providing them with text extracted from the Finnish web. Because what they need is just huge amounts of natural language text from basically any source. And they just need many, many gigabytes of raw text. And we are providing them with training data from the web archive, but also from other digital collections. So that's at least making some use of that kind of big data. Go ahead, Aaron. Yeah, that raises a question that I have. So it definitely seems that the NLP and machine learning techniques are really useful for big data sets. And so even if there's 80% accuracy on a big data set, that's good. But I've been wondering, we're still working with fairly small data sets. And so I've been wondering, is it equally good? And I don't have an answer. It's a question. So is it equally good and equally valuable, excuse me, for smaller data sets? Is 80% good enough for a small data set, for example? And I think we're not sure about that, but I think it's an interesting question. Excellent question. Would anyone like to take a shot at the answer? Maybe I can talk about the user perspective. In research, we try to find causal relationships. So the test set and an accuracy of 80% must be accompanied by different, more traditional methods to actually find the underlying causal relationship. For instance, with natural language processing on a large amount of text: I know a paper from a colleague of mine, they scraped text from firms and analyzed firm innovativeness based on the text on their websites. And then they applied a different data set to measure the innovativeness of the firm. So they accompanied the big data and machine learning aspect with more traditional techniques to find these underlying causal relationships. And if we start relying more and more on these machine learning aspects, we maybe identify relationships that are not truly cause and effect type relationships. So we should keep that in mind. And I think machine learning has advantages, but this is one of the disadvantages, and a huge disadvantage, that it can find underlying structure and combine structures into input and output. But the true relationship is still hidden and unknown.
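To illustrate the web-archive text extraction described above (providing raw text from archived websites as language-model training material), here is a minimal sketch that walks a WARC file and strips HTML down to plain text. The file name is a placeholder, and a real pipeline would add language detection, deduplication and boilerplate removal on top of this.

# Minimal sketch: extract plain text from HTML pages stored in a WARC file.
# Assumes: pip install warcio beautifulsoup4 ; "example.warc.gz" is a placeholder path.
from warcio.archiveiterator import ArchiveIterator
from bs4 import BeautifulSoup

def iter_plain_text(warc_path):
    """Yield (url, text) pairs for every HTML response record in the archive."""
    with open(warc_path, "rb") as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type != "response":
                continue
            content_type = record.http_headers.get_header("Content-Type") or ""
            if "text/html" not in content_type:
                continue
            html = record.content_stream().read()
            text = BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)
            if text:
                yield record.rec_headers.get_header("WARC-Target-URI"), text

if __name__ == "__main__":
    for url, text in iter_plain_text("example.warc.gz"):
        print(url, text[:80])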
Excellent, thank you very much. Martin, I'd like to switch gears and this question will be directed to you, but of course anyone else can also have their thoughts. So in your opinion, what kind of support do researchers need from libraries? And I'm sure the list is long, but maybe you can come down to two or three main points. The first and maybe main point is access to the documents. In many cases we had, for this meta-analysis, to go on Sci-Hub, for instance, and type in the DOI directly for a full text screen, because the library or the National Library in Germany had no access to the paper. So we had to rely here on illegal practices. This is maybe the first main point, and the second point is the search engine and the indexing of all the available material. We had to rely on many different web searches, because all of them resulted in different results, different entries, and we had to harmonize everything together, bring everything together. We need working papers, published papers, dissertations, theses, because only with the variety of content can we enrich the research process in general. Fascinating. Yeah, and I'll open it up to the rest of our panel. If anyone would like to add to that, the question again: what kind of support do you all, in your experience, feel that researchers need from libraries? Anyone, feel free to chime in. I'm happy to chime in on that. Please. So I agree with Martin that that access is key. I would say in addition that they need access to information and that information needs to be reliable, so curated and sort of reliable information and content. Users need to understand where gaps exist and they need to understand what data is available and understand what tools are at their disposal for their research needs. And I would say they also need coaching and support in terms of understanding what is realistic to obtain from collections and from data and what might not be realistic. And there's also the legal point Martin mentioned. I think we have a lot of problems there. We have so much interesting data, but much of it is not possible to present due to personal information or copyright legislation. So I think this could also be very interesting if we could use machine learning to help either sort these data or somehow anonymize the data or make it available in a way so it can be used by researchers without the restraints that it has already. I think that will be very interesting. Great. Anyone else would like to add something before we go to our next theme? Well, I could say something about, I mean, what Martin mentioned: of course documents are crucial and that's sort of the core function of the library to be able to provide them. Although there are challenges, like Cecile said. And also, I mean, you mentioned the licensing, that not all libraries have access to all the papers published in the world. And that's of course the case. But there's also an ongoing battle being fought between the publishers and the libraries around the licensing and the open science and open access models. And unfortunately for researchers, it means that sometimes you won't get what you would like to get because it's just really hard to negotiate these contracts. And sometimes libraries have to compromise on availability to be able to get a better deal on, for example, open access. And I hope this will be sorted out in the next few years, but right now it's a mess.
But what we have some experience with is also sharing not just papers or publications or documents but also data with researchers. And I already mentioned the raw text that we provide for language model researchers, but we also provide some bibliographic data in the form of MARC records to researchers who are analyzing these large national bibliographies and trying to find patterns in, for example, specific time periods, especially the older literature from previous centuries, and what kind of things you can infer from that, for example which languages were used for publishing books in, let's say, the 18th century when Sweden and Finland were the same country. And what kind of publishers were involved. And you can make all kinds of interesting observations based on bibliographic records. So this is one thing that we also provide. One question to Osma. Have you checked the bibliographic entries in this database? For instance, we found that depending on the paper or the bibliographic entry, the authors were written differently. For instance, we tried mapping the entrepreneurship literature, who the famous authors are, and so on and so forth. And for instance, the authors sometimes have a middle name, sometimes they don't have a middle name, sometimes apostrophes or accents are placed differently. So have you used machine learning to actually harmonize all of these entries before giving them to the researchers? And have you pushed this information, for instance, back to the journals as well, and so on and so forth, so that you actually harmonize all of these data entries? Well, in short, no, but maybe we're talking about slightly different records because what we are mostly dealing with is records about books and other similar kinds of materials, not so much academic papers. I mean, just in practice, they tend to live in different databases and have slightly different metadata schemas, and cataloging practices are different. So unfortunately, there is no single big library of the world that would have a unified set of cataloging practices and so on, like a big unified database of everything ever published. It simply doesn't exist. And that's just it, the world of bibliographic data is fragmented. There are national libraries, there are academic libraries, there are companies like OCLC or the Web of Science and all these, you know, there are big publishers with their own portals, and it's just not harmonized, and I don't think it ever will be. At least it doesn't look that way right now. And it means that when you look at basically the same record from different sources, they are going to be different. And if there is a way to normalize them, for example, with machine learning methods or even just simple heuristics, then yes, that could be useful. But right now there are so many different systems, and some of them are talking with each other, but it's really not harmonized at all. But what we can do is to provide our national bibliography, which also has its own problems. I mean, it has been built over decades or centuries and of course the practices have changed over time. But at least we have some idea of what these layers are and we can communicate that to the researchers. You know that publications used to be cataloged like this up to the 1940s, and then we changed the practice.
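The simple heuristics for normalizing name variants mentioned here can be illustrated with a minimal sketch that uses only the Python standard library. The thresholds and sample names are invented for illustration; real bibliographic deduplication would also lean on identifiers such as ORCID or ISNI where they exist.

# Minimal sketch: group name variants with normalization plus fuzzy string matching.
# Standard library only; threshold and sample names are illustrative.
from difflib import SequenceMatcher
import re
import unicodedata

def normalize(name):
    """Lowercase, strip accents and punctuation, and sort the name tokens."""
    name = unicodedata.normalize("NFKD", name)
    name = "".join(c for c in name if not unicodedata.combining(c))
    name = re.sub(r"[^\w\s]", " ", name.lower())
    return " ".join(sorted(name.split()))

def same_name(a, b, threshold=0.85):
    """Heuristic: treat two strings as the same name above a similarity threshold."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

records = ["Schumpeter, Joseph A.", "Joseph A. Schumpeter", "Schumpeter, Joseph", "Schumacher, Michael"]
canonical = records[0]
for candidate in records[1:]:
    print(f"{candidate!r} matches {canonical!r}: {same_name(canonical, candidate)}")
# The first two candidates match, the last one does not; a heuristic like this
# is only a first pass and still needs review for initials-only or shared surnames.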
But yeah, but so we can tell about our data, but more generally there are so many different bibliographic database and they all have their own, you know, problems and perspectives. So yeah, sorry about that, but that's how the world works. Very enough. Thank you very much, Josme. Anyone else like that? Last thought to this? I would like, I would like to speak to that because Martin Rees, it's an interesting question. So the harmonization of names is the bane of our existence, even just for our own data. And we're at the business school, we're particularly interested in company names and it's really difficult to keep track of them because they change, companies are kind of fluid things, entities and they change. And there's always a desire to be able to track relationships with a single company that the school has, like for example, through alums, through faculty, through MBA students. And it's, we keep trying and we've been, we've bitten off, you know, small, small portions of that problem, but we're still, you know, we're still interested in looking at AI techniques for helping with that particular problem because it's too big. I mean, that is a case where it's just, there's too much data for us to manage manually. Okay, yes. An interesting point, but I think that's a difference from being from a very small country and a very big country because in Denmark, we have a register where all companies are registered in a certain way and it's been standardized for many years. But it reminds me of a discussion I had in my former job where we discussed if we could have a semantic standardization of keywords in public sector so that like names and addresses and companies and stuff like that would have a common standard across all sectors. So we would all use it the same way. And then the discussion came up, is that worth the trouble or could you with AI just do it just as easily? Or is it necessary to have a standard? Because the idea was that it would be good to have more standardization. And I still think that it would make it many things easier if you have at least some basic standardization across your country or EU or whatever. But sometimes it might not be worth the trouble because in the future it can be easily done without our work. So I don't know what it's like for in Finland or other places. I really wish we had a standardized name of a list of companies. That would be fabulous. Excellent. OK, I'm seeing. Good. Let me have a fantastic discussion. Let me continue and also just a quick reminder to our viewers or audience that in a few minutes will certainly open up the lines for your questions. So feel free to go ahead and ask your questions, scroll down to our platform, to the interactive platform, or of course with the QR code, which is up there right now. The next question, I'm going to go straight to Erin. And again, this is open for the entire panel, but we'll start with Erin. That's a tough one here. Erin, how can libraries best support the research process and add value using artificial intelligence, in your opinion? So I think automated tagging is one way. I mean, as Osmo pointed out, it's not a direct way of benefiting the research, but ultimately that's why we tag things so that researchers can find them. So I think that's one particular way. We can provide structured data to support any machine learning processes. So I think using our own data as training data sets would be a good contribution. 
And I think support and services, this is probably a repetition of what I mentioned before, but I really think that support and services could also be extended to cover machine learning topics. So librarians are already good at being critical thinkers and expert users of information and evaluators of information and interpreters of information. And I think the same skill set can be applied in the realm of AI. Thank you very much. Martin, please. I would like to add, I like the idea about tagging and tagging the structural data. Here I would like to ask whether you start tagging these just by the classification, so the classification of research topics. I have you thought about structuring them like these are theoretical concepts, these are quantitative papers, documents, and empirical data. Because for instance, we had these 100,000 documents, and we had to do these screening manually. So we would be very helpful if you had, for instance, tag applied to these information. Is this information empirical, quantitative, or theoretical? That's an interesting question. We haven't thought about that at all. And when we talk about auto tagging, we're talking specifically about applying subject vocabularies. We have all kinds of subject vocabularies that we use in various subject vocabularies. So we're in the business of trying to connect to those various different vocabularies that describe content, and then using a vocabulary and machine learning, training machine to tag using that subject vocabulary based on tagging that's taking place in the past. So we haven't been thinking about tagging data in that respect, but I think that's an interesting question. Yeah, I mean, if I understood you correctly, you're analyzing the text through natural language processing, and then assign these texts. And I would basically just ask whether you could use, for instance, in these natural language processing for the document, like if there are certain key words that you can then tag it for example, just empirical. If in the complete text is the word descriptive statistics or there's a correlation table and so on, then based on the natural language processing, you can filter these documents and tag it automatically. What do you think about that? So maybe I can chime in. So what we have been doing is automated subject indexing, and the focus is on the subject. And of course, that's a little of a slippery concept. But basically, it means that to try to find the central topics of a certain document. And those are usually between five and 12 maybe subjects per document. It depends a little on the data set and the system. But anyway, that's like a small number of key words or concepts that best describe the main content. That's not quite the same as what Martin is asking about, whether it's possible to tag something as empirical or theoretical or so on. But I think it would be maybe possible to do that. But that would require a separate training data set. So you would need to have a certain number of papers manually sorted into these categories. That these are theoretical ones. These are empirical ones and so on. These are quantitative. And then you could train based on that, train a classifier based on that. I'm not sure if it would help to. I mean, you could do it indirectly through the subject tags that you get from a system like ours. But then you would need to somehow map those subjects. And there are quite many of those. You would have to map these to the categories and hope that it works out. 
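The separate document-type classifier that Osma sketches here (and completes just below) could look roughly like the following with scikit-learn. The handful of hand-labelled abstracts is purely a placeholder; in practice one would need at least a few hundred manually sorted papers per category and a proper held-out evaluation.

# Minimal sketch: a separate document-type classifier (empirical vs. theoretical),
# trained on manually labelled abstracts. Requires: pip install scikit-learn.
# The tiny training set below is a placeholder; real use needs far more labels.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "We survey 1,200 firms and estimate regression models of start-up growth.",
    "Descriptive statistics and a correlation table summarise our panel data.",
    "We develop a conceptual framework linking ecosystems to opportunity creation.",
    "This essay reviews competing theories of entrepreneurial ecosystems.",
]
labels = ["empirical", "empirical", "theoretical", "theoretical"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

new_abstract = "We estimate regression models on survey data and report descriptive statistics."
print(model.predict([new_abstract])[0])
# With this toy vocabulary overlap the prediction leans towards 'empirical';
# only a larger labelled sample makes such a classifier trustworthy.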
And I'm not sure if it would. But maybe if you're lucky, you can find a mapping that works. But to me, it would seem easier to just train a separate classifier for this task. Excellent. Anyone for a last comment there? I'm seeing, OK, and if not, then my recommendation. I have one more question. Well, I have a few questions prepared here together with the ZBW team. But I have one more question I'd like to ask. And then we're going to open up the floor for our questions from the audience. And this one, maybe Osma, I'll start with you. And again, this is open for our entire panel. The question is, librarians may put a lot of effort into organizing the knowledge of the world. And researchers may still prefer Google or other services. In your opinion, is that really so? And if so, why? OK. That's a tough one. Those are different perspectives. I guess librarians have this perspective of trying to put things into neat boxes, or at least shelves, or whatever the digital equivalent is nowadays. But anyway, to try to find a structured way of describing the world. And it can be done through a classification, like, well, the Dewey Decimal Classification. It's a famous one. And it has its problems, of course. Maybe not all parts have aged so well. For example, when it comes to religions, or other areas where the discussion has evolved. But anyway, that's one way. A thesaurus is another way. And an ontology is a third way. But anyway, the idea is to build this vocabulary, or taxonomy, or whatever you want to call it, that sort of describes the world. And then you file everything into the right place, or places. In some cases, you can have many categories. And it works for some tasks very well. But then again, Google also works fantastically well for certain tasks. And I think those are complementary. I wouldn't see this as one winning over the other, in a general sense. The librarian approach, obviously, has a kind of a scalability issue. If you think about the early internet, I don't know how many remember the original Yahoo service. Yahoo.com, it was basically a big taxonomy of everything on the web. And well, they managed to maintain it for a few years. But then they gave up, because it just wasn't possible to cram everything into a hierarchical taxonomy. Librarians have the same challenge, of course. It's always an open question whether this approach scales. But still, it's very good, because it guarantees that you can find all of the, or at least most of the material, around the same topic in the same place, whether that's a physical shelf or more likely digital categories in some system. And Google can never guarantee that. I mean, you can always come up with a different variation of the search. And you're still not sure if you found everything that's relevant. Good point. Anyone else would like to add something to that? I totally agree with Osma. I think it's also my point from earlier, with the semantic standardization of data in the public sector. And in Denmark it's very much the same discussion. Like, should we standardize everything, or could we just use search engines instead? But there's also the logic that you can have this authoritative data that somebody curated, that you can believe in, which is nice. But the more data you get, the more you also need the search engines and the machine learning algorithms to help you get through all this data. And I think it's also something we see at the library. Like, we see it at our website.
We're building a new website. What should the logic be there? How much should be in a logic structure? And how much should we just expect that our users will come in through Google and get into the piece of information they are looking for? So yeah, same picture for me. Great to see you. Anyone else on that? Erin, please. The Google question does raise another question for me. So maybe I have more questions than answers on all of these topics. But so when thinking about a potential drawbacks of AI, think about biases, inherent biases. So Google algorithms from what I know, they're presenting results, search results, based on how frequently something is accessed. So in some sense, I do think that we have to be conscious of that and take that into consideration that the more something is accessed, the more it will be surfaced, which maybe that works for a lot of cases. But I just think it's something to be maybe a little bit skeptical about. Yeah, very good point. I totally agree. And I think we also work a lot about the search engine optimization of our articles and how should it be presented. And furthermore, I also think that Google search engine is based on the data it has on you. So what are you interested in? And what would you be presented for instead of some more objective, neutral data? So I think that the bias discussion is very, very interesting and something we should definitely be very much aware of. Excellent. I think with that, first of all, part one of our discussion round, really fantastic, very, very good answers to the questions. And I think it's time now to open it up for our audience questions. And let me begin with the first one. I will open it up generally and anyone is, how do you say, can answer as they wish. There's no order here. So the first question we have here is based on your experience, what is your opinion on the selection of training model data, random or intentional selection that may influence or favor a specific outcome? Ethics in ML slash AI? Question mark? OK, maybe. Let me start with Cecile. I'll start with your hand first. And then, Osma, you can go on after that. Go ahead, please. Thanks. I guess that is also a little bit about the bias discussion. We had in Denmark, we made this algorithm that we have all the old photos and portraits of people. And we have many, many. And then we made this algorithm that could go through the database and then find the picture that resembles you the most. So you could come in and see and then find a historical person that resembles you the most. But it turned out that we mostly have pictures of white people. And it just wasn't good enough that we could only show pictures of white people. So we made the algorithm. But we are not presenting it at the library because the ethics and what we wanted to present of data in this regard. Fascinating. Osma. That's funny because I also built a system like that. I didn't know you did it. But I actually showed it to one of your colleagues. So maybe there's a connection there. But anyway, yeah, we also didn't make it into any kind of public product in the end. And it's disappeared by now. But about the original question about selecting training data. So the choice was between intentional and random in the question. And well, for us, we are training, we're mostly training models for subject indexing, which means that we need training documents that have been already manually indexed. And so we are collecting these from many sources. 
Our main source maybe is the Finna discovery system, because it's a discovery system that contains most of the collections of Finnish libraries, archives, and museums. So it's a huge data set. And that's our main source of training data. But we also have more specific collections, like from the digital repositories and from book publishers and so on. But I would say that we are doing intentional selection because we are basically collecting what we can access. So we use what we have or what we can easily get access to. I don't know what random would mean in this context, because it would mean that we would somehow limit ourselves to using a random subset of whatever is the full set of things you could use. And I don't think that set is very well defined. So to me, from my perspective, the question doesn't make that much sense, because we are using what we can get and we are trying to use as much as we can find. But also we are sort of guided in the way that when we notice, for example: we are providing this service in three languages, Finnish, Swedish, and English, because Finland is a bilingual country, Finnish and Swedish. And obviously there's also a lot of English language material. So we collect the training data in Finnish, in Swedish, and in English. But most of the material is in Finnish. A lot of it is in English. And not that much is in Swedish, because it's a minority language around here. So we identified this as a problem, that we don't have that much training data or evaluation data in Swedish. So then we went looking for new sources. And we found, for example, and just incorporated some materials from Åbo Akademi, which is a Swedish-language university in Finland. So there we could find more. So this is certainly intentional selection. But it's just based on practical observations that our data is biased or lacking in this aspect. So we look for more. Great. Thank you. Let me see if everyone is good here. I would. Pardon? I see Martin's hand. Go ahead, please. Thank you very much. I see it similarly to Osma. You use whatever you have to get the best model you can. Of course, you use cross-validation, leave-one-out or blocks, to optimize the model. But you use what you have. With respect to ethics, I would like to comment that with machine learning you have lots of hyperparameters you can select. For instance, in neural networks based on backpropagation or feed-forward networks, you can basically shape the outcome a little bit. So you can choose model specifics to get an outcome you would prefer in comparison to others. So for true research, and in the interest of transparency and reliability of research, I'm not a huge fan of these models if they are applied alone. So I would, as I have mentioned before, accompany these machine learning techniques with some traditional models, to really gain the insights from the data through machine learning, but at the end the researcher must apply some models which are common in the field to then gain insights. Very good. Thank you, Martin. I'm checking here. I don't see any hands up. OK, great. Thank you very much. I think it's time to go to our next question. It starts with a comment and then the question. So the comment is on the fact that users preferably have an understanding of the tools available, and there's a myriad of tools out there. And the question is, is there any source, publication or website offering an overview or at least a good basis to start from?
And I'll open the floor to anyone who'd like to try to answer that. I guess my answer to that is that not that I know of. That would be lovely. But I don't think that there is such a place that exists. But I think that's where librarians might be helpful in terms of evaluating, understanding the kind of universal tools and curating them and being able to make recommendations about specific tools. But there is just a proliferation of tools, and it's hard to keep track. And I see anyone else? Just looking at our screens. Yes, Asma, please. Yeah, there is an informal group called AI for Lam, which stands for artificial intelligence for libraries, archives, and museums. And I've not been heavily involved in that. But I think they have some kind of registry of tools. But it's a little bit of an open-ended question, because, of course, the web is full of different tools for different purposes. But what's the relevant? Where do you draw the line? What tools are relevant? Because in libraries, you can use all kinds of tools, also tools that are not specifically made for libraries. So it's not an easy thing to sort of delineate what's the set of interesting things. Excellent. Where do you draw the line? It's a very tough question to answer. OK, let me take a look. Anyone else would like to add something to this? I'm going through. And if not, I'll jump on to our next question. Wonderful. Our next question then is, at the moment, do we have enough content to, quote unquote, feed the ML methods, considering the licensing copyright issue and most of the published content? And maybe I'll start with Osman this when we're dealing with licensing, if you want to take a shot at that. Yeah, that's a tough issue. And of course, it depends also what you're trying to do, because I think my use case is quite different from Martin's, for example. And in our case, we have been lucky because we have a long tradition of doing subject indexing with a specific vocabulary since the late 1980s. I mean, the vocabulary has evolved since then, but it's still the same paradigm, in a way, same continuum. So we can make use of even the older bibliographic records because they have been produced in the same way as they are done as the modern ones. So there's been a lot of time to accumulate this potential training data. But the situation is completely different if you start by drafting a new vocabulary. You make a new classification that's relevant for you, and then you want to classify documents using that. But you have exactly zero documents classified unless you start the work, and you have to just find the way to bootstrap the process. And usually, this means that you need to do some manual indexing beforehand, manual labeling. And I don't have, I mean, there are techniques for this. There's things called zero shot or few shot learning and so on. But it's not an easy situation to be in. Good. Thank you very much. I'll take a look. Do anyone else like to add something to that? Sorry. I mean, I missed completely the legal aspect here. But that's also a challenge. And I mean, luckily, as we are a national library, we get access to a lot of digital documents. Nowadays, for example, book publishers, they are required to deposit their ebooks with us. So they hand over their ePubs or PDFs to us. And I mean, it's not like we can do whatever we want with them. Of course, they are protected with copyright, but we can use them for the kind of systems development. 
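The zero-shot and few-shot bootstrapping mentioned in this exchange, classifying documents against a brand-new vocabulary before any labelled training data exists, might look roughly like the sketch below. The model name is a commonly used public natural-language-inference checkpoint and the candidate labels are invented for illustration.

# Minimal sketch: zero-shot classification against a new set of labels,
# i.e. without any manually indexed training documents.
# Requires: pip install transformers torch  (downloads a public NLI checkpoint).
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

abstract = ("We analyse how regional support programmes shape the survival "
            "of newly founded technology firms.")
candidate_labels = ["entrepreneurship", "labour economics", "library science", "astrophysics"]

result = classifier(abstract, candidate_labels, multi_label=True)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{score:.2f}  {label}")
# Scores come from a general NLI model, not from domain training data, so they
# are a starting point for bootstrapping, not a replacement for curated indexing.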
And in practice, it means that this is something that we can fortunately use as our training data. But, I mean, that's unique to us because we are a national library. Nobody else would be allowed to do this unless they have some kind of contract with the publishers who own the copyrights. But this is really difficult. And this is also one of the things that I think is under discussion with the implementation of the new EU copyright directive. How much can you do, or can you do, sort of data mining on copyrighted materials? And it's not very clear. And there are a lot of restrictions on what you're allowed to do. And I think I've heard this message from researchers that they would like to do things that are basically not allowed under the copyright laws and copyright directives. So the laws are perhaps a hindrance to new developments in this area. Thank you very much. Let me take a look, anyone else? And if not, then I'll continue on to our next question. And that is, do you have an idea if and when the publisher market will go for artificial intelligence, or allow libraries to index their content? Question mark. I'll open that to anyone that'd like to give that a try. Erin, please. I'd love to take that one because we actually work closely with publishing here, with Harvard Business Publishing. And we do both. So to the point that Osmo was making, where any kind of machine learning process requires some tagged data to teach it, we've done both. So we kick-started the machine learning process by actually manually tagging using their enterprise taxonomy. And now that tagging is being used in a machine learning process to train, as a training data set, to enable auto tagging going forward. And now we're reviewing the results of that auto tagging. So it's an iterative process. And we've done both. Super. Yeah, I was a bit surprised by the question because, at least to me, it seemed to imply that publishers are not going for AI or that they are not allowing libraries to index their content. And I think both of these things are happening. Maybe not universally, but at least the bigger publishers are; I would be surprised if they weren't using AI in some of their systems. And also, I mean, that's sort of the duty of libraries, to index content published by publishers. And it's not a new thing, but maybe you need a specific kind of indexing, like using AI on full text or something. And that, of course, can be a problem. Excellent. OK. I see it's time now. I'd like to continue to our next question. Also, thank you to our audience. Fantastic questions. We still have time for a few more. So feel free to scroll down and type them in. I see we have three on standby. Our next question is, compared with your AI tools, where is Primo by Ex Libris? Not comparable, or even poor? I'm not familiar with those two, so I'll let anyone here take a shot at it that is familiar with Primo by Ex Libris. Anyone want to take a shot? I'm familiar with Primo, but we have not used it in an AI way. So I cannot answer that question. We are also using Primo from Ex Libris, but not in an AI way. But I think the community around Ex Libris, where you can build your own components in an open source way, sharing them with other customers, maybe gives you some possibilities that we can use more and more. We have some good components, but not so much AI yet. OK, taking a look here. And if that's OK, then I'll continue on to our next question. And I'll go to Cecile directly on this one, maybe.
Is data analytics and data science used in your libraries? And if yes, to what extent? We'll start with Cecile and let's open to anyone. Well, yes, in many ways and in many levels. So we have many projects going on. We have done little solutions testing things. But I think in many ways, we're into the what Martin said earlier about a need to and nice to. And sometimes it becomes more nice to the solutions we make. But definitely, we are working with it. Excellent. Anyone else would like to add something to that? No, OK. And I'll jump on to our next question. It's a little bit longer here. In a recent blog post, Saria Azut writes, in a world of infinite information, it's no longer enough to organize the world's information. It becomes important to organize the world's trustworthy information emphasis on trustworthy. What do you think about this? The question is open to any of you, he says. Or she, I'm not sure who wrote the question then. Stefan, anyone like to start on that? Cecile, give it a shot, please. Yeah, we have the discussion on a more strategic level. If the Danish library should play a more active role in the democratic sense, could we maybe use algorithms to sometimes test fake news or help educating people more about what can algorithms be used for and what can they not be used for and have our students debating more and putting focus on your source and your critical source review and stuff like that. So I think it's a very, very interesting question. But I think it's also something where we are not so far yet. We just have ideas. Yeah. Anyone else like to add to that? Just agree with Cecile that it's about understanding the source of the information. I think that it's always been the case that it's how we evaluate and interpret information. So I would say that it's important to understand the sources. Yeah, great. And I'll just add on a personal note. So I'm a journalist here in Germany. And one of the big concerns in the journalism community are deep fakes. And those are actually videos, not just pictures, that can imitate someone's voice and basically look exactly like that person. And so in a society where people are swiping through the news feeds and someone does an imitation of a President Biden or President Putin in their voice saying, it's time to go to war tomorrow, that's also a big concern. So certainly also in the journalism community, this is also an extremely big issue. OK, I'd like to continue on with our next question. This is open to our group here. Will artificial intelligence replace librarians? And will I replace research work within the process? Great question. I'll open it up to anyone. I'd like to jump in on that one. I think I will start with whether AI will replace the research work. Yeah, I must say I don't think so, since the research in general is more abstract and requires creativity. And as long as artificial intelligence is not able to keep up with creative processes and combining different types of knowledge into a new form of knowledge, researchers are relatively safe from my perspective of librarians. Since I'm not a librarian, I cannot add to that much. But I would rather think it will not replace librarians or libraries in general. It will just transform maybe the way they operate and their business model, so to speak, from a firm perspective. Anyone else? I think it's a great question. I think it's something that's been discussed in many different sectors. And I totally agree with Martin that it will not eliminate librarians. 
It will just change their work and the business model. She is absolutely. Super. Thank you very much. Yeah, looking at the clock. Osmo, I'm sorry. Please go ahead. Yeah, just a quick quick clip that I think that AI is a good assistant, but not as not so good as a master. So you still need the human in the loop in a way to do anything productive or creative. Yeah. So based on that, I'd like to wrap it up. And this is a very spontaneous thing. But I'd like to ask our speakers then just to give a brief closing statement, two or three, four sentences. It doesn't have to be very long about our discussion today. And basically, are you very optimistic about the future that AI will play in the role of research and librarians? So let me just start off. Maybe we'll go with Martin first, please. Just a quick closing statement. Yeah, I will think the research process will definitely benefit from AI because it makes the work easier. And the researcher becomes more productive through AI, for instance, through indexing, subject analyzers, and text, and so on and so forth. Excellent. Thank you. Erin, I'd like to give you a chance now. So this has been a fun discussion. I've learned a lot. I am both skeptical and optimistic about AI. I take it with a grain of salt, I would say. And I think that AI has the potential to enhance what we do and help us with what we do. But I definitely think that humans are in charge and need to remain so skeptical. But I think if we're critical and knowledgeable about what the tools are and what context we're applying them in and what benefits we can get out of them, then I think we'll do well. Great. Thank you very much. Then Cecila, please, your final closing statement. Yes, I think it's also been a great debate and very, very many interesting points. And I'm mostly optimistic. I think AI will give us, it has great potential. And it can give us the possibility of being more productive, more efficient, more looking into much more data. But as long as we are aware of the problems it also causes and where the difficulties is, and that's probably the librarians very important role in the future to be the one who can see through it and can make sure that the data is still correct and objective and great to work with. So yes, that's my final word. Great. Thank you. And last but certainly not least, Asma, you have the final word. Yeah, thank you. Thank you, everyone. It's been a very interesting discussion. I would say I'm skeptical of AI being at this big overhaul of everything. I don't think it's going to change the world of libraries dramatically. But what it might prove, what I'm optimistic about is that it can provide specific solutions to specific problems. So to help keep the machine well oiled and to be able to deal with bigger amounts of data, with unstructured data. So that libraries can better serve their users, researchers, and the people in general. So I'm optimistic that AI will help, but it will not change everything. Yeah. Excellent. And on that closing statement, as is tradition here on our virtual InkinX, I'd like to ask everyone also at home or at offices to give a round of applause. Thank you all very much. I must say for me too, extremely interesting. I've learned also a great deal. Thank you all for participating. And yeah, I just had one or two quick organizational notes before we continue. So you'll have the chance. We'll have a break in a moment. 
And then you'll have the chance, starting at 1900 Central European Time, which is basically in about 35 minutes. So that's 7 o'clock in the evening German time. We'll have another session for Meet the Speakers. And there, feel free to bring a coffee or a wine or some snacks and basically have a chance to discuss with speakers one on one. And here I have a list of the speakers that will be there. We'll have Aaron Wise, Kimberly Ann Boushette, Patricia Condon, Scott Richard St. Louis, Caroline Ball, together with Lucy Barnes, Wuseppe Vitello, Cecil Christensen, Martin Quisner, and Osama Simonin. So they'll be there for the Meet the Speaker. And then also tomorrow morning, we'll also have another Meet the Speaker. That's going to be at 9 o'clock Central European Time. And at 9 o'clock, you'll have the chance, among others, to talk with Lorna Wilgard, Ulrich Krieger, Sabina Rauchman, and Aaron Tay. And then, of course, I look forward to seeing you tomorrow at 11 o'clock Central European Time for day three, the final day of the Inconnex. And yes, I very much look forward to seeing you. We'll also have some closing remarks in the afternoon. And just a reminder, feel free. We've seen some very nice posts on the hashtag Inconnex, especially on Twitter. So feel free to post some things there. And on behalf of the ZBW team, I wish you a lot of fun and good conversations. And you meet the speakers. And I'll be back tomorrow at 11 o'clock. So enjoy your pausa, your break. And for those that want, continue on at 7 o'clock for me to speak here. Goodbye.
„Potential of AI for Libraries: A new level for knowledge organization?“ What kind of support do researchers need from libraries? How might support and services benefit from AI? How can libraries best support the research process and add value using AI? What might be potential drawbacks? How will the work of librarians and researchers change? AI and humans working together: what will AI excel at and what will humans excel at? On the panel we will bring together experts from different backgrounds: Research, AI, Libraries, Thesaurus/ Ontology.
10.5446/58085 (DOI)
We have here Ulrich Krieger. Ulrich, are you there? Yeah, I'm here. Ah, wonderful, Ulrich. Before my introduction, just real quick: where are you physically right now, and if we visit there, what's one thing you recommend we see? I'm here at the beautiful Baroque castle of the university, where the university library in Mannheim is located. I'm at the office, and if you happen to come to Mannheim, you should check out the water tower. I guess the whole ensemble is really nice. Beautiful, the water tower. As I mentioned before, when we spoke, I did visit the water tower by chance a few months ago and had a lovely time there. But we both agreed that when you visit Germany, Mannheim will probably not be on the top 10 list of things to do, but maybe the top 20 or 30. 120, I'd say. Oh boy, coming from someone living in Mannheim, very good. Good, then let's get to business. Just a few words about you. Ulrich Krieger is the coordinator and project manager of the BERD@NFDI consortium based at the University of Mannheim. And just to refresh, BERD stands for Business, Economic and Related Data. He has a background in survey methodology and served as head of operations of the German Internet Panel. His topic today will be BERD@NFDI: structuring unstructured data for business, economic and related research. So it is my pleasure to give the digital stage over to Ulrich Krieger. Please. Well, thank you. Thanks for the kind words and the introduction. And yes, I am happy to tell you about our platform, our enterprise, the BERD@NFDI consortium. NFDI stands for the German National Research Data Infrastructure, which is a rather new funding scheme where the German federal government and the federal Länder joined forces to fund projects that enhance the data and research infrastructure landscape in Germany. Today I want to talk about our consortium, which is BERD, and you already mentioned what the acronym stands for: business, economic and related data. I want to tell you what it's all about. I want to start with the use case for BERD, and with that I'm going to take you back in time, to the beginnings of survey research, of research into human behaviour. Here we have the case of unemployment research. Some of you may remember from school, or from somewhere, the famous Marienthal study. Marienthal is an Austrian town where, as you probably know, the main employer closed down rather suddenly, and many of the inhabitants of Marienthal became unemployed. Researchers from the University of Vienna, I think, jumped on the occasion, swarmed the town and observed everything that was going on, to see what the consequences of unemployment were for the inhabitants of Marienthal. They did observational studies, they interviewed respondents, and they also invaded the privacy of people's homes and checked how their economic standing was by observing what they had in their houses or flats. So this is a very detailed look at one town. It is a small-scale study, a very influential study, but you can't draw inference from it: you can't say that all unemployed people are like that. And it is very much driven by the researchers, so there may be some sort of observer or survey error in there. We moved on from that to large-scale standardized studies.
Like you see here in the picture: computer-assisted personal interviews, where we could take a very in-depth look, with long interviews with respondents about everything that's going on in their lives. That can potentially be generalized to the general public, but these endeavours are expensive, and they are getting more and more expensive. And it's harder to convince people to participate, because it places a high burden on respondents: you have to let somebody into your house, into your home. It also tends to have misreports, since it is data generated by the respondents themselves. So then we move on to new avenues of data research, and I will mention here the IAB-SMART study, a study where they asked respondents to download a smartphone app, which is then installed on their handheld devices. That allows not only for standardized, large-scale data collection, but it is potentially cheaper than sending real interviewers to people's homes, and it places a lower burden, because you can split your questionnaires, or your data collection, into smaller chunks. So you can have a short five- or ten-minute survey on the go somewhere. You can also link this to real-life events: if somebody has contact with the employment agency, you can then, after the fact, ask people about their experiences. And you can use all the sensors that are in the smartphone. Our smartphones are not only telephones; they collect all sorts of information, as you know: your location, your trajectories, your speed, and they have a microphone and a camera built in. So there are a lot of ways we can collect data with a smartphone. Fun fact: the colleagues tried to replicate one of the findings from the original Marienthal study back in the day, where they observed that unemployed residents of Marienthal walked at a slower walking pace than employed residents. This couldn't be replicated in our time; there is no difference in the walking speed collected with the smartphone sensors. That's maybe just a fun story, but it should give you an idea of the potential of the sensor data. But then there is the downside that comes with this data collection: a huge amount of data which needs to be processed for analysis, because information about locations or trajectories is not ready for analysis. So we need tools and infrastructure for post-processing, and these are lacking, which is a problem for the researcher. So we came from what I show here as the analytical paradigm, where we have elaborate studies with a high effort prior to data collection, to set up a survey or an experiment, and then a standard processing which can be done by hand and is of lower effort; it's still a lot of effort, but the effort after data collection is comparatively low. Now with data where we have text, language, images, social media data, or other digital trace data like the data collected on the smartphone, this is turned on its head: the automated recording is comparatively low effort, but the post-processing is complex, so you have a high-effort phase after data collection.
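To make that post-processing point concrete: raw location data has to be turned into an analysable quantity first, for example an average walking speed as in the Marienthal replication mentioned above. The sketch below is my own illustration with an invented (timestamp, latitude, longitude) format and made-up coordinates; it is not IAB-SMART code.

```python
# Hypothetical sketch: average walking speed from raw GPS trajectory points.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two points given in degrees."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def mean_speed(points):
    """points: time-ordered list of (unix_seconds, lat, lon) tuples."""
    dist = sum(haversine_m(a[1], a[2], b[1], b[2]) for a, b in zip(points, points[1:]))
    elapsed = points[-1][0] - points[0][0]
    return dist / elapsed if elapsed > 0 else float("nan")

track = [(0, 48.1370, 11.5750), (60, 48.1375, 11.5757), (120, 48.1381, 11.5764)]
print(round(mean_speed(track), 2), "m/s")   # roughly walking pace for this toy track
```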
So with this unstructured data we enrich the traditional data model, where we only have the structured data collected by experiments, surveys and such, and where we then apply empirical methods for causal analysis and prediction. We now have this other part, where we have unstructured data that needs to be transformed so it can be fed into this traditional model of empirical research. And this transformation cannot be done by hand: we need artificial intelligence and machine learning algorithms to do the data processing, due to the sheer volume of the data and its complexity. So there is huge potential for exciting discoveries, but the methodological costs are higher and a high technical burden comes with it, which then leads to data graveyards, where data are just dumped somewhere and cannot be used or reused, or made transparent and open, as we want in the scientific process these days. Just to give you an example of the need for such an endeavour, I have here some remarks from a study where respondents said what is hindering them in their research. You have statements like: there should be more of a community where we can exchange ideas about the analysis or the algorithms used; or: we need support in publishing data and in accessing data, because this comes with costs or legal problems. And this is where BERD comes in. It is an important piece: a platform that we will create that is open, that links structured and unstructured data, that combines the best practices in machine learning to process these data, that is reproducible and transparent, and that can help with the management of the whole data life cycle. So it's about collecting, processing, analyzing, preserving, giving access, and then making reuse of data possible. How do we do this in BERD? We have different task areas that are lined up with this data life cycle. We have the data sources that are harvested and/or shared and collected by a task area, as we call it, task area two. Then there is a task area in charge of data quality, a hugely important step: data are not all equal, and we need high-quality data for research purposes. We also need to look at anonymization, and digital documents need to be processed. Task area four is then about data analysis and semantic enrichment. And then we have two task areas whose main focus is helping researchers analyze data: there is a training aspect in here, as well as services, and we help with preserving data and making data accessible to the research community. This is a joint endeavour with many colleagues and many institutes, and I share this slide here with everybody that is involved. In the darker green you see research institutes and universities, and in the other, greenish colour there are infrastructure providers, which are very important for providing the service. Together, combining all these institutions, we have the knowledge, the brain power and the manpower to provide this service and put up this platform. What are the first steps? We have started now with training activities; for example, there is a course conceived out of BERD that will be offered through an existing summer school. We're reaching out to the scientific community, like you, and we're working on two services that will go online this year.
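As a very reduced sketch of that transformation step, unstructured text turned into structured features that a conventional empirical model can digest, one might do something like the following. The documents, the outcome variable and the choice of TF-IDF plus a linear model are purely illustrative assumptions, not part of the BERD work programme.

```python
# Illustrative only: machine-generated features from unstructured text feeding a
# traditional empirical model. All data below are invented toy examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LinearRegression

reports = [
    "strong revenue growth and new hires announced",
    "layoffs expected after weak quarterly results",
    "stable outlook, modest investment in research",
]
employment_change = [3.1, -2.4, 0.5]      # hypothetical outcome per firm report

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(reports)     # unstructured text -> structured matrix
model = LinearRegression().fit(X, employment_change)

print(model.predict(vectorizer.transform(["weak results and layoffs announced"])))
```

The two services described next are pieces of the infrastructure that would sit around pipelines like this.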
One is this service that we call Open Big Data where users can upload data, search and download data. So this will where we start off preserving and giving access to data to researchers and the other service is an open, we call data marketplace. This is an exciting endeavor where we partner with data providers such as companies and match them with interested researchers that can apply for or submit a data request and apply for data access and then make use of the data that has been provided by the data providers to benefit both parties and make scientific discovery possible. This is all that I have for you today, so I thank you for your attention and please do check out our website for developments because there's going to be more exciting developments coming up in the future follow us and if you have any questions I'm happy to take them or contact me when you watch the video next morning in New Zealand. Thank you. Thank you very much Ulrich. It has become kind of our tradition here even though we're online watching in offices and homes around the world we'd like to give our presenters a thank you applause for a great talk. Thank you very much. Yeah, already we have a couple of questions coming in. I also saw that we have a comment, actually a very nice comment. Sorry I don't have a name for that. Very nice conference, nice speakers and great organization. Thank you very much for that compliment. I can just pass it along certainly to the ZBW team for all the organization and certainly to our tech team that's keeping everything running. Okay let me begin then. I see here the first question coming up and this is from our audience member and it says are you working with test data or real data from projects and the follow-up question where did you get the data? So there's so getting the data so there's one avenue which is data harvesting that is where we where data is scraped through the web and through APIs and means and legal means possible and the other avenue is getting data that is provided by users. This should be done with real data and not test data that is that's the aim. Surprising community needs. Yeah the next one let me just read that. Hi. Yeah no problem. So were there any surprising community needs is the next question. Go ahead. It's a journalist question. No I it doesn't surprise you that I don't think it didn't surprise me that that much coming from data research. It's always the same problems or that these things need to be easy and accessible because researchers have many other tasks to do than documenting their data. This is hardly surprising but it's very important I think. Okay great thank you. Let's move on then. We're coming on the next person writes thank you Ulrich exclamation point. I'm especially interested in task area six supporting users. Can you elaborate more on this topic please? Okay so there's two things that so there's one one avenue here is classical is training. So we we partnered with with course with researchers that have existing trainings and develop new trainings that should empower researchers to just I mean just to to gain more knowledge on methodology and processes. 
The other I mean that you all know I mean from as I mentioned from summer school the other thing is data data the concept of data stewardship where we will be help will we help provide support for researchers to so hands on we have a dedicated person that helps a project to publish their data and documented data to take them by their hands and get data to be reusable open and fair that is maybe the year that's the other aspect that I would mention here. Okay thank you and the final question this is coming from me the last couple days we've talked a lot about open access of course and sharing data and I'm curious do you see any negative effects to that or is it all really positive? Yeah thanks I I guess it's a good thing if data are not stored on a laptop hard drive that gets then stolen or lost and I think that's possible from my background in survey research we often have this problem of the consent and how general the consent is and providing an anonymity to people that provided us with their data and said that for example if we say to that data is collected for research purposes and then we can't share it with somebody that has that it has a commercial aspect in their project or and sometimes we can't be as open as we want to be but we want to tackle this and make make secure spaces available where this data can be analyzed and then maybe counter that negative point of data sharing. Excellent and that's a good last word with that once again I want to thank Ulrich Krieger thank you Ulrich for your presentation and also taking a few minutes to answer our questions and again once again I am going to give the flowers and I will have the best and enjoy the rest of the Inconnex, take care.
In addition to structured data, which is often collected explicitly for research purposes, unstructured data is increasingly becoming relevant for research in economics and social sciences. These data often come from non-standard sources, such as websites, digital business reports, social media, etc., and they come in diverse formats (audio, video, text, image or multimodal) and large scale. This heterogeneity in data sets and their underlying unstructured formats give rise to new challenges for research data management such as adequate computing and storage resources, deeper knowledge of programming languages and machine learning methods for data collection, selection of appropriate metadata standards for data representation, and pre-processing and analysis. Thus it can be difficult for researchers and infrastructures to manage the complexity of re-usability of unstructured data and algorithms. To this end, the BERD@NFDI consortium will address these challenges as a contribution to the development of the National Research Data Infrastructure. The aim of BERD@NFDI is to build a powerful centralized platform for collecting, (pre-) processing, analyzing and preserving Business, Economic and Related Data. We will facilitate the integrated management of data, algorithms and other related resources and provide support in terms of services along the whole research cycle. We will dedicate special focus to unstructured (big) data, in line with the FAIR principles for research data. The presentation gives an overview of the starting position, the mission as well as the composition of the consortium and shows why the usage of unstructured data requires an extended model of empirical research. After a preliminary analysis of the community needs, we present the work program of the (entire) project.
10.5446/58087 (DOI)
It's my great pleasure to introduce Sabine Rauchmann. Sabine, are you there, and can you hear me okay? Yes, everything works fine. Thank you. Wonderful. And Sabine, before I give a brief introduction, I'd like to ask our speakers: physically, what city are you in right now, and if we ever come to visit that city, what is one thing you would recommend we visit as a tourist attraction? Okay, I'm in Hamburg at the moment, and I would recommend visiting the Plaza at the Elbphilharmonie, because you have a really nice view of the harbour and of the river Elbe. Wonderful. And since I'm also in Hamburg, I can say the Elbphilharmonie just had its five-year anniversary, a very beautiful symphony and concert hall. I can totally agree with that. Okay, then moin moin, and we will continue. Good. So, an introduction: Sabine Rauchmann is a subject librarian at the library for the Faculties of Business, Economics and Social Sciences and of Business Administration at the University of Hamburg, Germany. She teaches information literacy and provides in-depth research consultations to business faculty, undergraduate, master's as well as PhD students. She holds a doctorate in library and information science. Her talk will be about support services for systematic literature reviews in economic and business studies, with the question: how can business libraries cooperate? We have about 15 minutes for the presentation, then afterwards about five minutes for the Q&A, and just a gentle reminder about the QR code. We can put that up there for a moment. And you can also use the interaction tool: if you scroll down, you can put in your questions. And with that, it's my great pleasure to give the digital stage over to Sabine Rauchmann, please. Thank you. Like David said, in the next 15 minutes I will talk about support services for systematic literature reviews in economic and business studies, and it connects seamlessly to the talk by Mrs. Klatt just before. So in the next few minutes, after a very short introduction, I will first talk about best-practice elements from the health sciences that they use in their service models. Afterwards I will look at what business librarians offer as an equivalent and what the status quo of those elements in business libraries is. And at the end, I propose a way to collaborate. Let's start with some background. Mrs. Klatt already talked about what a systematic literature review is, so I will skip this. In my abstract I stated that there is an increased number of consultation requests, and I was asked if there is hard evidence that would support this. I think there are three points, and Mrs. Klatt also pointed the same things out to you already. First, I think there is raised awareness: last year at the Bibliothekartag there were two sessions focused on systematic literature reviews outside of the health sciences. And then we also had the German meeting of business librarians, and there was a lightning talk which garnered quite a lot of interest among the librarians. The second point is that a quite recent study has come out, done by Premji, Jiswander and Yang, that surveyed business librarians. One of the questions was how long they have been involved in supporting systematic reviews, and a lot of them had only come to systematic reviews in the last two years. Compared to that, the health sciences are a little bit further along.
They had a survey which looked at the degree of involvement of librarians, and it showed a higher degree of involvement with regard to co-authorship and being acknowledged by name. And the third point would be that the systematic review as a method is becoming more recognized in the business and management sciences overall. So I think that business libraries have to ask themselves what kind of role they want to play, what scope of service they can offer and what kind of expertise they have. On the first day, I think, we learned that it's not always productive to look at competitors, but I think in this case we can profit from the experiences that health science libraries have collected in the last 40 or 50 years, since systematic reviews started in the 1970s. So health libraries offer a service model with very clear definitions of what the librarian is doing, when the librarian is doing which kind of service, and what is then required from the researchers. There is always a clear distinction between consultant and co-author; or there is the model where they have standard, academic and premium services; or another one would be a tiered approach. But all of these services have the same thing in common: they state very specifically how much time the user can expect the librarian to give them, and how the librarian has to be acknowledged in the paper. In addition to this support of really going through the process with the researcher, there is also educational support like workshops, done either by librarians or by non-librarians. Librarians tend to focus more on the search strategy and the search itself, while non-librarians tend to focus more on the analysis of the data. And there are a lot of online guides, too; those are very diverse and very specific. Apart from the process and the educational services, there is also expertise. The health libraries can base their support on protocols and reporting standards produced by collaborations in the health sciences, like the Cochrane Collaboration or the Campbell Collaboration, and I think everybody who has already done a systematic review knows the PRISMA flow chart. Another area of expertise, where we have a ton of literature in medicine and the health sciences, is the evaluation of databases and of the search syntax for doing searches within those databases, like working out which subject headings are the best ones to use. There is a lot of literature in this area. And in addition, there is even a systematic review competency framework for librarians in the health sciences who work with systematic literature reviews. So all in all, I think there is a very comprehensive service model framework for systematic literature reviews. There are five pieces of expertise at the bottom: the protocols and reporting standards, learning outcomes for researchers, database licenses, the evaluation of databases and search syntax, and the competency framework. And on that base there are the individual services offered by the libraries, which range from the librarian as educator, via the adviser or consultant, to the co-authorship that librarians can take on. So as the next step, I would like to look at these specific elements and how business and economics libraries are doing in this area. Before you can compare areas like the health sciences and economics and business studies, you have to be aware that there are differences between the disciplines.
In economics and business studies, for example, it is stated that studies are often narrative and qualitative, and they are more often multidisciplinary, not just focused on one sub-area. In addition, in business and economics the content of the databases is constantly changing, the vocabulary is not as controlled as in medicine with the MeSH terms, and most business databases require a license to search. There has been more awareness and more support coming from economics and business researchers in the last few years: a few articles have been written about how to write a systematic review in business and economic studies. On the second part, the librarians, there is also some educational support, but there are not as many guides with a specific focus on business and economic studies. Here it would be interesting to see how much they overlap, and also how specific the advice is that they give for searches in these business databases. In addition there are some workshops, but those are very rare, or they are not mentioned on websites; a bit more is coming in with public health and also with doctoral and postgraduate workshops. But what we don't know is how much systematic reviewing is already included in curricula, for instance for evidence-based management. In addition, there are training courses from collaborations like the Campbell Collaboration, but it would also be interesting to see how much business researchers and business students take away from these courses, and whether this is enough for their needs. The third part would be protocols and reporting standards. Some things are subject-specific, like the frameworks for formulating questions, but there are also other things that are multidisciplinary, like the checklists or the PRISMA flow chart. So it would be interesting to see whether there is a need for extensions in the business area, for instance for PRISMA and PRESS; I've seen some extensions, for instance, for ecology or evolutionary biology. And it would also be interesting to see how compliant business researchers have been so far with the reporting standards that are out there. In connection with this, it is also interesting to see what kind of learning outcomes researchers want to take away for doing literature reviews. So it might be interesting to look at the search behaviour of students and researchers in business, economics and management when they do systematic literature reviews, and how they perform in terms of the quality of the search expressions and also the quality of the documentation they write. And then, I think, the most important part would be whether there is any kind of evaluation of databases and search syntax. Compared with the health sciences, I think very, very little has been done here so far. There are some very general articles about Business Source and ABI/INFORM that compare those databases in general, but there is very, very little literature that takes into account the coverage of subject sub-areas in those two databases, or even just the coverage of journals in general. On the first day we already heard about the coverage of trade journals. So it might also be interesting to see what the coverage of journals of a specific subject area is: not only the main journals, but all journals that are relevant for this area.
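A very small first step towards the kind of database evaluation just described could look like the sketch below: comparing exported journal title lists from two databases. The file names and the one-title-per-line export format are assumptions for illustration, not an existing workflow of any library mentioned here.

```python
# Compare journal coverage of two exported title lists (one title per line).
def load_titles(path):
    with open(path, encoding="utf-8") as fh:
        return {line.strip().lower() for line in fh if line.strip()}

abi = load_titles("abi_inform_titles.txt")        # hypothetical export
bsc = load_titles("business_source_titles.txt")   # hypothetical export

print("only in ABI/INFORM:     ", len(abi - bsc))
print("only in Business Source:", len(bsc - abi))
print("covered by both:        ", len(abi & bsc))
union = abi | bsc
print(f"Jaccard overlap: {len(abi & bsc) / len(union):.2%}" if union else "no titles loaded")
```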
Interesting, but also we see because there is no central thivorus for the business area subject, how the subject headings are working in either ABI or from a business source and how they compare. So all in all, this makes up quite a big research agenda. If you want to know more about the databases, if you want to know more about the learning outcomes that researchers expect, and if you want to know more about how protocols and reporting standards are applied on the business and economic studies. So those are the basis in order to offer consultations, workshops, online self-help and online guides, as well as explore the systematic reviews as research methods. And because it's a lot of knowledge, I think it calls for international collaboration. And I would like to propose a framework for the cooperation in the form of the community of practice. Community of practice is a group of people who share common concern, a set of problems or an interest on a topic, and they develop and disseminate best practices, guidelines and strategies, and organize and manage a body of knowledge. So the community of practice, some people think it's just very loose in it, but it can also be very organized. And in order to have it very organized, there are four elements. There's the management, the team, the supporting tools, and the process element. So let's have a look at the management element first. I think it would be interesting to have the community of practice connected with an association. So it's endorsed by an institution. It might not be as in connection with membership, this would have to be discussed, and also the degree of binding. It might also be interesting to see if there are any interest groups with the Cochrane collaboration or the Campbell collaboration or any other collaborations that are interested in working with systematic literature. The second element would be supporting tools and documentation. This would also be part of the community of practice because you need a place where you just put the knowledge, where you can edit documents on a collaboration and where you can just collaborate and either post-meetings and also dates. So there's one called Press Forum. I haven't been actually in this forum, but it looks very interesting. It's a peer review of search strategies via a submission form. That's a very interesting idea. The third one would be the process and training aspect. So last year there was, or earlier before, there was a SWAT analysis done by medicine libraries who looked at what libraries are doing and what they need in order to support their researchers. I think this would also be very interesting to see what kind of strengths and weaknesses libraries have in the economic area and to have more just data to support this. It would also be interesting to have a skill development plan and just to have a common exploration research agenda. There is the Evidence Synthesis Institute which offers free materials for self-learning and I think it would also be interesting to have an online club or just online meetings to go together for these materials so not everybody's doing it on its own. It might also be interesting to have a look at the competency framework and especially with connection to subject specific content. The last and fourth element would be the team and the hands-on approach so that people can actually share information, share experience. 
There is the idea of a search club to peer review search strategies that you can consult colleagues that you can meet to discuss journal articles. There's a Google group called Business Librarians and Systemic Reviews organized by Sarah Premji and I think they are meeting twice a year just to talk about what's coming up, what might be problems, what problems they have encountered during the last few weeks. In order to do this there might be regular meetings, you have to discuss if you have clear objectives. I've also said that it might be interesting to have a list of experts with knowledge profiles and it always you know those groups they depend on the commitment and the motivation of the people taking part in them but I think in this case the support by the management and the leadership is also very important. To conclude I think there's a lot of potential, there's a lot of knowledge already out there and if you find a way to collaborate to share this knowledge more widely and to fill the research gaps you will be able to support our researchers in the best way possible. Thank you. Thank you, thank you very much. Very interesting presentations Sabina. It is our tradition even though we're doing virtually from our offices and homes. We'd like to give you first of all an applause for your presentation. Thank you very much. We also have a few questions that have already come in. I just wanted to point one thing out before we begin with your questions. We had one or two questions from our previous speaker, Francisco Klatt. That was a video that was presented so of course she couldn't answer those questions so if you still would like to ask Francisco Klatt something feel free to contact her per email and if you need the email address, if you don't have that just contact our conference organizers and they'll make sure you get that. So I just wanted to put that out there first of all. Now we're coming to Sabina and after your presentation we already have a couple of questions here. The first question, there was a poster on a chatbot at this conference. Could Q&A on SLR also be a chatbot topic in your opinion? That's an interesting question. I think it can be for the starting stages. So just to make them aware of certain materials that are available to have a first look at the process itself and then to determine what kind of support they need in detail. Excellent. Our next question, do you offer kind of a guideline or summary for researchers which evaluates the databases and the search syntax? How do you keep up to date with the changes in database search engines? Good question. Yeah, I concur. This is actually, I think we do not have those guidelines and we do not have the summaries because the databases are changing constantly and I think this is an area where I would hope the collaboration would work or would come in. Okay, great. Thank you. Our next question here, we've got multiple questions and that is, what could an international cooperation look like? A jointly curated website, for example, a support and mailing list, special responsibilities for certain topics or questions. What do you feel about that? I think it could be all. Could you maybe expand a little bit more on that? Yeah, I think there needs to be a website just to share information and not everybody is courageous to use the support mailing list. So it would be helpful to have a web page to just look through. Maybe the question has already been answered. 
The mailing list might be interesting for questions, for more specific or very in-depth questions to get an idea about what experts are out there and if they could be, there would be help. And with the special responsibilities for certain topics and questions, I think this would be very interesting and the kind of that there is the kind of expert list with the knowledge they have. Excellent. We have one more question that's coming in right now while we're waiting. I just wanted to ask you, the University of Hamburg, in general, the universities, are they very supportive about this initiative or is it kind of climbing uphill? I think it's very new and I think also professors here at our universities, they are starting with the research method systematic literature review. So we are both just trying to come together and try to find out what kind of services they need and we can offer. Very diplomatic. Good. And our final question, what could an international cooperation look like? A jointly curated website, a support mailing list, special responsibility for certain topics or questions. I believe that was a repeat of the question we had before. I apologize about that. Let me see here and I see already that that was a repeat. I think then that was our final question. Okay. Would you like to just say a closing statement or have you covered everything? I'm looking forward, unlike Franziska Ksart already told before, to try to establish network. I'm really interested in doing this. Wonderful. Thank you very much. Oh, Movet. And this is kind of like the breaking news ticker we used to have. We do have one more that just came in hot off the internet press. Do you see more faculty-library and collaboration for more enhanced systematic reviews in these days of emerging tech? This is also a very good question. I think I think it's not dependent on the technology. I think because the method just gets more attention from the faculty. This is just in development. Okay, great. We have a final question here. Could you share your contact details? What's the best way to reach you? People have more questions. Oh, yes. My email address, it's also on the website of the university. Okay. And could you just say it one time? Sabine.rauchmann. University of Hamburg.de Super. Excellent. And it's also on the university homepage. Yes. Yes. Okay, wonderful. Sabine Rauchmann. Again, we wanted to say thank you not only for your great presentation, but for taking the time to answer some of our questions and our answer session. A round of applause for Sabine.
Whereas systematic literature reviews have become a standard in the health sciences, the method has usually been neglected by researchers in economic and business studies in favor of creating and working with primary data. In the last few years, business librarians have received an increasing number of consultation requests for systematic literature reviews. Due to Corona restrictions, the collection of new data in face-to-face or group settings was limited, prompting researchers to focus on aggregating results from previous research. Business librarians by themselves face the challenge of finding very few subject-specific resources or guides on sophisticated search options or the quality of source materials in databases for economic and business literature as well as subject specific reporting standards, frequently falling back on guidelines and best practices from medical libraries. This presentation starts by establishing key service components for supporting students and researchers in conducting systematic literature reviews by looking at best practices from all fields. Then, the presentation identifies knowledge mountains and gaps, i.e. in regard to functionalities and source quality of databases, information seeking behavior, analyzing and managing data as well as reporting guidelines in the economic and business studies. Given the enormous amount of knowledge needed, the presentation thirdly looks at options how business librarians can cooperate on an international level for creating space and infrastructure for sharing findings, insights and materials not only with fellow librarians but also with researchers. The presentation concludes with encouraging business librarians to combine forces for providing better support and teaching services for conducting systematic literature review in economic and business studies.
10.5446/58088 (DOI)
Hello, I'm Francesca Klatt from the Economics and Management Library of the Technische Universität Berlin. I'm going to talk about systematic literature reviews and how service offerings around that topic can enhance the methodology, competencies of young researchers. First of all, I'll give you a short overview on the Economics and Management Library, then about the systematic literature review method, the background and objectives for offering systematic literature review services, the method and approach we used, and about the findings and lessons learned, and the implications that can be derived from that. And questions can be exchanged by email. So, the Economics and Management Library is the special library of the Faculty of Economics and Management of the Technische Universität Berlin since 1968. It's the second largest library of the TU and the main target groups are students, teachers and researchers, and it was the first academic library in Germany awarded as Ausgezeichnete Bibliothek, Distinguished Library, in 2013 for its Quality Management. And we are 11 and a half employees with a couple of library students assistants. Let's talk about the systematic literature review method. It's an independent academic method. It follows a transparent process that is replicable and it has predefined inclusion and exclusion criteria for literature, and the literature selection is carried out by at least two persons. And the objectives of that systematic literature review methods are to identify and evaluate all relevant literature on a topic, and by that, minimizing the virus of literature selection through a formalized approach. The method aims to avoid research redundancies and wants to identify research areas, gaps and methods, and also links between different research areas. And it also supports the evidence-based management. That means that management decisions are based on research findings. And the background for offering SLR services are that we realized that there are methodology competency gaps. For example, regarding the search and economic databases or also managed references, stuff like that. And we also saw that there are increasing requests for SLR consultations, and the need to adapt the method that originated from the health science to economics. So there was no material when we started with the consultations for the economic context. So our objective for offering SLR services was to close this gap because there obviously was some need of the researchers for, or they needed support with that method. So what we did is that we developed a detailed description of the systematic literature review process, adapted to economics, for example, regarding the amount of steps within the process, and also the scheme with which you develop the research questions, stuff like that, so some adaptations were needed. And we developed videos like understanding retrieval bars, and we also shown or discussed articles that applied the SLR method in an economic context along the SLR process, so what was good, what can be improved so that you can learn from that paper. And we offer SLR consultations. This is a very intensive process along the whole, yeah, conducting this SLR. So there are several connotations for each researcher, just necessary or required, and a very really deep dive also into the topics. Yeah, we also act as a reviewer for publications in the context of systematic literature reviews. 
So we see some screenshots of the materials we developed for the website, and we have on the left-hand side the SLR process from defining research questions, selecting literature databases, setting the search terms, merging hits from different databases, or applying the inclusion-exclusion criteria, performing the review, and how to synthesize results. And after each step is a detailed description of what you have to do. And we provide an overview of differences between a systematic literature review and an ordinary literature review, where sometimes it's interesting to see that also Bachelor or students writing a Bachelor thesis should use this method, which is, yeah, actually quite a lot to do, and high effort. So question if the superwiser know the differences, and discuss it with the students. So not only researchers come to us and need our support, but also students. The discussion of SLR examples you can see on the right-hand side, where we just really in detail discuss the way the article applied the SLR method. The findings and lessons learned, you see that the amount of SLR consultations increased. Our website, Go Live, was in 2019, but the request for SLR consultation started already in 2013. And in the middle you see the systematic literature review website impressions, with over 12,000 you see that it is recognized beyond the two Berlin. So yeah, many people use our website to inform themselves about the method. We not only get requests for SLR workshops in economic context, but also from other research areas like education. On the right-hand side you see the number of articles published by researchers of the faculty. We only included articles that use SLR in the title, and unfortunately we have not access to all publication currently because there is a website relaunch of the two Berlin and not all lists of publication are available right now. But it's increasing and from our SLR consultation we know that there are even more articles in the pipeline. But it takes a lot of effort to write an SLR paper and it's similar to gathering primary data. It's a very intensive process. So we have an increase in the amount of consultation, the website impressions, and of course in the article it's published by the researchers, but also the quality of search strategy increases because we are helping with each step and adding our knowledge to the search and the other things the researchers developed and we see that if we wouldn't support them and say, well, here use the search syntax correctly or you should use near-field operators and so on and really check everything and develop it further, and it would be lower quality. So yeah, it's really helping them in many ways. Findings and lesson learned. It's an intensive consultation process along the whole systematic literature process. So several consultation sessions are needed. It's intensive, it's very good to have at least two employees. That work on that topic and for me I have an economic background and also scientific background and research. So that helped me of course to understand the method and how to apply it. And also my colleague Michael Michieloch, he also is working as a researcher. So that was for us, it's very good or it supports us in being able to offer such a service. That's for sure helpful, but not necessarily needed, so it just helps a bit. 
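One of the process steps shown on the website described above, merging hits from different databases, can be sketched in a few lines. This is my own illustration, not the TU Berlin toolkit; the example records and field names are invented.

```python
# Deduplicate merged database exports: match on DOI first, then on a normalised title.
import re

def norm_title(title):
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def merge(records):
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi", "").lower() or norm_title(rec.get("title", ""))
        if key and key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

hits = [
    {"title": "Dynamic Capabilities: A Review", "doi": "10.1000/ABC"},
    {"title": "Dynamic capabilities - a review.", "doi": "10.1000/abc"},  # same DOI
    {"title": "Absorptive Capacity Revisited", "doi": ""},
    {"title": "Absorptive capacity revisited!", "doi": ""},               # same title
]
print(len(merge(hits)), "unique records")   # -> 2 unique records
```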
And what we see is that the researchers, even though they are lacking basic skills like search and databases and economic databases, they kind of are not aware of this lacking, so they wouldn't visit our workshop on searching economic databases for example. But because of the intensive communication and the support and discussion of each step in this journey of a systematic literature review, they learn a lot of things, not only about the method but also basic research skills. So that's very good. The implications are that systematic literature review service offerings are helpful to support the methodology. Competencies of young researchers, especially at the faculty of economics and management, the website is available to everyone, but the next step could be that we create an open educational resource, for example a textbook or something, and if there are people in this room that are interested in writing something like this, just to establish a network with information specialists in the economic fields, but also researchers applying this method, that would be very great. It's complex, it's also developing, there are new things like rapid technology assessments, so how can we apply this method to an economic context? There are many things to do and it would be great if you could jointly work on that topic since it's really useful for researchers. And with this I thank you for your attention. It would be great if you reach out to me via email or LinkedIn and that we can give feedback on the service offering, discuss things or maybe even starting a network or working on a textbook or whatever. So thank you very much and enjoy the rest of the conference. Bye.
Method and approach Systematic literature reviews aim to reduce biases and redundancies in academic research by using a formalized, transparent and replicable process. The SLRM was adapted to the Economics context and information about it has been provided on the library’s website, which has been structured along the SLR process. In addition to a detailed description of each phase, a toolkit has been developed consisting of SLR sources, learning videos (“Understanding retrieval bias” and “Understanding publication bias”), feedback on example SLR articles, as well as individual advice. Findings The amount of articles using the SLRM published by the academic staff of the faculty for Economics and Management has increased since the publication of the library’s SLR website. Researchers especially need support with developing an appropriate research string as well as the conduction of a content analysis. Only a handful of libraries provide information on the SLRM. Implications Information about the SLRM is relevant for young researchers and can improve their methodology competences. Other information institutions can also refer to the website. We are currently working on a SLR online course.
10.5446/57506 (DOI)
I want to speak a little bit about my work in the last two years, about the present stage of my research and what I have done in the last two years. I am now working together with people from Caltech and some people in Germany on problems in numerical weather prediction. My plan two years ago, after my retirement, was to go to the US, but then Covid came and I had to stay at home the whole time. Okay, so today I want to explain a little bit how, at the moment, there is a large impact on mathematical research, and how at different places in the world people are trying to push weather forecasting to new frontiers. New frontiers means increased resolution, and it also means coupling your modelling activities with machine learning and incorporating more data into your simulations. Because I am more a mathematician and don't have that much experience with machine learning, I will speak more about the first one: going to higher resolution. I have written down some of the directions we are now going in. Maybe the first one is that we go from a standard latitude-longitude grid to unstructured grids; that we have already had for maybe the last ten years, and by now in weather prediction we have different codes with different types of grids and different types of numerical approximations. So the whole spectrum of what we have seen here today you will also find in numerical weather prediction. We have people taking simple finite differences on unstructured grids; there are people going from finite differences to low-order finite elements, but with a special structure, which we need so that, as in the earlier talk, we have the right balances between the pressure terms and the other terms in the Navier-Stokes equations. And we also go to higher-order time discretizations and to multirate methods, because in these weather applications we have waves, or scales, of different orders in space and time; some are interesting for us and some are not. And if you look at the projects that I will explain on the next slides, you will see that people are now going, in global weather modelling, so on the whole earth, to something where they want to resolve deep convection, so that something like a thunderstorm is, in some sense, explicitly resolved in your grid cells. That means you have to go to a horizontal resolution below something like two or three kilometres, which is at the moment the scale at which the German Weather Service computes the weather for Germany and maybe part of Europe. So there is now the goal to do that on a global scale; at the moment not so much for weather prediction, more to gain experience, but also, in the next years, to make at least climate simulations work at this higher resolution. And this means that you have to refactor your models, and you have to refactor these models with respect to all the new options you have in terms of computers, GPUs, nodes; all of this you also have to incorporate into your model so that you get the right speed-up, which people now call exascale computing, or maybe we are already beyond that. So it is not only that you buy new computers: you also have to have methods and ideas for how you can squeeze as much as possible out of these computers.
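As a purely illustrative aside (my own toy sketch, not taken from any of the codes mentioned in this talk, and building on my "multirate methods" reading of the audio above): the basic multirate idea is to evaluate slow tendencies once per large time step while sub-stepping the fast ones.

```python
# Toy multirate step: the slow tendency is frozen over the big step, the fast
# tendency (think acoustic or gravity-wave terms) is sub-stepped. Forward Euler
# is used only to keep the sketch short; real codes use better integrators.
import numpy as np

def multirate_step(y, dt, n_sub, f_slow, f_fast):
    """Advance y by dt, sub-stepping the fast tendency n_sub times."""
    slow = f_slow(y)              # evaluated once per big step
    h = dt / n_sub
    for _ in range(n_sub):
        y = y + h * (slow + f_fast(y))
    return y

f_slow = lambda y: -0.1 * y                               # slow relaxation
f_fast = lambda y: np.array([-10.0 * y[1], 10.0 * y[0]])  # fast oscillation

y = np.array([1.0, 0.0])
for _ in range(100):
    y = multirate_step(y, dt=0.01, n_sub=10, f_slow=f_slow, f_fast=f_fast)
print(y)   # state after integrating to t = 1
```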
All of this also means that, at the moment, it is not possible for the people who make the decisions about the methods and the people who then look at how you bring such a model onto a computer to work separately; all these different kinds of people have to work together to make these models viable. Connected with that, you also have a lot of data: how you store data, how you write out data, how you compress data, all of this has to be taken into account so that the whole approach makes sense. So when we speak about these applications, there is something that people call a dynamical core, and what this name means is that you have to solve the compressible Euler equations on a special domain: on a sphere, which already has, in some sense, two different scales in space. You have the surface of the sphere and then what you do in the height, so you have a very thin layer around the earth in which you compute, and that is already one of the main challenges you have to deal with. What you then do is start with the dry compressible Euler equations, and later on, from the physical point of view, you add diffusion, or you add more substances like water vapour, rain, ice and snow, and maybe, with respect to turbulence modelling, you also compute some higher-order moments or something like that, so that you get the right answer. But the first goal is to discretize these equations on the sphere; that is what people call the dynamical core, and around that you then build more and more infrastructure. Now, the grids that are used at the moment: I have here two or three types of grids. On the left side are grids which people call cubed spheres: you take a cube, on each panel you make a regular grid, and then you project it, in different ways, onto the sphere. You can do that so that either the areas or the angles are, in some sense, uniform; there are different options for what you can do. The other one is that you take something like a triangular grid, and on the right-hand side you have something like the dual of that, which usually ends up with hexagons. Okay, and we are not only computing on regular grids any more: most of these new codes have the option to make local refinements, and there are also different options there. On the left is a cubed sphere which is refined in a way that you have no hanging nodes and all cells are quads; in the middle you take a grid which is in some sense a continuous grid, where in some areas you have smaller quads, so you can refine continuously; and on the right side you take the triangular grids and make something like a local refinement, where you do have hanging nodes. Of these three types of grids, the cubed sphere is at the moment in some sense the winner: in most of the new models, these are the grids which are usually taken for the implementation.
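To make the cube-panel construction just described a little more concrete, here is a minimal sketch, my own illustration and not code from any of the models discussed in this talk, that puts an equiangular grid on one panel of the cube and projects it gnomonically (radially) onto the unit sphere.

```python
# Minimal cubed-sphere sketch: equiangular nodes on the cube face x = 1,
# projected radially onto the unit sphere. Panel choice and resolution are arbitrary.
import numpy as np

def cubed_sphere_panel(n):
    """Return an (n+1, n+1, 3) array of node coordinates on the unit sphere."""
    angles = np.linspace(-np.pi / 4, np.pi / 4, n + 1)      # equiangular coordinates
    xi, eta = np.meshgrid(angles, angles, indexing="ij")
    X, Y = np.tan(xi), np.tan(eta)                          # position on the cube face
    norm = np.sqrt(1.0 + X**2 + Y**2)
    return np.stack((1.0 / norm, X / norm, Y / norm), axis=-1)   # gnomonic projection

nodes = cubed_sphere_panel(8)
print(nodes.shape)                                          # (9, 9, 3)
print(np.allclose(np.linalg.norm(nodes, axis=-1), 1.0))     # every node lies on the sphere
```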
Concerning the grids that are used at the moment, there are essentially three types. On the left are grids that people call cubed spheres: you take a cube, make a regular grid on each panel and then project it onto the sphere, and there are different ways of doing this — for instance so that all cell areas are equal, or so that the angles are in some sense well organized. Another option is to take something like a triangular grid, and on the right-hand side you have something like its dual, which usually ends up with hexagonal cells (plus a few others). We are also no longer computing only on regular grids: most of the new codes have the option of local refinement, again with different choices. On the left a cubed sphere is refined in a way that produces no hanging nodes and keeps all cells as quads; in the middle you take a grid that is continuously stretched, so that in some area you simply have smaller quads; and on the right you take a triangular grid and refine it locally, which gives you something like hanging nodes. Of these three types of grid, the cubed sphere is in some sense the winner: in most of the new models this is the grid that is now usually taken for the implementation.

All of this is only in 2D, and then you have to go to 3D. What people usually build are so-called extruded grids: you take your two-dimensional grid and go up in the vertical, say with 100 layers, possibly of different heights. You can do this in two different ways. On the left, the cell areas increase from level to level, and you end up with what people call a deep atmosphere, where you take into account that the curvature also plays a role; the other option is to keep in the vertical essentially the same cell area that you have on the lowest level — that is shown in the middle for the 2D example, and on the right you see how this then looks in an extruded grid. In either case you assume that the grids at the different heights are connected in the right way, so that you can have a flux from one cell to the next, but you get slightly different ways of incorporating all of this into a model.

Here is a list of some of the global dynamical cores that are used worldwide at the moment. I start with ICON, the German global model, which is developed mainly by the German weather service together with the Max Planck Institute in Hamburg; by now a lot of other institutions inside Germany also use that model and have written new subsystems for it, and as I will show later it will be refactored again over the next years to bring it closer to this exascale world. Two other main models come from the European Centre for Medium-Range Weather Forecasts, which runs what is usually called the best global model in the world: the model used operationally at the moment, which applies a global spectral discretization in the horizontal — a special implementation — and a second model in development which works with finite volumes. The UK Met Office is also working on a new model: their current model, called ENDGame, is the only one here that still works on a regular latitude-longitude grid, and the new one is called LFRic, named after Lewis Fry Richardson, who first described how one could compute a weather forecast and who also had the idea of doing it in parallel, with people sitting together in a room passing information to their neighbours; it was following these ideas that the first numerical weather forecasts were made in the US in the 1940s. Then there is MPAS, the Model for Prediction Across Scales; it uses the grid I showed before and is the only one taking these hexagonal cells. There is HOMME, which I am also working with a little; it is at the moment the main atmosphere dynamical core in the US, as part of their Earth system model, and it applies higher-order spectral elements in the horizontal — I come back to that later. Another main model is NICAM, the Japanese model, which maybe ten years ago was the first to go, at least for a few simulated days, to this very high resolution. Personally I am working together with people from Caltech, in what is at the moment called the Climate Modeling Alliance, and there are also other efforts in the US.
These last two are new models, and they are sponsored not by the NSF but by private money — in particular by Schmidt, one of the top people from Google, who gives this money to develop this type of model.

I already mentioned some projects starting this year or next year. One is called NextGEMS, a European project to couple atmosphere and ocean models at this high resolution; one of the models will be ICON and the other will be the IFS model, so they sponsor two models to go to this high resolution, with a lot of money going in. Then there is a project in England, the GungHo project, which started about ten years ago to rebuild their model — leading to this LFRic model I mentioned: they started with a big project involving the mathematical community, and after that it went back to the UK Met Office. In Germany we will have a new project called WarmWorld, which provides German money for something like what NextGEMS is doing, and it is also interesting that the new IFS model will partly be developed in Germany, because the European Centre for Medium-Range Weather Forecasts now has a new department in Bonn — money coming from the EU to this institution now has to be spent inside the EU, and for that we have this new site in Bonn — so you see, much of this work will be done there.

Now I will explain a little what I have done personally — it is not that much. Within this Climate Modeling Alliance they started three years ago to build a completely new model, together with people from MIT and from the Jet Propulsion Laboratory: in some sense an Earth system model coded entirely in Julia. The people at MIT write a new global ocean model, the people at Caltech write the atmospheric model, and the people at JPL write a new land model. They work together, the code is freely accessible — you can look every day at what the people are doing and what their problems are, everything is documented on GitHub, it is fully open. Most of these US codes are open; ICON, for instance, is not open at the moment, but the German weather service now has the idea of making it open source so that everybody can really download it — right now it is open internally but not to the community.

The idea for the atmospheric model was to go to a higher-order DG model: for each element you have spectral elements, polynomials of degree maybe five or six, and the hope is that this type of discretization is also good later on for parallelization and speed-up. What you do in such a spectral element method — as we already heard in the talk before — is patch together the global solution of the PDE from local ansatz functions that already have a large number of degrees of freedom: the function in a cell is not piecewise constant or piecewise linear but a higher-order polynomial, so per element you have on the order of 30 to 200 degrees of freedom. You usually use a special system of functions: monomials, which is the standard choice, sines and cosines, or something like wavelets. Because of these higher-order ansatz functions you have, in some sense, localized heavy computational work, which is good if you want to go to accelerators; but on the other side you need an idea of how to patch all these localized ansatz functions together. Usually you do that by enforcing some type of continuity — either you say the solution should be continuous, and you end up with the continuous Galerkin method, or you use some type of penalization, which also has to do with the physics of the problem: you get Riemann problems at the element interfaces and couple things that way. They started with DG.

What you usually take as ansatz functions are piecewise polynomials, and you can write polynomials in different bases; the standard choice is Lagrange polynomials — I have written them down for piecewise quadratics in one direction — and if you have only quad elements you take tensor products to go to two or even three dimensions. This is very easy for quads; it is more complicated on triangles, where it is not that easy to go to this higher order. So this is how these functions can look, and you then have to patch them together in some sense.
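For reference — this is textbook material rather than anything specific to the codes mentioned here — the Lagrange basis polynomials on nodes $x_0,\dots,x_N$ within one element, and their tensor-product extension to a quadrilateral, are

```latex
\ell_j(x) = \prod_{\substack{i=0\\ i\neq j}}^{N} \frac{x - x_i}{x_j - x_i},
\qquad \ell_j(x_k) = \delta_{jk},
\qquad
u_h(x, y) = \sum_{i=0}^{N}\sum_{j=0}^{N} u_{ij}\,\ell_i(x)\,\ell_j(y).
```

The Kronecker-delta property $\ell_j(x_k)=\delta_{jk}$ is what makes the collocated quadrature and the diagonal mass matrix in the next step possible.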
How this works I want to explain with a very simple example, the linear advection equation. I have some interval, an element, and in this interval I have some nodal points; at these nodal points I define my Lagrange polynomials, and I write my solution as a linear combination of these polynomials — what I write here is not fully mathematically rigorous. You now put this local function into the advection equation and integrate over the element, so you have to compute an integral on the left-hand side and one on the right-hand side, and you compute these integrals by some type of quadrature, because with this projection ansatz the integrand is a product of two polynomials. What people do in these special spectral element methods is take quadrature rules whose nodes are exactly the same nodal points at which you define your ansatz functions. That gives very simple quadrature rules, and in the case of Lagrange polynomials it also gives you, on the left-hand side, a diagonal mass matrix, which you usually do not have. With the Lagrange polynomials you can then define a differentiation matrix, and you can write your discretization locally in a very simple form: you multiply a small matrix — its size is set by the polynomial order, something like five by five or six by six — with your vector of nodal values, multiply by the velocity, and you have your equation; these local pieces are then coupled together. In two dimensions you have to think of multiplying from the left and from the right, but all of this implementation is, in the end, just local sums, and in some sense very easy to program.
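To make these ingredients concrete, here is a minimal, self-contained 1D sketch in Python — my own illustration of the idea, not code taken from HOMME, FLUXO or the Julia implementations mentioned in this talk — of a nodal DG discretization of the advection equation with Legendre-Gauss-Lobatto nodes, a diagonal mass matrix from collocated quadrature, a local differentiation matrix, and an upwind coupling between elements.

```python
import numpy as np

def lgl_nodes_weights(N):
    """Legendre-Gauss-Lobatto nodes and quadrature weights for degree N (N + 1 points)."""
    PN = np.polynomial.legendre.Legendre.basis(N)
    x = np.sort(np.concatenate(([-1.0], PN.deriv().roots().real, [1.0])))
    w = 2.0 / (N * (N + 1) * PN(x) ** 2)        # diagonal (lumped) mass matrix entries
    return x, w

def diff_matrix(x):
    """Lagrange differentiation matrix D[k, j] = l_j'(x_k), via barycentric weights."""
    n = len(x)
    w = np.array([1.0 / np.prod(x[j] - np.delete(x, j)) for j in range(n)])
    D = np.zeros((n, n))
    for k in range(n):
        for j in range(n):
            if j != k:
                D[k, j] = (w[j] / w[k]) / (x[k] - x[j])
        D[k, k] = -D[k].sum()
    return D

# Periodic advection u_t + a u_x = 0 on [0, 1]: K elements, degree-N nodal basis per element.
a, K, N = 1.0, 10, 4
xi, wq = lgl_nodes_weights(N)
D = diff_matrix(xi)
h = 1.0 / K
x = np.array([0.5 * h * (xi + 1.0) + k * h for k in range(K)])   # (K, N+1) node positions
u0 = np.exp(-200.0 * (x - 0.5) ** 2)                             # initial Gaussian pulse
u = u0.copy()

def rhs(u):
    """Strong-form nodal DG right-hand side with an upwind flux (a > 0)."""
    dudt = -a * (2.0 / h) * (u @ D.T)          # volume term: small local matrix-vector products
    jump = u[:, 0] - np.roll(u[:, -1], 1)      # own left value minus neighbour's right value
    dudt[:, 0] -= a * (2.0 / h) / wq[0] * jump # surface coupling, divided by the diagonal mass matrix
    return dudt

dt = 0.2 * h / (a * (N + 1) ** 2)              # rough explicit time-step restriction
for _ in range(int(round(1.0 / dt))):          # integrate to T = 1, i.e. one full revolution
    u1 = u + dt * rhs(u)                       # three-stage SSP Runge-Kutta (Shu-Osher)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    u = u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

print("max error after one revolution:", np.abs(u - u0).max())
```

The point to notice is that the whole update consists of small dense matrix-vector products per element plus one number exchanged per face — exactly the "heavy local work" that maps well onto accelerators.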
If we are not on simple rectangular cells but on general, deformed elements, what we do is map all elements to a reference element, define our ansatz functions on this reference element, and work with simple transformations. What we then need for the implementation is the Jacobian of this mapping, and from it you also compute what people call a contravariant basis — that is what the raised index stands for — and with this you can write down the divergence, the gradient and the curl of a vector or scalar function in these coordinates. What you finally have to implement are all these formulas together with the differentiation matrices, so that you get everything we need; you have to compute all of that, and you see there are all these multiplications, so you have heavy local work. If we then do this on the sphere — assume we have the cubed-sphere grid I showed — the grid points of the extruded cube do not lie exactly on the sphere, so we project each point back onto the sphere, and that gives our transformation; it is a nonlinear transformation, for which you again have to compute the Jacobian and take it into account in all these discretizations, but it is straightforward. Then you take your equations, apply the DG or the CG version — there are some further details I do not want to explain — you implement it, and you can start to simulate something.

That is what we did, or what was done, at Caltech. The problem was the following: when I joined them, they had already implemented a lot of things, but at some point we wanted to simulate examples in 3D directly on the sphere, and we ended up with stability problems, and nobody knew what was happening. There were many ideas — add some limiters, add some type of stabilization — but none of them helped in the end: the code would maybe run a bit longer and then crash. So I wanted to understand this a little, and I looked for codes in Germany that also have DG methods implemented — in Germany you will find codes like deal.II, FLUXO and others, which are developed there and really available. I contacted some people and then started with the code FLUXO, developed by Gassner, Hindenlang and co-workers, and the good thing was that this code already had a cubed sphere. I implemented my weather test examples and saw the same stability problems. And then I came across something I had never seen before for DG: a special way of computing all these integrals which in some sense gives you kinetic energy conservation, and with this choice the methods were stable, and we could run maybe something like 100 days of simulation.
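For readers wondering what this "special computation of the integrals" refers to: as far as it is described here, it is the split-form, kinetic-energy-conserving evaluation of the nonlinear terms. A minimal one-dimensional illustration of the idea — not the exact two-point flux formulation used in FLUXO for the full Euler system — is to discretize a product term not in its plain divergence form but as the average of divergence and advective forms, which mimics the product rule at the discrete level:

```latex
\frac{\partial (uv)}{\partial x}
= \frac{1}{2}\,\frac{\partial (uv)}{\partial x}
+ \frac{1}{2}\left(u\,\frac{\partial v}{\partial x} + v\,\frac{\partial u}{\partial x}\right)
\quad\Longrightarrow\quad
D\,(u\circ v)\;\to\;\frac{1}{2}\Big[\,D\,(u\circ v) + u\circ (D\,v) + v\circ (D\,u)\,\Big],
```

where $D$ is the collocation differentiation matrix and $\circ$ the pointwise product. Combined with compatible two-point fluxes at the element faces, this removes the aliasing errors that otherwise drive the kind of blow-up described above.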
But at Caltech they stopped that implementation — they said we have to do something new — and changed from DG to CG: they wanted to follow HOMME. And again we had stability problems. I implemented this myself in a simple Matlab code, which was working after maybe three days, and because all the people there work in Julia, with the help of some people there we transferred it to Julia; within those three days they also found the big bug in their own code, and then their code worked as well. But with all these exercises I now also have a CG code in Julia that can run on the sphere.

From here I can show some examples of what I have done with this DG code. The first is what people call a density-current, or cold-bubble, test: you see the air in the bubble is cold, it sinks, falls to the ground and then spreads out to the left and to the right. I also have simulations on the sphere now; the output is shown in lat-long coordinates, and you can see a little bit of the cubed sphere, the grid cells. With that we have also simulated something called a baroclinic wave; this is an implementation with the code FLUXO. I have made the same simulation with my own code as well — the resolution is a little lower, and you can already see some grid imprinting there. The code I now have is also freely available; it is part of this ClimateMachine repository, and at the moment I am also working on making the code faster: looking at multi-threading, at the kind of parallelization I have already done for other codes, and learning how to bring it from Julia to GPUs. Thank you.

— Thank you. Any questions? Okay, we are behind schedule, but maybe one question out of curiosity: how long do these simulations take? — For an example like the one I showed, say around a hundred simulated days, it takes maybe 24 hours on my laptop, but only on a single core for the moment; operationally, people now go to 20,000 cores or something like that, and maybe much more. At the moment we do have a first parallel version, but it does not scale well yet, and the same is true for the other code; people are working on it, but I think at the moment there is also not enough manpower for all of this parallel development. And finally, for the main people the dynamical core itself is not the interesting part; the interesting part is that the dynamical core works, so that they can then start with machine learning, with other things, and with the physics. — Okay, thank you, we will have a 30-minute break.
There is ongoing work worldwide to write new dynamical cores for numerical weather prediction. The reasons are simple: refactoring, taking into account new processor architectures, trying new programming environments, and finally using the latest achievements in numerical mathematics. I will summarize current developments and show some examples from my own personal endeavour within the Climate Modeling Alliance (https://clima.caltech.edu/). Here we develop a new numerical core, using the programming language Julia, for a new earth system model which should learn from different data sources. The new dynamical core is based on a cubed sphere grid with high-order continuous or discontinuous ansatz functions. To understand stability issues I have implemented standard test cases in the DG code FLUXO. Here the same stability issues were observed but could be resolved with a so-called split-form kinetic energy conserving formulation. For a second planned formulation with continuous elements I have implemented my algorithm version in Matlab and subsequently in Julia. By means of the Held-Suarez example we will compare implementation details needed to get efficiency in both programming environments and present a new Rosenbrock-W-method where the explicit part has the strong stability preservation (SSP) property.
10.5446/19690 (DOI)
Hi, everyone. I'm Miho Funamori, speaking from Japan. I have chosen a rather provocative title: is inclusiveness in scholarly communication beneficial for scholarship? Ultimately, it is. However, as it is practised now, it isn't. Let me explain by telling you a story. Assume you live in the land of rabbits, and today is the annual conference of the Rabbit Society for Survival. The theme is "Striving for Diversity and Inclusion". Since it is a conference of rabbits, many rabbits are presenting: one presents how to raise and dig up giant carrots, another how to run fast with an Easter egg, another how to dig safe tunnels. Many rabbits are listening, and the presentations get much applause. Other animals are presenting too. An elephant presents how to bathe with a trunk; a hippo presents how to sleep in the water. The rabbits listen very politely; they find the presentations interesting, but not relevant to their lives. A giraffe knows how to contextualize his presentation, so he talks about how to spot yummy leaves on trees and how one could possibly apply this knowledge to carrots. However, the giraffe has not tested his idea with real carrots, so his presentation is not really convincing. A fox is also presenting; however, before he can even start, the rabbits run away. But the fox was a Japanese fox, a messenger of the deity of the rice fields; he wanted to talk about when you can expect good crops, but everyone was running away.

This is how researchers from marginalized countries feel. Researchers like me are pressured by the local government to be international players. The government wants to be ranked high in the world university rankings, wants Nobel prizes, and really wants a voice in the global discourses; but it also knows it cannot be heard without some convincing quantitative indices, like high rankings or Nobel awards. So the government tells the researchers: go and get world-leading research done. The poor researchers go to international conferences and submit articles to international journals to be heard in the global context. But we always feel that we are being judged by Western standards, always having to contextualize our ideas against Western standards to be heard, and hardly able to be the best in the world — as was the case with the giraffe. And not only this: we are always at risk of becoming disconnected from home, even from our home country, because we are presenting to the international audience while neglecting local needs.

I have been working at the University of Tokyo on international issues, and I once asked a history professor how the humanities and social sciences could be more visible in the international sphere. He told me that a genuine historian would scrutinize a historical event in detail. This year Shibusawa Eiichi is very big in Japan, because he is going to be the face of the 10,000-yen bill; there are dramas and books and souvenirs — everything is about Shibusawa Eiichi. But who in the world cares about Shibusawa Eiichi? You probably cannot even remember his name. So the history professor is doing comparative research between the Meiji Restoration and the French Revolution.
The Meiji Restoration happened in 1868 and was a very fundamental regime change in Japan — the first step in making Japan a modernized country — and it was done without shedding blood. So he is comparing these two events. Such comparative research is very interesting and also meaningful, but we also need historians who do research deep into Japanese history itself.

There is also a language issue in international research, even in the STEM fields. You have probably learned about photosynthesis at school; we teach photosynthesis at elementary school in Japan. For photosynthesis we have coined a Japanese term, kōgōsei, written with three Chinese characters. The first character means light, and the second and third together mean to put something together. So every kid understands through these characters that it has something to do with light and with assembling something, and that way the kid can grasp the concept of photosynthesis. If we had to teach the concept with the English or Latin term "photosynthesis", we would never be able to teach it to a seven-year-old. So language is very important for raising the literacy rate in your own country. There is always a debate about whether we should teach in English at the university level, but the University of Tokyo has voted against it, because teaching in Japanese at the university level inevitably forces us to translate professional terms into Japanese, and this trickles down to the elementary-school level and raises the literacy rate. But as you can imagine, this is very labour-intensive, and it also drives us down in the world university rankings and in internationally "excellent" research. Yet research nowadays is mostly publicly funded, and it has to serve local needs first: it should solve local issues in the local language, and the knowledge should be shared among the local people, including schools.

We always talk about diversity, equity and inclusion as the values to be pursued, but I often wonder whether people really want to listen to the voices of marginalized countries — as was the case with the giraffe, the hippo and the elephant in my story. So how should we move forward? Open science is of course a way forward. If we could share knowledge among local people across nations, we could achieve much more, prosper, and make humankind happier. However, local knowledge is embedded in local people, and it is very difficult to make them communicate with each other, just as the rabbits were not so interested in the presentations of the hippo or the elephant. How can we bring these people together? We could do it by having some common agenda or common issue on which they work together as equals. Sample projects could be something like effective agriculture in a monsoon climate, disaster management in earthquake regions, or the other examples you see on the slides. It should be something from which every participating country is suffering and which people have the passion to solve. If we could do that, we would achieve much more, and it would lead to people's happiness and prosperity. Ultimately, knowledge and scholarship are not just about impact factors, world university rankings and citations; they are something that should be pursued for people's happiness.
So let's move forward to make something happen to solve global issues together. Thank you. That would be it.
Talking about diversity and inclusion, we often take for granted that it benefits everyone and that it is a goal to be pursued for the sake of equality and innovation. However, there are cases where inclusion, in fact, can harm local scholarship. For instance, being included in global scholcomm assumes working on research topics that are interesting and relevant to the global audience. This presumption can undermine local scholarship focused on domestic issues such as national history, literature, local economy, legal framework, and other social issues. Since many countries put it on their agenda to compete globally and achieve high world university rankings, their researchers are sometimes forced to change their research topics to be able to publish in global and high-impact journals if they want to sustain their academic career. Thus, it can be said that the pursuit towards inclusive scholcom largely distorts the scholarship landscape, ending up in research detached from local interests. But shouldn’t research also serve local interests, especially if it is publicly funded? This presentation is based on discussions and confrontations that occurred at forming the internationalization of the University of Tokyo.
10.5446/19691 (DOI)
So hello everyone, and thanks for having me today. It is my great pleasure to be here, and since our time is short, let me jump right into the middle of it and open my talk with the following, I think quite powerful, lines borrowed from a colleague: imagine if you stopped being first and foremost a scholar for a little while in order to take a job in which you could do something useful not just for your personal career but for the whole scholarly community. What would be the focus? What would seem most useful to you? Apart from opening the floor to a shared thought experiment, or even to fantasizing, I would say, these lines also very clearly articulate the conflict that is recognized as the real Achilles heel of open science globally, which resides in research evaluation. So we found it truly important to bring in and work on these issues in the context of the social sciences and humanities in the Horizon 2020 project OPERAS-P. In my talk I am going to bring back the humanities perspective, as already indicated by Jimena.

Back to the project: the overall goal of OPERAS-P was to support open scholarly communication in the social sciences and humanities and also to help establish OPERAS, the sister infrastructure of my organization, DARIAH. Within this project we had a specific task force, because we recognized that capturing the realities of peer review is super difficult: these are the kinds of dynamics that define scholarship but remain largely invisible. We wanted to learn how they currently manifest in the realities of social sciences and humanities researchers, and how they deviate from the global version of open science. To this end we collected and analysed 32 interviews with researchers and publishing staff colleagues, and at the end of the project we are trying to turn this rich polyphony into an open access monograph.

A couple of things we wanted to understand better: first of all, how notions of excellence are constructed and negotiated in the social sciences and humanities, who is involved in the processes and who remains outside; what the boundaries of peer review are in terms of inclusiveness with content types; and what the underlying reasons are behind the persistence of certain proxies in the system. Speaking of this last point, one of the reasons why it is really difficult to change anything in peer review — not only in the social sciences and humanities but in other disciplines as well — and what also makes it super difficult to study peer review, is that peer review, as a gatekeeping institution, is very deeply embedded in the broader system of academic power structures commonly referred to as the prestige economy. What you can see here is the rather vicious circle in which research evaluation currently lingers: as long as the evaluation criteria are dominated by bibliometrics and publisher prestige, open research practices — such as blogging, sensitivity to multilingualism, as also recognized by Jimena, or sharing and creating born-digital outputs — will remain strongly counter-incentivized and will not grow sufficiently to replace the current harmful proxies. So what we recognized is that the major challenges around peer review are much more social than technical. The biggest challenges lie along the dimension of the who: who is involved in gatekeeping, and who remains out.
As active practitioners, scholars and scholarly communication experts, you may not be surprised by the fact that the shortage of evaluative capacities turned out to be by far the biggest challenge in operating peer review in our age. In this digital age our printing and dissemination capacities are no longer finite, but human attention very much is. And in contrast to this, administering and gaining recognition for one's reviewing activities still barely exists in reality. Many reviewers said that they gained some sort of symbolic capital from reviewing for prestigious journals and publishers, but the problem with this is that it reinforces existing power structures and makes it much harder for certain kinds of scholars and scholarship to contribute to this game. This situation gives editors a really hard time putting any diversity measures in place, although we found huge biases in terms of multilingualism, gender and geography in the institutions of peer review. In this climate of evaluation labour scarcity, editors — who are the real gatekeepers in SSH scholarly communication — have a hard time finding anybody at all who is competent in the research question of a given paper, let alone implementing such measures. An interesting finding was that this shortage of reviewers opens the floor for the next generation to establish themselves as reviewers; but here again people do not start with equal chances — institutional prestige and networks can be real game changers.

Okay, let us discuss a little the "how" aspects. Working in small disciplinary communities, as social sciences and humanities researchers usually do, also seems to shape their attitudes towards openness. It seems that the priorities of the social sciences and humanities communities differ significantly from how open peer review is framed in the global open science agenda. That said, we faced strong and complex, though at least not univocal, resistance against open peer review, or against openly revealing reviewer identities, because a recurrent argument we heard is that it is hard enough to find competent reviewers who do their work voluntarily — requiring them to sign their reviews and bear all the consequences this entails in terms of academic power relations is pretty much mission impossible. On a positive note, we know that openness takes many shades and flavours in this respect: publishing the peer review texts along with the articles or book chapters is the flavour of openness that enjoyed the most support, or even endorsement, from our respondents.

In addition to the who and the how, we also wanted to learn a little about the "what" challenge: the scope and inclusiveness of peer review in terms of digital content types. If you are working with digital and computational methods, you may be well aware that there is a pressing need to assess the kind of scholarship that cannot be placed on a bookshelf. Interestingly, as a kind of digital extension of the genre of book reviews, which has a long tradition in the social sciences and humanities, we see an emerging culture of post-publication tool, data and code reviews. These are usually the discursive places where discussions, or even debates, over how to accommodate the notion of reproducibility in social sciences and humanities research take place. So it is a super exciting, innovative new direction.
But here again, it is really difficult to find reviewers who are competent in all aspects of these usually quite complex digital scholarly objects. On a positive note again, and especially in the context of these novel content types, it was reassuring to see that the critical discourse around them is much more abundant than what is channelled through the established channels of peer review: it happens on social media, on mailing lists, in discussion groups, in blogs — quite rich discourse spaces. To finish with something positive: in addition to pointing out the current anomalies around peer review in the social sciences and humanities, we also wanted to know what still drives scholars to voluntarily contribute to this complex enterprise of reviewing. Here is a summary of the top incentives. The good news is that none of them are tied to direct monetary or hierarchical rewards; instead they are purely scholarly in nature. It seems that peer review is essentially driven by curiosity, by the intention of continuing a meaningful dialogue and of advancing one's field. The presence of this collective scholarly sovereignty, which seems to be driving peer review forward, should not be underestimated — but nor should it be exploited. If anything, I think scholars well deserve to be recognized for it. Zooming out a little: if you want to learn more about our study, our findings, and this beautiful polyphony that we gained through the interviews, here are a couple of pointers — and stay tuned: hopefully we will also be able to publish the results in the form of an open access book. Thank you for listening to this presentation.
Peer review is a central scholarly practice that has carried fundamental paradoxes from its inception. On the one hand, it is very difficult to open up peer review for the sake of empirical analysis, as it usually happens in the closed black boxes of publishing and other gatekeeping workflows that are embedded in a myriad of disciplinary cultures, each of which comes with very different, and usually competing, notions of excellence. On the other hand, it is a practice that carries an enormous weight in terms of gatekeeping: shaping disciplines, publication patterns and power relations within academia. This central role of peer review alone explains why it is crucial to study and better understand situated evaluation practices, and to continually rethink them to strive for their best, and least imperfect (or reasonably imperfect), instances. How are the notion of excellence and other peer review proxies constructed and (re)negotiated in everyday practices across the SSH disciplines; who are involved in the processes and who remain out; what are the boundaries of peer review in terms of inclusiveness with content types; and how are the processes aligned or misaligned with research realities? What are the underlying reasons behind the persistence of certain proxies in the system, and what are emerging trends and future innovations? To gain an in-depth understanding of these questions, as part of the H2020 project OPERAS-P, our task force collected and analysed 32 in-depth interviews with scholars about their motivations, challenges and experiences with novel practices in scholarly writing and in peer review. The presentation will showcase the results of this study. The focus will be on the conflict between the richness of contemporary scholarship and the prestige economy that defines our current academic evaluation culture. The encoded and pseudonymized interview transcripts that form the basis of our analysis will be shared as open data in a certified data repository, together with rich documentation of the process, so that our interpretations, conclusions and the resulting recommendations are clearly delineable from the rich input we have been working with, and which is thus openly reusable for other purposes.
10.5446/19693 (DOI)
My contribution will be about rethinking diversity, inclusion and collaboration. At the beginning I thought it would be great to talk about this from my digital humanities perspective, but taking into account that we have only 10 minutes, I thought it would be more useful to bring our own disciplinary perspectives to the breakout session and to talk here a little more about two use cases from my region, Latin America. Can we move to the next slide, please?

So here we have an image, a map of Latin America. I am not thinking of just my own perspective — that would be a person who works in Argentina and uses Spanish for scholarly communication — but of a quite diverse region, in which we have different languages: a very big country like Brazil that speaks Portuguese, and many, many different native languages in the different countries of the region. It is a region characterized by being a kind of pioneer in the open access debates: before many of the declarations, groups in Latin America like SciELO or Latindex were already working on open access. Can we move to the next slide, please?

So a little bit about what we talk about in our region when we speak of open access, or of the publishing model in our region. I would like to make a point here and stop a little on this, because I think it is also in the breakout session that we agreed to talk about what we consider open access to be — a definition of open access. In our region, the open access publishing model has mainly been defined as a non-commercial open access model. Why? Because most of the research done in the region has traditionally been funded by governments and by higher education institutions, without commercial players in the model. One interesting thing about this is that the open access model in the region has been, I would say, community-led, in the sense that professors are the editors — editors who sometimes also work with students on their journals — and much of this is volunteer work: many people, in many different universities, devote their time to working as editors or copy editors for journals alongside their university duties. This has built a big community of editors in Latin America, has shaped a collaborative model for the region, and has also established the common goal of understanding knowledge as a common good (un bien común) and as a public good (un bien público). I highlight this because it is part of the examples I am going to show you in this presentation. Can we move to the next slide, please?

It is also important to highlight that this collaborative and, let's say, diverse model — with editors, volunteers, professors and students all collaborating inside the scholarly publishing system — sits within our wider context: we are talking about one of the most unequal regions in the world. So diversity is also related to equity, and to how to achieve equity when we talk about scholarly communication in our region. Let's move to the next slide, please. So here we go, and this is what I wanted to talk about.
Equity, diversity and collaboration can sometimes look a little bit like this: we have different people working to achieve a common goal, but sometimes putting in their best while coming with different perspectives on what innovation can be. I am thinking about this because, when the organizers asked me to present something, they said: make a point about diversity, equity and collaboration, but also about innovation. So how do we see innovation? Well, this is mostly the way we sometimes innovate in scholarly publishing in the region, keeping in mind that we are dealing with this non-commercial open access model: we have this common goal of open access, but sometimes it looks a little bit like this — and it still looks great. Let's go to our first example. Can we move to the next slide?

Okay, so this first example is about language: the languages we use in our scholarly publishing model, and how we have been innovating in this. Let's move to the next slide. In the past years we have seen a growth in the interest in multilingualism. Multilingualism now seems to be part of many debates related to diversity, collaboration and equity in scholarly communication. But how much multilingualism have we actually achieved in our scholarly publishing model? Let's move to the next slide. This is an image from a group here in Latin America that offers different reports comparing how much countries invest in different areas. The image simply shows how much a country like the United States of America invests in research and development and in research and education, together with data for other countries from Latin America — I selected Argentina, Mexico and Brazil — so you can see the difference in investment in research and development. Let's move to the next slide.

To illustrate some cases relating investment, scholarly publishing and open access, I decided to use the data from the Directory of Open Access Journals (DOAJ), mostly because the DOAJ data has been curated by the community and, as we all know, it includes only fully open access journals. So I did some research in the DOAJ to see how many multilingual journals we have in different countries. First I had a look — let's go to the next slide, please — at the journals the DOAJ currently lists as published in the United States of America: how many of them are published only in English, how many, as you can see in this image, are published in different languages like English, Spanish and Portuguese, and how many of them are multilingual. This is the data the DOAJ shows us for 2021, and these are the numbers: 56 multilingual journals in the United States. Can we move to the next slide?

Okay, so this slide shows us the same data for Mexico: the total number of journals, those published only in English, in Spanish, in Portuguese, and how many of them are multilingual. If we compare how many are published in Spanish — 80 — with the multilingual ones, we see a stronger interest in having journals published in different languages. This can also be seen in other countries: the next slide shows what happens with multilingual journals in Argentina, for instance. The numbers go down a little, but the picture is mostly the same.
And finally, let's go to the next slide: the multilingual journals in Brazil, which also outnumber the Portuguese-only journals. Of course, I must say that when I checked the data from the DOAJ, as we might suspect, these journals were mostly in the social sciences and the humanities. When I started looking at this data some time ago, I also started a survey in which I asked the editors why they turned journals that were initially published in just one language into multilingual ones. First, they mentioned that it was of course a way of being part of the global scene; but it was not only about being part of the global scene — it was also about improving and achieving more diversity, not only by having more authors publishing in different languages, but also more readers and more reviewers. So just by making the effort to include more languages, you could not only be part of the global scene but also achieve a more diverse journal for your discipline and your country. Obviously this is a lot of work; I can share the survey data with you during the breakout sessions. One interesting thing we can debate further is how journals from countries that do not have a lot of investment in research and development nevertheless have multilingualism as a goal in scholarly publishing. Can we move to the next slide, please?

Just to finish this use case, I wanted to bring up a project that an Argentinian researcher, the biologist Humberto Debat, started a couple of years ago, called PanLingua. This is an interesting tool that could also help with multilingualism. I am not ignoring the fact that there are many other projects related to improving multilingualism in scholarly communication, like TRIPLE, but this is a project that, with, let's say, not so much funding, has been trying to improve multilingualism — using tools that are not completely open, but with an open way of understanding how a tool like this could improve multilingualism in scholarly publishing: you search in your own language, and it uses, in this case, Google Translate to look for preprints in different languages. I think the PanLingua model — searching in your language and translating on the fly — could be applied to other services or journals. So this is one of the examples I wanted to bring related to innovation and multilingualism, and now I would like to move to a second case. Can we move to the next slide? Good.

So, not so difficult to achieve within our innovation, diversity and inclusion system, but really good to achieve: multilingualism is always a goal we want to pursue. Let's move to the next slide, please. Technology is the second use case I wanted to bring to this talk: how we are also innovating and trying to bring more diversity to our scholarly publishing system through technology. Let's see what we have here; can we move to the next slide? Persistent identifiers have, like multilingualism, been part of a lot of debates in scholarly communication in the last years. We all love persistent identifiers, and it is hard to argue that this is not a technology that can give more visibility to our work in scholarly publishing.
One problem with some persistent identifiers is that they are commercial, and not all journals in the Latin American region can afford to pay for them. Let's move to the next slide. In this sense, I would like to bring up a project carried out in Argentina that has been working on the possibility of giving persistent identifiers to journals that cannot afford to buy, for instance, DOIs. This is a project carried out at the Centro Argentino de Información Científica y Tecnológica (CAICYT), which has been developing its own ARKs — identifiers for small journals, journals that do not have a commercial model and cannot afford to buy these kinds of persistent identifiers. What they have been doing is working in a collaborative system with other universities to give small journals this opportunity. Can we move to the next slide? This is an example of how this looks in one Argentinian journal that did not have a persistent identifier and has now been given one through CAICYT. Let's move to the next slide, sorry. And in the last year, with the help of Peruvian researchers, a plugin for adding this persistent identifier in Open Journal Systems has also been developed. So let's move to the next slide. This has been a little harder to develop, not only because it implied research related to the technology and people working together, but also because some indexers here in the region still do not accept these kinds of identifiers. So it is a little more difficult, but it is also interesting to see how we have been innovating in this sense, bringing a more diverse and equitable way of providing this kind of technology to small journals.

Up to here — I would close my presentation, and I beg your pardon for the technical problems: it was really unexpected that I could not use my slides, and it was difficult for me to follow the presentation while remembering all the slides I had prepared for you. I am really sorry about these technological issues, but I would like to say thank you, and that I would like to continue talking about how we can achieve a more diverse, equitable, collaborative and innovative system in scholarly publishing during the breakout sessions. Thank you very much.
The Digital Humanities propose innovative digital and computational methodologies and practices, together with new approaches to research, publication and evaluation. Although debates about the values of the Digital Humanities have a long history in Northern academies, Latin America has been more interested in rethinking the Digital Humanities from the perspective of the open access and open science movements. Values such as diversity, inclusion and collaboration have thus benefited from new theoretical approaches and from implementation through different scholarly communication resources and digital tools.
10.5446/57496 (DOI)
to give this talk. It has been a long way to get here, both in time and in space — I had to change three times at the railway station, you know, building works, because I always got it wrong, but I managed in the end. So it is a pleasure to be here and to give this talk about the first black-hole image and the work of my group on it.

Okay, here is what I will talk to you about. First of all, I will tell you how you actually do the observation: how do you take a black hole and take a picture of it — something you may think is a logical contradiction. Then I will tell you, and this is more relevant for what we will be discussing here, how you do a theoretical interpretation of a supermassive black hole; that is the modelling part, the one I have been most closely exposed to. Then, how you make an informed comparison between theory and observation. And because I am basically a theorist, I want to take you across one more question: do we really believe that it is Einstein's theory that is right, or are there other alternatives?

Let us start by setting up the stage, or why this is a difficult problem. First of all, black holes are the most compact objects we know of in the universe: you take a certain amount of energy, you compress it into a very small volume, and you produce a black hole. We do not have black holes on Earth; we have black holes in the heavens, so they are all astronomical, and that means they are also very distant. If you have something which is intrinsically small and intrinsically far, you can imagine that the projected size of this object on the sky will be very small. And of course, if you want to take a picture, you had better have something you can actually see. If you want to solve this problem, there is essentially just one way — (sorry, the slides are not advancing; we thought the Zoom problems were over for live presentations, but apparently not, and of course there is some delay) — you have to take black holes which are as big as possible. We call them supermassive black holes, because the bigger and more massive they are, the larger they appear on the sky; and they should also be sufficiently close. You can play with these two degrees of freedom, size and distance, and, given the resolution you can physically reach, you go through all the black holes you know about, and there are just two that fit: M87*, the black hole at the centre of the galaxy M87, and the black hole in our own galaxy. The black hole in our galaxy is smaller than the one in M87, but it is closer, so both conspire to have the right angular size.

This is M87, a supergiant galaxy. At its centre there is a dark mass, whose nature we do not know a priori, of the order of, say, six billion solar masses. This is the way it looks in the optical, and you can already see in the optical this little filament coming out; we know that it is a jet.
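To see why only these two candidates survive, here is a back-of-the-envelope estimate for M87* (the numbers are my own rough inputs, close to the commonly quoted values of about 6.5 billion solar masses at a distance of about 16.8 Mpc; 2√27 GM/c² is the shadow diameter of a non-rotating black hole):

```python
import math

G, c, M_sun, pc = 6.674e-11, 2.998e8, 1.989e30, 3.086e16   # SI constants
M, D = 6.5e9 * M_sun, 16.8e6 * pc                           # mass and distance of M87* (approx.)

r_g = G * M / c**2                        # gravitational radius GM/c^2 in metres
shadow = 2.0 * math.sqrt(27.0) * r_g      # shadow diameter for a Schwarzschild black hole
theta_rad = shadow / D                    # small-angle approximation
theta_muas = theta_rad * (180.0 / math.pi) * 3600.0e6       # radians -> micro-arcseconds
print(f"shadow ~ {theta_muas:.0f} micro-arcseconds on the sky")
```

This gives roughly 40 micro-arcseconds, the same order as the ring in the published images; the black hole at the centre of our galaxy, being roughly 1,500 times less massive but also roughly 2,000 times closer, comes out at a comparable angular size.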
Now, the beauty of astronomy is that you can see the same object at many different wavelengths. If you look at this in the radio, you see that it is a much bigger object: there is a very large cloud of ionized plasma emitting in the radio. And you can use a technique called interferometry which allows you essentially to zoom into this image; depending on the wavelength at which you observe, you can zoom in, for instance, on what happens at the very base of the jet, where the jet is produced, and then zoom in further into the very inner part of the jet, and you find something like this, or you can zoom in even further. Radio astronomers have by now studied this in very great detail — they can name all of these little blobs and local maxima. But of course, what would be nice is to have an image of where this jet is actually coming from, and this is what has been done with the Event Horizon Telescope. To give an idea of how much more advanced the images taken by the Event Horizon Telescope are, this is just a comparison of the resolution the Event Horizon Telescope has reached with the best resolution that was available before.

The technique used is called VLBI, very long baseline interferometry, and, like many other techniques in astronomy, it relies on a very simple equation: if you want a given angular resolution, you have to consider the wavelength at which you are making your observation and the size of the telescope with which you are collecting the information. The telescope size plays a role in determining how well resolved your image is; that is why we tend to build large telescopes — because we want high-resolution images. Now, what does this mean in terms of a black hole? Light is produced near the black hole, but not all of it reaches us, because much of it is trapped between us and the black hole. You want the light that is produced as close as possible to the black hole and still reaches us, and you find that this light is radio — in particular, radio waves of the order of 1.3 millimetres in wavelength. That sets the numerator of the expression. The number you want on the left-hand side is of the order of tens of micro-arcseconds: that is the resolution you need in order to see the little region where the jet starts. And to get that resolution you need intercontinental distances: you would have to build a telescope as big as the whole planet. You may think that this is impossible; physically it is, but virtually you can do it, through this technique called VLBI. The idea is as follows: you take moderate-size telescopes — 20, 30, 50 metres — across the planet, in France, in Spain, at the South Pole, in Chile, in Hawaii, and you connect them so that they are observing exactly the same wavefront at exactly the same time. When you do this, you can do interferometry: if you connect a telescope in Hawaii with one in Arizona, you have virtually a telescope whose size is 2,500 kilometres, the separation between the two. Now, if you think about this a little, it sounds funny, right? How is this possible? Why don't we build all our telescopes like this?
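Before coming to the answer, it is worth putting numbers into the relation quoted above (a rough estimate, ignoring order-one factors such as the 1.22 of the classical diffraction limit): with λ = 1.3 mm and an intercontinental baseline of roughly 10,000 km,

```latex
\theta \;\simeq\; \frac{\lambda}{D}
\;=\; \frac{1.3\times10^{-3}\ \mathrm{m}}{10^{7}\ \mathrm{m}}
\;\approx\; 1.3\times10^{-10}\ \mathrm{rad}
\;\approx\; 27\ \mu\mathrm{as},
```

which is exactly the regime of tens of micro-arcseconds needed to resolve the region where the jet starts.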
The reason why this is possible is because you need to make sure that you are really recording the same wave front, the same electromagnetic wave front. So together with the recording of the electric field, this is our radio telescope, you're recording electric fields, you have to make sure you record the time of arrival, exactly, and as precisely as possible. So that is why you need atomic clocks at each of these telescopes. Once you have these two pieces of information, then you can combine them together and obtain an image. And of course, because we have telescopes across the whole planet, this is useful because not only we have, we have then telescopes of different sizes, and so virtually by this very simple expression, we have different resolution of which we can see the image. We can see the same image, a different resolution. And in addition, because the Earth is rotating, we always have a few telescopes that are observing, that are able to see the source. Because of course, when at one point, the French telescope will stop seeing the source because it will be on the other side. And, but then there's going to be other telescopes in Chile, for instance, that will be able to see it. And, okay, so this is basically the technique. It's called, as I said, DLBI. And you can think that there are baselines between different telescopes. And mathematically, what we say is that out of these information, we can build a Fourier transform. This is a two-dimensional Fourier transform in space that provides you quantities, which are real and imaginary. These are called the visibilities. And essentially, you know, that these tracks that you can see here in this diagram, these are the representation of the Fourier transform of the intensity on the sky, i of x, y. So just think about what you know of the Fourier transform when going between time and frequency. Now you are going between visibilities and the aperture on the sky in terms of the intensity of the source. So once you have a certain length in the tracks, you can build a certain image. And the longer the time, the longer the tracks, the more these space of Fourier spaces fill, the better is your image. Okay? So that's a basic principle. And now I really show you with a concrete example, as the different telescopes are building these tracks, we are able to see more and more higher definition details of this image. And of course, you can see that, you know, at one point there's going to be no telescopes that are able to see the source. And also you will see that this visibility space, the Fourier space, is not going to be perfectly filled. In principle, you would like all of this space to be perfectly filled, fully filled with information. And we don't, okay? So there will be some piece of information that needs modeling. So these are the four images that we obtained and published in 2019. These refer to essentially four days of observations. Normally the way it works is that, you know, you ask time on all of these telescopes, this is a competitive process, not we don't want to own the telescopes. We have to ask for the use and then we get the use and then during those days we make the observations, no matter how the weather is. If it is bad weather, we have bad luck. If we have good weather, we have good luck. And that's why not all of the days that were available, there were some days where the, either the weather was not sufficient in all of the telescopes or there were too few telescopes that provided images. 
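In formula form, the relation behind those tracks is the standard interferometric one: each pair of telescopes at a given instant samples one Fourier component of the sky brightness I(x, y), at a spatial frequency (u, v) set by the projected baseline measured in units of the observing wavelength,

$$ V(u,v) \;=\; \iint I(x,y)\, e^{-2\pi i\,(ux+vy)}\, dx\, dy . $$

As the Earth rotates, the projected baselines change and each telescope pair traces out one of the tracks in the diagram; the imaging problem is then to recover I(x, y) from this incomplete sampling of V(u, v).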
But four days were enough to produce these images. And as you can see, they are all consistent with each other on the whole, though slightly different. And that's because we expect a certain variability from day to day. So the image was published on the 10th of April, 2019, and this is the 11th of April. So the day after, essentially, the image has gone on all of the front pages of newspapers across the world. It's been calculated that in less than 24 hours, 4.5 billion people have seen the image. That's a good portion of the human population. And I think the reason is that, you know, it was a fantastic source of inspiration for the social media, which used it for, you know, explaining to us what it is exactly we were seeing, and of course, you know, that's why an image is so much more powerful than anything else. If I were to show this in terms of a Fourier transform, no one would appreciate it, but an image everyone understands. So now the question is, and this is where we enter into this game, what are these rings and what do they have to do with black holes? Okay, so to answer this question, we need to go through three steps. The first one is we have to perform GRMHD simulations. GRMHD stands for general relativistic (because we are in a curved spacetime) magnetohydrodynamic simulations. And we want to do them in black hole spacetimes in Einstein's theory, but also in other theories. That will tell us how it is that plasma moves near a black hole. The second point is: how does the plasma that is moving in this spacetime produce light, and how does this light reach us? So we have to do radiative transfer, and ray-traced radiative transfer. Radiative transfer tells you how light is absorbed and emitted, ray tracing tells you along which path. And then the last step is to compare observations and theory. We have four images which we have observed, and we built 60,000 synthetic images, so 60,000 mathematically consistent, physically consistent images, and we had to compare. We were very lucky here in Europe that we had received a Synergy Grant, which is called BlackHoleCam, and essentially in Frankfurt we built an infrastructure, a computational infrastructure, that does exactly these three steps. So BHAC does the GRMHD, BHOSS does the ray tracing and GENA does the comparison with the images. And the real, you know, heroes of the story are these guys here, who were the members of my group. None of them is with me now; they have all moved to faculty positions in Europe and elsewhere. So because this is mathematical modelling, and I guess you will not be scared of equations, this is the kind of equations we need to solve. First of all, we have an energy-momentum tensor. This is something that tells me about the properties of my plasma. And I have a covariant derivative, so these are conservation equations on a generally curved spacetime. And then I have the conservation of rest mass. If you're familiar with hydrodynamics, you can think of these as hydrodynamic equations in a generic, more complicated spacetime. If you're familiar with plasma dynamics, this is again plasma dynamics, but on a spacetime where curvature is not necessarily zero. And of course you need an equation of state, and this energy-momentum tensor, well, this is, you know, the combination of a number of elements, you know, to do with the actual plasma, plus the electromagnetic fields and so on. And so you need to carry along also the Maxwell equations.
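Written out schematically, this is the generic textbook form of the system being described (not the exact discretization used in the codes mentioned above): the general-relativistic ideal-MHD conservation laws,

$$ \nabla_\mu T^{\mu\nu}=0, \qquad \nabla_\mu\left(\rho u^\mu\right)=0, \qquad \nabla_\mu {}^{*}F^{\mu\nu}=0, $$

closed by an equation of state, and, for the imaging step, the radiative-transfer equation for the specific intensity integrated along the photon geodesics of the curved spacetime,

$$ \frac{dI_\nu}{ds}=j_\nu-\alpha_\nu I_\nu , \qquad \frac{d^2x^\mu}{d\lambda^2}+\Gamma^\mu_{\alpha\beta}\,\frac{dx^\alpha}{d\lambda}\frac{dx^\beta}{d\lambda}=0 . $$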
So again, you can think of these as the standard Maxwell equations, the induction equations, in particular in GRMHD. And you have to solve this in full generality in three plus one dimensions. In addition, as I said, you need to study how light is emitted and absorbed. Essentially this is the Boltzmann equation, which you can convert into an evolution equation for the intensity of the radiation along a given path, and this is the path followed by photons in this spacetime. So once you have those equations, and you've spent 10 years building a code to solve those equations, you obtain something like so. So this is a typical simulation: we start with something that is a ring of matter in equilibrium around the black hole. And I'm showing in red and yellow the rest-mass density of the plasma, and with white and blue the magnetization. Yeah, the movie is a bit choppy, but you can see that this is a geometrically thick object. And essentially along the polar direction there is very little matter and a lot of magnetic field, so there the magnetization is very high. You can think that the magnetic fields are very strong; that's what produces the jet. As you can see, accretion is not a steady process. It's a bit like water falling from a waterfall: there are moments where there are larger amounts of plasma being accreted. This is the inclination at which we think we see the jet in M87. And now you have to imagine going from the GRMHD to looking at the emissivity. So this is the actual radio map you would see at that inclination if you had eyes which were sensitive to the radio. It looks pretty much like a ring, okay? But it's not exactly a ring. It looks like a ring because we have this very special inclination. If we were to go around, you would see it's a far more complicated radiation field, with holes, which I'll try to explain. And this is if you now take the very same image and allow for the way it would be observed by a radio telescope. You can see there is an image which is flickering, and the flickering timescale is of the order of days. And that's what we have observed, okay? So what we have observed is very close to what we would expect to see from a plasma accreting onto a black hole. But I will explain in a bit more detail why we are so confident. So the second step, if you remember, is understanding what happens to light, to photons. So I have a pointer; I cannot use it. But if I press the pointer, you all know there is a laser beam that is shot from here and then reaches the wall, and from the wall reaches your eyes. The reason why we know how to use this is because light propagation on Earth is trivial: it just goes on a straight line, okay? But in a curved spacetime, that's not the case. Light can go all over the place. And so you may get light from regions that are not directed towards you. Let's make an example. Imagine you have a black hole and you have a thin disc which is emitting light, and you want to take an image at a given angle. Then of course you're going to see all of the photons that come straight to you, from the direct image. In principle, photons that go in this direction, you would not receive them; they would go straight if the spacetime was flat. But in a spacetime which is curved, you can actually also see what's behind the black hole. So this is the part of the sheet which is behind the black hole.
And to make things even more interesting, you can even, sorry, you can see even the lower sheet of the disc, okay? So this is essentially the way you would see a geometrically thin sheet of light. And if you are interested in science fiction, this is Interstellar, the image of Interstellar, which is a very accurate, although not realistic image. And you now understand why Interstellar looks like so. So there is the part of the disc which is in front of the spaceship. This is the part of the disc which is behind the black hole and this is a part of the disc below the disc. And this tells you that if you have to go and hide, don't do it behind the black hole, it will not help. Okay, now this was accurate but not quite right. As I explained, the plasma that is accreting is actually very hot. And this means that it's geometrically thick. It is optically thin but geometrically thick. So you have to think that in general, we look like so. Now I'm just rotating my camera so you can see. And now you can see there are two holes. There is a black region and another black hole. There is another black region here. If I'm looking at it phase on, it's perfectly symmetric. But I lose this symmetry as soon as I move away from that very specific inclination. Another thing you can always, also appreciate, it is always a bright side. And that's because there is always a part of the disc which is moving towards you, just like there is a part of the disc which is moving away from you. And so this is called a Doppler effect, a Doppler boosting. And so this explains pretty much why what we see is a donut. It's a half a donut because we are the inclination such that only the lower portion of the donut is coming towards us and so is amplified. Now, let me give you some, the ABC of black hole imaging. So this one is, as I said, the upper sheet behind the black hole. This is the lower sheet. This is the part which is boosted and so amplified. And this is what is called a light ring. I will explain what is the light ring in a moment, but it's a very important, maybe the most important part of a black hole image. Okay, so, yeah, I should say that this part here is the shadow, okay? This part here and this part here is essentially the shadow. So what is the shadow? People often misunderstand the shadow with the event horizon. The event horizon is this surface, mathematical surface which absorbs photons. If you want the surface which cannot emit photons. The shadow is the projection of something which is related to the event horizon, but it's not the event horizon. So to explain this, I created this movie. So imagine that you have a black hole and you have a source of light, okay? So you can imagine that this source is just producing photons, light rays, and there's going to be light rays which are going to be immediately absorbed by the black hole. They would be just hitting the event horizon. And so if you are an observer here, you simply will not see those photons. But there are also photons that do not enter the event horizon directly. They get very close to it. They are below what is called the unstable photon orbit. And these photons, they will eventually go on to the, and be absorbed. So at a large distance, an observer will see a region which is here, which is essentially devoid of light or very suppressed, with a larger suppression of light, simply because all of the light that should have arrived here has been absorbed by the black hole. 
And to give you the sizes, in the case of a non-rotating black hole, this is twice the mass. The photon circular orbit is three times the mass, and the projected size at infinity is the square root of 27, so 5.2 roughly, okay? So what we see is the actual shadow. And you can also appreciate that the shadow should not be perfectly dark. It's not the event horizon. Because if you take a photon here, where my pointer is, and you emit a photon, this photon wouldn't have any problem reaching an observer, okay? That is why the shadow is not necessarily dark. And this region over here, sorry, this region over here, this... I'm trying... Okay, never mind. The edge of the shadow is a very important surface. That's because it is a perfect sphere, and so a circle in projection, if it is a non-spinning black hole, but it's not a perfect circle if the black hole is rotating. That's because there are relativistic effects which bring in photons from the side closer to the rotation axis. So if we were in principle able to measure exactly the shape of the shadow, we could tell what the spin of the black hole is. Okay, Mr Chairman, you keep an eye on how I'm doing. Okay, so... Now, of course, we don't know much about what is happening near M87*, okay? So in principle, we have to scan all the possibilities in terms of the physical parameters. So we need to be able to change the black hole mass and spin. We need to consider black holes in other theories of gravity. We need to consider alternatives to black holes, something that looks like a black hole: a compact object without a horizon, which may or may not have a surface. We don't know exactly; no one has gone to the center of M87 to tell. We also don't know much about the plasma properties. And I haven't explained this in detail, actually I haven't explained it at all, but there isn't just a single way in which plasma can accrete onto a black hole. Depending on the initial conditions, you may have two fundamental classes of accretion. One is called SANE, the other one is called MAD. MAD stands for magnetically arrested, and SANE stands for standard and normal evolution. Basically, they differ in the amount of magnetic field that is accreted onto the black hole. And MAD tends to have a lot more magnetic field being accreted, so much so that sometimes the magnetic pressure can be so large that even the accretion is prevented. And whether one is preferable over the other, it's hard to say. It's a matter of boundary conditions. And so unless you know exactly what the conditions near the horizon are, you can't tell. It's the observation that reveals whether nature prefers one or the other. And then, of course, there is the light, its dynamics and its properties. We can study in great detail how matter evolves, but how light is emitted, that's part of our modeling. And that's because our simulations in magnetohydrodynamics model the inertial part of the plasma, the heavy part of the plasma, the ions, the protons, and not the light part, the electrons. And the two are, in principle, related, but not one to one. So there is a lot of freedom in determining the emissivity properties of the source. What we know for sure is that if you get radiation at 1.3 millimetres, it has to come from synchrotron radiation. Synchrotron radiation is the radiation produced by relativistic electrons going rapidly around magnetic field lines. As I was saying, we evolve ions, so we need to have an energy distribution of the electrons. But this is undetermined.
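(A brief aside on the sizes quoted at the start of this passage: plugging in numbers gives the expected angular scale on the sky. The mass and distance below are assumed round values for M87*, chosen only for illustration since the talk says just "several billion solar masses"; they are not the official EHT measurements.)

```julia
# Characteristic sizes of a non-rotating black hole and its shadow on the sky.
G, c     = 6.674e-11, 2.998e8          # SI constants
Msun, pc = 1.989e30, 3.086e16
M = 6.5e9 * Msun                       # assumed mass of M87*
D = 16.8e6 * pc                        # assumed distance (~16.8 Mpc)
rg = G * M / c^2                       # gravitational radius GM/c^2
r_horizon = 2 * rg                     # event horizon radius
r_photon  = 3 * rg                     # unstable photon circular orbit
b_shadow  = sqrt(27) * rg              # shadow (capture) impact parameter ≈ 5.2 GM/c^2
θ_μas = 2 * b_shadow / D * (180 / π) * 3600 * 1e6   # apparent shadow diameter
println("shadow diameter ≈ ", round(θ_μas, digits = 1), " μas")   # ≈ 40 μas
```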
And the simplest thing you can do is create some relation between the temperature of the electrons and the temperature of the ions. You can say, and again this is the simplest hypothesis that you can make, that the energy distribution of the electrons is a thermal one, a Maxwell–Jüttner distribution. But you still have to get the temperature. And for the temperature, you can say that the temperature of the ions, Ti, is related to the temperature of the electrons in some shape, in some analytical prescription. And one which we have used in the simulations I will show you now is very simple and goes like so. Essentially, we have a single parameter which relates the temperatures between the two species. And you have another parameter, which is what is called the plasma beta; essentially, it's the ratio between the gas and magnetic pressure, and it allows you, essentially, to put more light in the disk or in the jet. And these are free parameters, and we just vary them over all their possible values. So this one can go from 1 to 160, and that essentially allows you to recover the two extremes: one where you have most of the emission coming from the jet, and the other one coming from the disk. And this is a very crude, handmade recipe, but you can do much better, much more sophisticated, involving turbulence and reconnection. And you find out, after all, that this very simple recipe works pretty well and can reproduce much more complicated energy distributions, even non-thermal energy distributions. So it looks crude, but it works. OK, so we have then run a number of simulations; in particular in Frankfurt, we have done about 50% of the simulations in the whole EHT. These are high-resolution, three-dimensional simulations. Then for each simulation, we changed the emissivity profile, that is, we painted the electrons differently. And in this way, we got 400 scenarios. And this is just a movie which should allow you, in principle, to see a small fraction of the library of scenarios we have considered. So there are situations where you have the black hole going counterclockwise, you have shadows which are very small, or very large, your emission comes mostly from the disk or mostly from the jet, and so on and so forth. And out of these scenarios, you can produce images, because each of these frames corresponds to a given time interval, which is the one associated with the observations. Now, I would like you to think a little bit about how involved and degenerate this problem is. So these are four images that have been produced. The first two come from a MAD, with R-high at the two extremes, 160 and 1 to 10, so they are essentially the extremes of the MAD regime. And these are the SANE ones, again at the extremes. And you can ask yourself, okay, well, of these images we know everything, because we know all the properties of the plasma, we know where photons are emitted, we can trace them; and if we ask ourselves, on the basis of these images only, whether we know where the light is emitted, the answer is no. Once you have just an image, it's very, very hard to tell where the light is actually produced. So to convince you of this, let's imagine we decompose these images in three parts. As I've shown you, in general, you have a torus, a disk, we call it torus, a disk, and then you have two jets, one which is coming towards you and one which is receding. And the one receding can actually also be the dominant source of light, because, as I explained, light can be bent.
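(Before moving on: the electron-temperature recipe mentioned a moment ago sounds like the widely used "R-high" parametrization; the sketch below assumes that this is indeed the formula meant, so treat it as illustrative rather than as the exact prescription from the slides.)

```julia
# Ion-to-electron temperature ratio as a function of plasma β,
# written in the commonly used R-high form (an assumption about the exact recipe).
function temperature_ratio(β; Rhigh = 160.0, Rlow = 1.0)
    b2 = β^2
    return Rlow / (1 + b2) + Rhigh * b2 / (1 + b2)
end

temperature_ratio(0.1)    # β ≪ 1 (magnetized jet/funnel): ≈ 2.6, i.e. close to Rlow, hot electrons
temperature_ratio(10.0)   # β ≫ 1 (disk midplane):        ≈ 158, i.e. close to Rhigh, cool electrons
```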
So you see it here that depending on the, now this is a matrix representation of the different parts, so this is the mid-plane, this is the near side, or if you want the approaching jet, this is the receding jet. And then you can see, essentially, you can fill any of these cells in this matrix, you can have, that in this case, most of the light comes from the mid-plane, but in this case, most of the light comes actually from the jet which is moving away from you. And this is just because we're considering this specific model with this specific high. So don't ask me where the light comes from in the image, because we simply cannot tell, it's highly degenerate result. And then, as I said, we built 60,000 images and we had to find the best match. And luckily, this is not such a, you know, this is the easiest of all of the problems, because there are algorithms which are very accurate and fast and allow you to make, you know, this comparison rather easily. To give you an idea, again, this is an example, imagine that your data is the blue, okay, this is in the visibility, and those tracks you can decompose into closure phases and visibility amplitudes. And the blue lines are the data. And for each image, this is an image of a simulation which is then deconvolved to consider that your telescopes have a certain, you know, limited resolution. And what you can do is, you know, you can run this, all of your simulations, every time you run a simulation, you can have a fit of the data, you can calculate that chi-squared, and then out of the chi-squared, you can have a distribution of images that matches the observations. When I try to explain this, you know, half of my audience understands, and the other half doesn't understand what I'm talking about. So here is another way of thinking about what I'm doing. Imagine you are at a stadium and you have an image of a person, this one, which is very blurred. You don't know what this person is at the stadium, and you don't know actually who this person is. But the stadium has CCTV cameras that take pictures of all the people that are going to the stadium. So what you can do is, you can scan across all of the images in the stadium, and the software will return a distribution of images. These are all the images that match very well the image that you have produced, because the principal components are the same. And of course, you cannot tell whether the person is in the stadium or not, because you have at least 10 top-class matches. You can have many more if you decrease the tolerance. But that already gives you a lot of information. First of all, you know that this person, most likely, is a woman. And because all of the top matches are women, and the second thing that you know is that this is a woman with long hair. Again, because all of the top-best matches are women and with long hair. So although you don't have a perfect match, you can extract already a lot of information. And that's what we've done essentially with the chi-squared and the distribution in parameters. This is an example. So this is the observations, and this is the theoretical model. So out of these 60,000 images, there is one, this one over here, that matches to this level of precision. And because the one on the right is a theoretical model, you would think, OK, I know exactly what's on the left, because the one on the right is exactly the same as the one on the left. 
This is a flawed logic, because I have lots of images on the right that are looking exactly the same as on the left, with the same level of bounty. And that is why there are certain aspects of this problem we cannot model. So to give you an example, these are three real simulation images. These are the images before the convolution. And they give you the same match with the observations. But they correspond to three completely different objects. This one is a black hole, which is counter rotating. This number is the spin, so it's negative, it's counter rotating. This is a black hole, which is maximally rotating, but it is in the other direction, spin 0.94. And this is actually a black hole, which is not spinning at all. And yet they give you the same quality. So how do you take this? Well, from one point of view, this is good. That means that your model is so robust that it will provide you with the right answer no matter what. On the other hand, it is bad, because essentially you are not able to distinguish the parameter that distinguishes this black hole from this black hole, so the spin. And that is why we have not published a spin of M87 star. This is a property we cannot measure yet. OK, I may need maybe 10 more minutes if I can. So all I've shown you is there is a very good match between general relativity, Einstein's theory of relativity, and what we observe. So there is a consistency which is what you would expect. But you may ask, because this is an observational science, and there are degeneracies. It's impossible to avoid them, because there are many different theories that can explain the same observations. As long as the observations are very few and not very precise, there are many possibilities. And so what you want to do is, can you really use this image to test gravity? Can you tell A, this is a black hole, and B is Einstein's black hole, or is it a black hole in a different theory? For those of you who are not experts, you should be aware that there is a lot of work nowadays in trying to find alternatives to Einstein's theory of relativity. So there are lots of people who are trying to use the observations to say, oh, Einstein is not right. This other theory is better. So we have tried to follow these possibilities and see whether or not it's Einstein's theory or any other, or even maybe it's not a black hole at all. Maybe it's an object which is sufficiently compact as the ability of producing a shadow, but it's not even a black hole. It doesn't have an event horizon. And there are objects of this type that are possible to build, even within general relativity. So in order to address this problem, what you can do is you can have an other, an agnostic, that is, you simply don't know and you parameterize your ignorance, or agnostic approach where you say, OK, I test my observations against a specific alternative and I check whether or not this alternative is good or bad. So it's either positive or negative. While the first one just says, OK, given the observations, these observations provide me certain parameters. OK, so because I spent quite a lot of time in this, I'll give you a flavor of what this amounts to, this agnostic approach. So essentially, what you have to do is study particle plasma motion and photon motion in a curved spacetime. So you need a metric tensor for those of you who have taken a course in general relativity. And this metric tensor is a tensor which depends on coordinates. 
And we know very well the form of this tensor, the Schwarzschild solution, the Kerr solution, so there is no problem about these. But what you would like to have is a spacetime which is more generic than that, that has additional parameters, a's and b's, such that if all of these parameters are zero, you end up with Schwarzschild or Kerr. And if they are not, then they measure the deviation away from general relativity. So together with other collaborators, we have derived these metrics; these are called the RZ or the KRZ metrics, for Rezzolla–Zhidenko and Konoplya–Rezzolla–Zhidenko. And I will not go into great detail on this. But this is a very powerful way in which you can build any metric, black hole or compact object metric, and then determine what the values of the coefficients are. And this series converges very rapidly, so you really need only the very first few coefficients to reach percent precision. The other approach I was mentioning is, okay, a kind of binary decision: can we distinguish a Kerr black hole from something else? And what we've done is we consider either a dilaton black hole, which is a black hole in an alternative theory of gravity, or a boson star, which I will explain in a minute, or black holes within general relativity or other theories which have certain charges. Normally we tend to think that black holes have just mass, spin and electric charge, and even the electric charge is essentially zero. But in other theories, there are possibilities of having black holes which have other charges. These are not electromagnetic charges; these are properties, if you want. And then you can check whether your measurement sets constraints on the size of these alternative charges. So let me give you a flavor of what we're doing. So here we have a Kerr black hole and we have a disk in the Kerr spacetime, and this is a dilaton black hole. You can run exactly the same simulations on a Kerr spacetime or a dilaton black hole spacetime. And of course you have to make sure that some of the properties are the same, for instance the mass, but also maybe the size of the horizon or whatever. And then you perform the simulations, and you can start convincing yourself that, okay, they look different, but they also look very similar. And of course it's not the plasma simulations we are going to compare; what we will end up with are images. Unfortunately the quality of the movies is pretty bad because of Zoom. But what you can do is you can take the image coming from Kerr or coming from a dilaton black hole and look at it very, very carefully. And then you convince yourself, well, they are different, okay, but they are also very similar. And if you then add the fact that, you know, if you refer to Sagittarius A*, the source at the center of the galaxy, then there is additional scattering, then you really have to compare this guy with this guy over here. And of course they are different, because, you know, general relativity provides you uniqueness of the solution, but they are so close that the conclusion you obtain is that it is not possible to distinguish a dilaton black hole from a Kerr black hole with the present precision of the results. Okay, another popular alternative, a black hole mimicker, is called a boson star. So this is really not a star; it's actually a huge object, but its core is very, very compact, so compact that it looks like a black hole, yet it doesn't have a horizon and doesn't have a surface.
You have to imagine it's a condensate of bosons, and, you know, mathematically there is nothing wrong with them; physically they are allowed to exist. So the issue is really whether or not at the center of our galaxy there is such a boson star. And if there was, or at the center of M87, what would it look like? And so once again, what you do is you carry out simulations. You have to imagine that here there is a boson star; you don't see it because this field doesn't interact with matter in any way, it just interacts in terms of gravity. And what you see is that as the simulation proceeds, matter, because there is no surface of this object and there is no event horizon, can go very deep inside the boson star, almost to the center, not quite to the center, because once it gets very close to the center it has a lot of angular momentum and will just feel a centrifugal repulsion. But you can see that, as compared to a black hole, there is a lot more matter, and therefore luminosity, at the center. And so the size of the shadow will be different. And this is a Kerr black hole, this is a boson star; you can see how the image is much smaller. This is particularly evident when you look at the deconvolved image. And you can see that the shadow, or the dark, this is not a shadow really, but the dark image here, is much smaller than the image here. And of course, you measure the shadow, so you measure a size, you measure a mass, and so you can determine whether or not what you're seeing is a boson star. And what we have shown with our observations is that, at least in the case of M87 and the simplest cases of boson star models, what we observed cannot be a boson star. So we have excluded this object from the possible explanations. Okay, I want to come to the conclusion. The Event Horizon Telescope has provided the first image of a supermassive black hole, and of course it has boosted what we understand of strong gravity. Because of having to deal with this and explaining this observation, we have carried out in maybe 18 months more simulations, and gained more understanding of what happens in accretion onto black holes, than in the previous 10 years. That's also because there were a lot of people working together on this. We're starting to study alternatives to Kerr black holes: boson stars can be distinguished from black holes, other black holes cannot. And if you want, the event horizon has really been transformed from a concept, something we write on a blackboard when we explain general relativity, to a testable object. And the last thing is, if you're interested in this and you are not a general relativist but you want to know more about it, there is a book which has just been published that explains all of this in a bit more detail. Thank you. Maybe I can... sorry, maybe we can switch over for the questions. Okay. So are there questions from the audience here? Yes. My question is: you use different clocks at the different stations. How do you synchronize the different stations? Right. So when you use the timestamp, at each telescope you measure a given electric field at a given time, to the precision of a nanosecond. And then you take the data. So this is maybe something I omitted: each telescope records the data, and then all of the data from all the different telescopes are brought together into a correlator. These are supercomputers where all the data streams are put together.
And of course before you put it together you have to align it so that the time axis is exactly the same in all telescopes. Once you have the correlated signal, then you can do the next step, which is doing the interferometry. So that's how you can be certain that you are really measuring the same wavefront.
I will briefly discuss how the first image of a black hole was obtained by the EHT collaboration. In particular, I will describe the theoretical aspects that have allowed us to model the dynamics of the plasma accreting onto the black hole and how such dynamics was used to generate synthetic black-hole images. I will also illustrate how the comparison between the theoretical images and the observations has allowed us to deduce the presence of a black hole in M87 and to extract information about its properties. Finally, I will describe the lessons we have learned about strong-field gravity and alternatives to black holes.
10.5446/57508 (DOI)
So, good afternoon everyone, and thanks to the organizers for giving me the opportunity to share my research with you. And yeah, this research is part of a project named FreshPak which focuses on mathematical modeling and simulation of the post-harvest supply chain, which means packaging and storage of fresh produce. As the last step, we are developing an Arduino-based control system to control the gas concentration inside the container of fresh horticultural produce. Fresh produce is special among foods because it is highly perishable, and this perishability is because of its physiological activity. And the main physiological behavior is respiration, which is the oxidative breakdown of substrate molecules such as starch or sugars or organic acids into simpler molecules such as H2O or carbon dioxide. And this respiration also generates some heat, which causes some evaporation of moisture from the fruit or vegetable surface and causes transpiration. Both of these behaviors are responsible for a loss of quality in fruits and vegetables: respiration causes senescence, loss of firmness and also oxidative mass loss, and transpiration causes loss of turgidity, wilting and shrivelling of produce, and that makes the highest contribution to food loss for horticultural produce. And to prevent this, we must manage the main factors that affect the quality of fresh produce. The first factor to be controlled is temperature, because respiration and transpiration have a direct relationship with temperature; that's why we decrease the temperature in storage. Another factor is gas composition. Fruits and vegetables are respiring, and when the oxygen concentration is low, the respiration rate is also lower, which prevents the senescence and the consequences of respiration. And also, in a higher humidity atmosphere, less transpiration occurs, and that's another reason for controlling the environmental conditions. But what happens in the supply chain is that we cannot control the environmental conditions through the whole supply chain, because we have temperature variations or gas composition variations. That's why we must prevent this somehow, and there are some concepts to preserve the quality and extend the shelf life of fresh produce. As you see here, fresh produce is producing CO2 and consuming O2 and also transpiring. The first concept is to provide a controlled atmosphere condition, so that we have low temperature and also low oxygen and high CO2 concentration, which prevents the quality loss, and also high relative humidity. But these controlled atmosphere facilities are sometimes of high cost because they have complex infrastructure. And that's why we try to go to modified atmosphere packaging. That means if we put fresh produce in a rigid package which is sealed, the produce's own respiration modifies the gas composition inside the package. That means it decreases the O2 because of respiration and it increases the CO2; both are desirable for us. But if respiration continues like this, we will have anaerobic conditions. And that's why we need to have some permeation in the package, so that with this permeation we can adjust the gas concentration to what is desired for us. And the first step in packaging design is to adjust this permeability to gases and water vapor. But the problem is that when temperature is changing, this gas composition is also lost, because respiration changes with temperature.
Respiration rate changes with temperature, but the gas permeation normally does not change as much; I mean, they are not doing the same thing, and that makes a problem when temperature is changing. The other concept is modified atmosphere containers. There are different types of this technology. One is gas membrane systems: these membranes are for gas and water vapor permeation. The second one is controlled ventilation systems. That means we provide a hole into the container, but we try to control the gas exchange through this hole as we want, so that it can fix the gas concentration inside. This is one sample of the first type, the membrane containers. You see that the fruit is harvested and then moved to the containers. But this concept is normally used just for storage, so that, based on the type of fruit and the amount of fruit inside, we use the desired number of membranes. You see here that there are some caps on the membranes; we can open or close them as we want to have the desired gas concentration inside the container. But still the problem is that if the temperature changes, we have to change the number of membranes here. That's why they are just used for storage, because at big scales it's not easy to always change this. But our idea was to use controlled ventilation. This has been studied before. And here you see that there is a gas diffusion tube in a container which controls the gas exchange for the whole container. And it uses an algorithm based on the respiration rate in terms of O2. We have an O2 sensor inside which measures the respiration rate for us. And in the calculations, we somehow translate this O2-based respiration rate into the opening ratio of the tube, so that the algorithm gives us the opening ratio over time. And once we have more fruit inside, or the temperature increases, we have to increase the opening ratio, that means the time that the tube is working, because there is a blower inside the tube, and this controls the gas exchange through the tube. But again, the problem with this system is that it relies on the oxygen sensor, and an oxygen sensor sometimes adds more cost to the system. So instead of getting feedback from the oxygen sensor, we are thinking of just using simple temperature sensors. That's the system that we have developed. You see fresh fruits inside, and the system is based on the gas exchange through a really small tube, as you see here. And the system is based on a gas set point. For example, when you work with sweet cherries, it can be that the desired gas concentration for sweet cherries is about 10 to 15 percent CO2. If you use the midpoint, like 12.5 percent of CO2 inside, and you run the system, the fruits start to respire, and the concentration of CO2 increases towards the set point. When we reach the set point, the blower starts to work and decreases the concentration of CO2 again. But it must be stopped at some point and started again, to keep the gas concentration in the desired range. And yeah, for that, we are getting help from mathematical models to predict the blower ON frequency for us. Blower ON frequency is the time that the blower is working in a one-hour cycle. And yeah, for example, here are the simplified equations for the numerical simulation of the gas concentration inside. The case study was on sweet cherries, 20 kilos of fruit inside. And as you see here, the total change in CO2 concentration is the sum of the change because of blower exchange and the change because of fruit respiration. And oxygen also works the same way.
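In equation form, the balance just described can be sketched roughly as follows; this is a schematic reconstruction, where the symbols (free air volume V_f, fruit mass m, respiration rates R, blower exchange term Q) are my own labels and the exact formulation on the slides may differ:

$$ V_f\,\frac{d[\mathrm{CO_2}]}{dt} \;=\; m\,R_{\mathrm{CO_2}}\big(T,[\mathrm{O_2}]\big) \;-\; Q_{\mathrm{blower}}(t)\,\big([\mathrm{CO_2}]-[\mathrm{CO_2}]_{\mathrm{air}}\big), $$

$$ V_f\,\frac{d[\mathrm{O_2}]}{dt} \;=\; -\,m\,R_{\mathrm{O_2}}\big(T,[\mathrm{O_2}]\big) \;+\; Q_{\mathrm{blower}}(t)\,\big([\mathrm{O_2}]_{\mathrm{air}}-[\mathrm{O_2}]\big), $$

where the blower exchange term Q_blower(t) is nonzero only during the seconds in which the blower is switched ON.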
But at one time we can just control one gas concentration, and the other one will be controlled automatically, because there is also a relationship between CO2 respiration and O2 respiration. As you see here, the mass of fruit inside the box is important, because it determines the amount of respiratory gas production. And for that, we have been using these respiration rate equations. These equations are Michaelis–Menten-type equations. As you see here, another factor that is important in these equations is the O2 concentration inside the container. And the other equations are related to the gas exchange of the blowers. We have developed some lumped capacitance models for the gas exchange through the blower. And these equations also rely on the gas concentration inside and also on the permeation coefficient of the blower. And on the right hand side, you can see the algorithm for the numerical simulation of the gas concentration inside. As you see here, we have some inputs: whatever the gas concentration inside is, and the mass of product, and also the coefficients for the respiration rate and the gas exchange rate through the blower are used as inputs. And we do the numerical simulation at one-second resolution. That means we check the gas concentration every one second in the calculations. You see here, we have the initial gas concentration. And then, if the gas concentration has met the set point, which is 12.5% here, the blower will be switched on to decrease the CO2 value by injecting some fresh air from outside into the container. And if we have not met the set point yet, the blower stays off. And this continues through time for one specific temperature, so that we will have such a result. You see here, for example, at six degrees, the first calculations in the upper graph show that the blower is always switching on and off. Actually the switching-on cycle is just one second, because you see the gas control range here is really small, less than 0.001% of CO2. And in the lower graph, I have separated one cycle, one on-and-off cycle. You see that the blower is on for just one second; it decreases the gas concentration because it has met the set point. And it needs about 110 seconds to again reach the set point, because of the fruit respiration. But in a real application it's not possible to switch the blower on and off for such a short time. That's why we convert the calculations to a one-hour base. That means we calculate the blower ON frequency as seconds per hour here. And you can imagine that it's the sum of the total opening cycles which happen in one one-hour cycle. And here, in the case of six degrees Celsius, it's 32 seconds per hour. And these simulations are repeated for the whole temperature range that we want to have. And from zero degrees to 30 degrees, you see that the blower ON frequency changes from near zero to, yeah, 3,000, near a full one-hour cycle. And yeah, this blower ON frequency is then introduced into the microcontroller system to control the gas concentration inside the container. And in the case of sweet cherries, we have done experiments with 20 kilos of sweet cherries and the set point that we introduced. And here are the results of the gas concentration control at three different conditions. The first condition is low temperature, the second is high temperature, and the last one is changing temperature. And the aim is to see if the gas concentration keeps the same, even if the temperature is low or high or changing.
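(A minimal sketch of the one-second ON/OFF simulation loop described above. All coefficient values are placeholders chosen only to make the sketch produce a sensible-looking number; they are not the fitted sweet-cherry parameters, and the respiration and exchange terms are simplified stand-ins for the Michaelis–Menten and lumped-capacitance models of the talk.)

```julia
# Count how many seconds per hour the blower must run to hold the CO2 set point.
function blower_on_frequency(; hours = 1, setpoint = 12.5,
                             co2 = 12.5, o2 = 21.0,        # initial concentrations, vol-%
                             mass = 20.0, Vfree = 150.0,   # kg of fruit, litres of free air
                             Rmax = 1e-2, Km = 2.0,        # placeholder respiration constants
                             kblower = 0.01)               # placeholder exchange coefficient
    on_seconds = 0
    for _ in 1:(3600 * hours)
        R = Rmax * o2 / (Km + o2)            # Michaelis–Menten-type respiration rate
        dco2 =  mass * R / Vfree             # CO2 produced by the fruit this second
        do2  = -mass * R / Vfree             # O2 consumed by the fruit this second
        if co2 >= setpoint                   # set point reached: blower ON this second
            dco2 -= kblower * (co2 - 0.04)   # exchange with outside air (~0.04 % CO2)
            do2  += kblower * (21.0 - o2)
            on_seconds += 1
        end
        co2 += dco2
        o2  += do2
    end
    return on_seconds / hours                # Blower ON Frequency in seconds per hour
end
```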
And here you see that the dots are the measured values and the lines are the predicted values. And there is a good agreement between predicted and measured values. In the first graph, it's in lower temperature. So these fluctuations are not that visible. In the high temperature, you see more fluctuations or bigger fluctuations that are visible. But still the gas concentration, the immediate gas concentration keeps constant through time. And in the last graph, you see that we have changed the temperature, which is the green line here. And blower on frequency is a function of temperature as I introduced. And this blower on frequency is changing throughout the time. And you see, although the temperature was changing or lower on frequency was changing, but it could keep the gas concentration constant. But from left hand to right hand of this graph, you see that the magnitude of these fluctuations are decreasing because the temperature is decreasing. So less lower operation time is needed to keep the gas concentration constant. And of course, the blower could operate just part of the time, part of an hour, or fully operational for one hour. But if the blower on frequency is more than one hour, so we will lose the control. And this is the limitation. That's why we need to think about more blower in the container or using blower of higher flow rate. You see here for 20 kilo of fruit, it could control the whole range of the temperature range. But once we increase the fruit mass to 100 kilo, we will lose control for some temperatures. I think the maximum temperature here control is about 20, I'm not sure. But the design is flexible. We can use more blower or blower of higher gas exchange rate and control the gas concentration in the container. But again, the respiration rate equation used here for assuming that the respiration rate is fixed through the time. But if we are working with some fruits, we call them climatic fruits, that's for them the temperature and the respiration rate is changing as a function of time because the ripening affects the respiration rate, so we need to develop more complex models for respiration rate and fit them through the model design. I guess contact system. Thank you. Okay, thank you, Ali. Questions, anyone? Okay. Okay, so I just might start. So thanks for this talk. On slide eight, if you can just go to that. We saw and you also explained that on the top left, we've got this, yeah, with high frequency oscillating. So this blower is turned on and off again. And you said like, so we said 12.5% a set point because the optimal is somewhere between 10 and 15, right? But now your condition is quite strict, right? You said just, okay, whenever it's above 12.5, you just turn it on and if it's below, just turn it off. Have you tried to allow for a larger range, like we turn it on if it's above 13 maybe, I would say, and we turn it off if it's below 12. So not being that strict with the, yeah, with the condition. Yeah, have to try this. It's possible, but it increases the tolerance because, yeah, sure. Imagine here that we are having this small amount of change less than 0.01. But for six degrees, if you convert this to lower on frequency in seconds per hour, the tolerance would be here like 32 times of this tolerance. You see? No. I mean, when the blower is on for one second, the gas concentration change would be this amount. Yeah. But if we let it work for 32 seconds. Yeah. So we will have 32 times of this magnitude. And this is for lower temperature. 
If we increase the increase it to the higher temperature, like 30 degrees, the gas change rate would be, I forgot to include that into the result, but the tolerance would be around three persons. Okay. Yeah. So we must calculate somehow we have the less tolerance at the first step and by converting, see how it continues. Okay. Thanks. Yeah. Thanks. Yeah. I have another question regarding the blower signal. So it's a on off signal. It's like binary signal, right? Have you considered because that's all so a possibility to have, for example, an analog signal or PWM signal sent to a blower and then having a regular circuit which regulates the temperature. So you can have instead of having a free like a varying CO2 concentration have a more constant level of CO2 concentration. Actually I've been zooming and this graph and if you compare it to the full scale like 0 to 21% of gas concentration, like 21% actually there in the air, you will see this as a straight line. You see here the magnitude of change is really small. And I have actually zoomed and so that variation is visible. But of course you cannot prevent this variation because you have cyclic blower operation. But the variation is here quite small. I think one second would be enough. And if you want to use less than one second, the result you have to convert the final results to one hour base calculations. And it doesn't matter. I mean finally you will have this value if you want your blower just works once per hour. And this is less than that is not meaningful because yeah, a microcontroller, every time that the microcontroller controls the gas concentration by switching on and off, you will have some error because of starting and stopping the blower. And if you use it more frequently to have really constant gas concentration, you will add this error to the result. But it results in nothing because here in 30 degrees still you have less than 2.5 or 3% tolerance and it's quite good. And of course this 30 degrees is not normal in storage, but it may happen just at the hardest point or something. But the normal range would be less than 20 degrees. So the tolerance would be enough and you don't need to increase more, decrease the blower operation cycle to have more constant value of gas concentration. Okay. So that's an answer. Yes, thank you. I would imagine that if as containers get larger as the massive produce start increases, this model at some points that having difficulty is not only because of limits of fan performance, but also because of the homogeneity of the gas production and distribution. How so you would need, I don't know, more fans in different places and more sensors or something. How much of have you observed of these effects at the scales that are interesting for produce transportation and storage? I guess a good question, but even in 150-liter, 190-liter container that we had here, we have not observed any homogeneity of today. Not being homogenous gas concentration because the gas diffusion right into the air is quite high to give you some imagination in modified atmosphere packaging. For example, we have one liter tray to put fruits inside. And if you make just really a small hole in the film and in the packaging film, like one millimeter diameter, you can modify the gas inside. If you increase it to three millimeter diameter, you will lose the gas modification and you will have the gas concentration inside near the outside there. So the gas diffusion is quite high. Thank you. Okay. Any other questions? 
I'm wondering, if somebody buys this box and then puts fruit into it, does this system really work for any fruit, or do I have to have a small console and then enter: I use this type of fruit? How much input from the user do you need for this to work? Actually, it's the next step of the project to commercialize this concept. As you see here, one of the inputs to the numerical model, to the differential equations, is the mass of product. So if you change the mass of product, you will have another blower ON frequency. And also, if you change the container volume, I can say that it does not have a big effect on the blower ON frequency, but it has an effect on the equilibration time. That means you have more air inside the container if you have a smaller amount of fruit inside, and you need more time for the gas concentration, like for CO2, to go from zero percent to 12.5 percent. But yeah, the main factor is the type of fruit. Of course, if you change the fruit, you will have another respiration rate, and you will have another blower ON frequency. So the user has to input all of this? Like you have to enter? Actually, we have to develop a database with respiration rates and also the application range, so that we easily get the mass of fruit inside and the type of fruit and the volume of the container, and it calculates the blower ON frequency and feeds it into the... Thank you. Okay. Anyone else? Nope. Okay. So I think we then... Thank you again. Okay. All right.
Temperature is one of the most important factors affecting quality and shelf life of fresh fruits and vegetables. Fresh produce is exposed to changing temperature conditions during the supply chain. This makes a big challenge in designing storage transport containers, which are supposed to provide optimum modified gas concentration inside. In this study, a system was developed to actively control CO2 and O2 concentrations inside a storage container under constant and changing temperature. A mini blower exchanging air between the container and external atmosphere controlled the internal gas concentration. This was done with the help of a thin and long tube, which prevented air from entering the container but facilitated air exchange when the blower is switched ON. The Blower ON Frequency (s h-1) was modelled as a function of storage temperature, taking the type and amount of fruit, blower and tube properties and the set point of O2 volumetric concentration into account. The model was then used in programming an Arduino microcontroller to control the blower in response to real-time measurement of storage temperature. The developed gas control system was then validated by storage of sweet cherries. The system could control the CO2 concentration at the set point level (12.5 %) for constant temperatures of 6 °C and 17 °C and changing temperature from 17 °C to 9 °C by applying the required Blower ON Frequency. There was a good agreement between the measured values of gas concentration and predicted values from the simulation so that the maximum RMSE value of predictions was 0.24 % related to O2 at changing temperature condition.
10.5446/57515 (DOI)
Okay, so it's a bit tricky; it's always a little bit risky, and I'm taking even more risk: I'm trying to do the presentation using the notebook. So let's see how this goes. So, yeah, thank you for the possibility to have this talk, and thank you also for the previous talks. I'm using this example of the solution of the Einstein equations and the related detections in my introductory lecture on scientific computing as one of the recent highlights in scientific computing at all. Okay, so in this MMS business, we have, let's say, different threads of activities which we are doing. One part is of course, let's say, the exchange on scientific results and possible applications, on CFD or on materials science, but we also have this discussion about software quality and about tools for software development. In the past, we also organized several summer schools. And one part of this discussion is that we are also just discussing, let's say, new software tools, and I'm talking about the Julia language, with which I got involved, let's say, over the last five years. And so, as the talk title says, okay, I will give an overview of the language. This is of course quite a hard task; I can just highlight a couple of aspects of the language and hope that you will just hear something new or get interested in this language. Okay, Julia, a fresh approach to numerical computing. So this is the title of a paper in SIAM Review by the, let's say, founders of Julia. And essentially what they did, from my point of view, is they used modern knowledge of language design to design from scratch a new language which is, let's say, focused on tasks in scientific computing, and which makes, let's say, an attempt to make scientific computing and these things more accessible than they used to be before. Syntax: that means that we want to have simple syntax. Everyone kind of teaches, for instance, with MATLAB, because it's very easy to access linear algebra and so on in MATLAB. You want to have similar syntax to this, and Python with NumPy, as an alternative to MATLAB, is essentially in the same category; Julia is the same in this respect. On the other hand, we want the possibility to have high performance, and this without, let's say, having to write computational cores in C++ or C. And this is facilitated in Julia by just building Julia around the LLVM library, which provides the possibility to have just-ahead-of-time compilation to native machine code. Before Julia code is executed, it is translated to the machine code of your particular computer, and it runs just as this machine code. Yeah, so it has, like for instance MATLAB or Python with NumPy, built-in performant multi-dimensional arrays. It has comprehensive linear algebra available. This makes it, for instance, much easier to work with Julia than with C++, from my experience, as C++ still doesn't have standardized multi-dimensional arrays. On the other hand, we want to be able, of course, to reuse existing code. So there are different ways to integrate C, C++, Python, R and other languages. Parallelization is available in Julia. You have a composable package ecosystem, which is focused mostly on data science and scientific computing, and it's open source. Yeah, so I can just show you here, also the reproducibility story is interesting, so this is just the Julia homepage, which I'm showing here in this window.
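(To give a flavour of the MATLAB/NumPy-like syntax and the built-in arrays and linear algebra just mentioned, a tiny example:)

```julia
using LinearAlgebra

A = rand(1000, 1000)        # dense 1000×1000 random matrix
b = A * ones(1000)          # matrix-vector product
x = A \ b                   # solve A x = b via LU factorization
norm(A * x - b)             # residual norm, ≈ 0 up to round-off
```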
And there are some aspects of Julia, for instance what is called multiple dispatch, which make this language quite special, and I will talk a bit more about these things. Okay, if you want to use Julia, you need to install it and you need some workflow. The best way to install Julia is to download the Julia binary distribution for your particular system from the Julia homepage. It's also possible to install Julia via the system package manager; in that case you will be a bit more conservative concerning Julia versions, and there is still quite some pace in new Julia versions: the first stable version 1.0 was in 2018 and now we are at the current version 1.7. Between these versions the numbering changes but there are no breaking changes, and it is interesting to use the new features which come in. Then you need a workflow. I'm showing here a page that seems quite interesting — I discovered it while preparing this talk — called "From Zero to Julia", which has a lot of introductory material for Julia and this nice coding workflow image showing the Visual Studio Code Julia plugin. This is probably nowadays the best way to start with Julia. You can of course also write Julia source code in an editor and run it with the julia command line utility. You can use Jupyter notebooks in the browser — which many people using Python know very well; by the way, the "Ju" in Jupyter stands for Julia — or you can use Pluto notebooks in the browser. Pluto is a browser-based notebook interface for the Julia language; it allows you to present Julia code and computational results in tight integration in your browser, and it is easily installed as a Julia package along with other Julia packages in the package manager. This presentation is in fact such a notebook. Pluto notebooks, in contrast to Jupyter notebooks, are reactive: if you change data in one cell, all dependent cells are recalculated. I can try to demonstrate this here — if I change the code in this cell and execute it, the next cell, which does the plotting, is automatically recalculated. That means you don't need to keep the hidden state of Jupyter notebooks in mind if you work with Julia and Pluto notebooks. Okay, a particular feature of Julia is its built-in package management. If you work with Python you of course also have package management — pip install or something else — and Julia comes from scratch with its own package manager, and Julia packages all have the same standard structure. Package registries provide the infrastructure for finding packages via the package name; this is like a domain name service on the internet: you just name a package, and via the registry it is found and downloaded from the particular GitHub repository. The general registry now has about 7500 open source packages. Julia supports package environments, which record version compatibility in a file called Project.toml and which can also record exact package versions in Manifest.toml files, and these files are completely transferable. Essentially the idea is that this should allow for 100% reproducible code.
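A minimal sketch of how such an environment is handled from Julia's Pkg API; the package used here, Example.jl, is just a registered demo package and stands in for any registered package:

```julia
using Pkg

# Create/activate a project environment in the current directory;
# this creates (or reuses) Project.toml and Manifest.toml here.
Pkg.activate(".")

# Adding a package records a compatible version in Project.toml
# and the exact resolved versions of all dependencies in Manifest.toml.
Pkg.add("Example")

# Anyone who receives the two .toml files can reproduce the environment:
Pkg.instantiate()

Pkg.status()   # show what is recorded in this environment
```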
Adding a package to an environment has just two parts: you need to add the package to the environment, and then you can use the package for doing something. Pluto notebooks like this presentation include their own package environments, including the Manifest file, which records the exact versions of the packages that are used; they have their own built-in package manager, so you don't see explicit Pkg statements in the examples which I show. That means the Julia people made a significant effort to support the FAIR research principles, making code and software findable: we have this registry, we have community infrastructure which allows communicating about code, and there is also the Discourse forum where the different communities are active. The code is accessible, of course, by open source licensing and by package hosting on GitHub or GitLab. We have the Documenter package for automatic documentation generation, and most packages have quite some documentation. Interoperability is quite important: Julia code is designed around informal interfaces, like the array interface or iterator interfaces, and uses something like duck typing — something that people working with C++ templates also know very well. This allows interoperation of packages which have been developed apart from each other; I will demonstrate this later in the talk. Okay, reusable: Julia packages have a predefined structure and a standard test and metadata layout, which makes them transferable and easily installable within the Julia ecosystem. A particular feature of Julia is that it also can have binary packages. There is a foreign function interface which allows calling binary code that conforms to the application binary interface of the C language. There are also tools to wrap C++ code, similar to pybind11 in the Python world. And there is a tool called BinaryBuilder.jl, which provides container-like infrastructure to build binary packages for all architectures supported by Julia. For instance, I am maintaining some binary packages for mesh generation, and I can build the binaries for all supported architectures on my laptop and test these things out. So you can leverage software libraries from the world of compiled languages, have access to GUI toolkits and so on, and the binary packages are handled by the package manager as well. So here's an example: I just include the GNU Scientific Library binary package — this is a so-called jll package. I can call a function through the C application binary interface by using the ccall mechanism of Julia and describing the interface: this is the library, which comes from the binary package, this is the name of the symbol in the library which is accessed, here we describe the parameter types which need to be passed, and finally we pass the parameters. You can use the same mechanism to load entry points of a shared object which you compile yourself from C, and then use ccall to access it. And there is a wrapper package around this binary package, which provides a more Julian way of accessing the same functionality, as just a normal function.
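As a minimal, self-contained illustration of the ccall mechanism described here (using functions from the C standard library rather than the GSL routine shown in the talk; the libm call assumes a Linux-style math library is loadable under that name):

```julia
# Call a plain C function through the C ABI.
# :clock — symbol name; Int32 — return type; () — (empty) argument type tuple.
t = ccall(:clock, Int32, ())          # POSIX systems
println("clock() returned ", t)

# With arguments: C's pow(double, double) from the math library.
y = ccall((:pow, "libm"), Cdouble, (Cdouble, Cdouble), 2.0, 10.0)
println("pow(2, 10) = ", y)
```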
This demonstrates the possibility to reuse lots of scientific computing code for instance from from the word of C and C plus plus and how these things are handled and can be provided also by just a developing can be provided to all other packages, Julia supporting. So of course, that's the new scientific library or some other pages easy if it involves them all kinds of graphical things is more complicated but there's quite a bit of stuff available with respect. Okay, Julia has a very, let's say, a particular type system. So Julia is a dynamical type language. It feels if you're as if you write Julia code is like in Python, which is also dynamic type language, but it's also strongly type language. Each data, each, each object in Julia has its own data type which can be detected by the way it is represented yeah I can the floats I can have booleans. I can have ranges and so on and Julia just if you write code and Julia is fixed the type and knows the layout of the data and the knowledge about the value of this of a particular variable in memory is encoded in this type and is the previous which for for compiling to the common machine code. Yeah, and I demonstrated you some concrete types. Yeah, every value in Julia has a concrete type and Julia also has a concept of abstract types which essentially label concepts which can work for several concrete types together with regard to their memory. And so I don't go into specialties here. So Julia functions can have different variants of implementation depending on the types of parameters past and each in these words are called methods. So independent. So in C++ and an object oriented languages. Objects or classes have methods in Julia functions have methods. And the act of figuring out which method of a function to call depending on the type of parameters is called multiple dispatch. And then, for instance, define here for a function test dispatch with four methods. I can have a more general case they are just printed type I can have a special case for floating point and for integer data. And then I can have a special case also for the abstract data type abstract area, which then and these print out some data about the variables and say okay is this a general case is it a special case or is it the abstract area case. I think I can, for instance, say okay so if I, I didn't define. So if I call this this better just variable faults I just fall in the situation that I just use the general method. If I put a different parameter here. I find okay is a special case in 64. And in a range, I just fall back into the abstract area interface. And I can collect this range into an area. I get a vector and of by default of integers, which also appears to the abstract area interface, and so on. So, and after these calls test dispatch has been called compiled to several instances. So I need to rerun this. And now we have four method instances for this test dispatch function. Each has been now compiled for these particular data type. Yeah. So that means Julia generalized specialized code for function methods depending on the type of function parameters if these types can be let's say determined. And this is an interesting paradigm for code structure and code extension users can add new methods to given functions for their own data type, for instance. And the knowledge of parameter types is a previous it also for optimization and performant code for functions can be tailored to the particular parameter types. 
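A minimal sketch of the kind of dispatch hierarchy described above; the function and variable names here are illustrative, not the ones from the talk's notebook:

```julia
# One function, several methods, selected by the types of the arguments.
describe(x)                 = "general case: $(typeof(x))"
describe(x::AbstractFloat)  = "floating point: $x"
describe(x::Integer)        = "integer: $x"
describe(x::AbstractArray)  = "abstract array with $(length(x)) elements"

println(describe("hello"))        # falls back to the general case
println(describe(3.0))            # floating point method
println(describe(3))              # integer method
println(describe(1:5))            # a range is an AbstractArray
println(describe(collect(1:5)))   # so is a Vector{Int}

# methods(describe) lists the four methods; each call above triggers
# compilation of a specialized method instance for the concrete argument type.
```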
So let me show this a bit more. I define the function square(x), which just computes x times x. I can print the @code_native output here, which shows that there is an integer multiplication going on. If I call the same function with 3.0, I get native code with a double-precision multiplication. Compare this with Float32: I get a single-precision multiplication. Julia of course also has Float16 — oops, that's the danger of an interactive presentation — and for Float16 we see something more complex. This means that Float16 is not supported by the hardware, so it is not available in the assembler and cannot be compiled directly to machine code; we need quite a bit of, let's say, voodoo in order to handle 16-bit floating point numbers, and that means code with Float16 will be significantly slower than with the other float types. This just demonstrates how it works, and if you try more complex functions you get the same thing: they are compiled and instantiated for the particular data types. And that's what we use next, with so-called dual numbers. Let's start the other way around: complex numbers extend the real numbers by introducing an imaginary unit i whose square is minus one, and dual numbers extend the real numbers by introducing a number ε whose square is equal to zero. If we look at how polynomial evaluation works on dual numbers, we can collect the terms at 1 and at ε, and we see that at 1 we get just the value of the function, and as the prefactor of ε the derivative of the function pops out. That means you can automatically evaluate the function and its derivative at once. This is one way to implement forward-mode automatic differentiation; it can be generalized to partial derivatives, and this can be leveraged — I show this later in the talk — to assemble, for instance, Jacobians of complicated nonlinear functions. So how can we use this? This just shows how to define a user data type in Julia: we have a struct, I call it MyDual, to define dual numbers, with v for the value and d for the derivative part. This is a parameterized data type, like a template type in C++; it is parameterized by the type of these values. If I just have an ordinary number, I can make a dual number from it by setting the derivative part to one, and then I can define some operations on dual numbers — addition, subtraction, multiplication — and this is sufficient for this talk to implement the evaluation of polynomials on dual numbers. If we take a dual number and call our polynomial — our square — on it, we see again that this is compiled to just a couple of assembler statements, which makes it as efficient as possible. If we want to check this further, we can take a polynomial and implement its derivative by hand, then compare: we evaluate the polynomial at a dual number made from x and compare against p(x) and dp(x) — this gives the same results (up to rounding). We can use this.
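The struct from the talk's notebook is not reproduced here; the following is a minimal stand-in showing the same dual-number idea, with field, type and function names of my own choosing:

```julia
# Dual number v + d*ε with ε^2 = 0, parameterized over the value type.
struct Dual{T<:Real}
    v::T   # value part
    d::T   # derivative part
end

import Base: +, -, *
+(a::Dual, b::Dual) = Dual(a.v + b.v, a.d + b.d)
-(a::Dual, b::Dual) = Dual(a.v - b.v, a.d - b.d)
*(a::Dual, b::Dual) = Dual(a.v * b.v, a.v * b.d + a.d * b.v)  # product rule, ε² = 0

# Plain numbers enter as constants, i.e. with zero derivative.
+(a::Real, b::Dual) = Dual(a + b.v, b.d);   +(a::Dual, b::Real) = b + a
*(a::Real, b::Dual) = Dual(a * b.v, a * b.d); *(a::Dual, b::Real) = b * a
-(a::Dual, b::Real) = Dual(a.v - b, a.d)

# Evaluate p and p' at x = 3 in one sweep:
p(x) = x*x*x - 2*x + 1
x  = Dual(3.0, 1.0)           # seed the derivative part with 1
px = p(x)
println("p(3) = ", px.v, ",  p'(3) = ", px.d)   # 22.0 and 25.0
```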
But this is a rudimentary example; Julia has the ForwardDiff package, which has a comprehensive implementation of operations, special functions and so on for the same dual-number idea, with its own Dual type. We can use a function called ForwardDiff.derivative: we define a function and we get its derivative, and these can be arbitrarily complicated functions — we can just modify them and don't have to change any code for the derivative. This allows us, for instance, to write a simple Newton solver just for demonstration; I mostly skip the details. For the Newton method we need to calculate the Jacobian of the nonlinear operator and solve a linear system, and we use this as the update in the iteration. I just change the data to demonstrate it properly, then we call our Newton solver, and from the sizes of the updates we see the quadratic convergence nicely. I chose this function of course so that it converges, and we can check the result. In general, if you go to large systems of equations, Julia also has tools for sparse systems and automatic sparsity detection, and ForwardDiff can be combined with sparse matrices; there are packages and software that use this. So this is one part of the Julia ecosystem. Another part which I shortly want to cover, because it's a really great tool, is the package DifferentialEquations.jl, which provides tools for solving all types of differential equations; I will show this again later. This is just the example of a Lorenz attractor which I lifted from the documentation: here you describe the function defining the Lorenz system, here you solve it and plot it. Once again we can change some data — let's take 20 here — and the solution is recalculated and we get a different trajectory; it's very comfortable. This is a multi-language suite for high-performance solvers of differential equations and for scientific machine learning — there is a big effort to also include neural networks in these things. The package has solvers for systems of ODEs, differential-algebraic equations, stochastic differential equations and delay differential equations. The main author is Chris Rackauckas — the guy here on the right — who was also present at the Leibniz MMS summer school 2019 about modern programming languages for science and statistics, R and Julia. We had very nice communication with him; he gave lots of hints on what we can do with the language, he's a very nice guy and very present in the Julia community for everyone who tries this out. Here I just give the list of the solvers which are available, the full list of methods, and it appears that really most of the state-of-the-art differential equation solvers can be accessed via this package — multistep methods, Sundials and so on. That's quite interesting, and it means that if you do time-dependent simulations you don't have to implement the time stepping by yourself; you can try to use this package.
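The Lorenz example referred to above follows the pattern of the DifferentialEquations.jl documentation; a minimal version, with the classic parameter values, looks roughly like this:

```julia
using DifferentialEquations

# Classic Lorenz system, in-place form du = f(u, p, t).
function lorenz!(du, u, p, t)
    σ, ρ, β = p
    du[1] = σ * (u[2] - u[1])
    du[2] = u[1] * (ρ - u[3]) - u[2]
    du[3] = u[1] * u[2] - β * u[3]
end

u0    = [1.0, 0.0, 0.0]
tspan = (0.0, 100.0)
p     = (10.0, 28.0, 8 / 3)

prob = ODEProblem(lorenz!, u0, tspan, p)
sol  = solve(prob)             # solver chosen automatically, e.g. solve(prob, Tsit5())

# sol(t) interpolates the solution at time t; sol[1, :] is the trajectory of u₁.
println(sol(50.0))
```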
Another package is VoronoiFVM.jl; that's a package which we are developing at the WIAS institute in order to make available the finite volume method approach which we have been developing for many years and which we also use in semiconductor device simulations and other fields. Here we look at systems of partial differential equations which have reaction terms, flux terms and storage terms, and in short we can write this as a kind of vector function. Essentially we can describe the physics of the system by storage terms, reaction terms, flux functions and a source term, which of course can also depend on space and time. Then we can subdivide our domain into something like representative elementary volumes, which we can also call control volumes, in a particular way, so that we can essentially go one step back into the derivation of this type of conservation equation by using the Gauss theorem and looking at the species balance for each of these control volumes. This allows us to write a discrete system of equations which essentially describes the evolution of the values at the collocation points, which takes into account the reaction terms, possibly boundary terms, and the storage terms which we already had, and which replaces the flux integral through the interface between two neighboring control volumes by discrete flux functions describing finite-difference-like fluxes between two collocation points. Then we can use this kind of parameterization of the physics of the system for an application programming interface. Here I demonstrate the so-called Brusselator problem, which has two species that move due to diffusion and interact via some reaction terms; here I set some parameters and set up a one-dimensional or two-dimensional discretization grid. Then we can write the storage function, a diffusion (flux) function and the reaction function describing the physics of the system. Here we use just simple finite differences for the flux between two neighboring control volumes, the storage function is trivial because we just have the unknowns under the time derivative, and here we have the reaction part. Then we can put this together with a grid into a system which we can try to solve. We set up some initial value, and now we set up two solution methods. I started to develop this package using the implicit Euler method, because we are very much focused on stability of discretizations, and the implicit Euler method gives us, in a sense, unconditional stability; this was so far our working horse. This is of course linked to the fact that at each time step you need to solve a nonlinear system of equations, which essentially uses these functions. For solving the nonlinear systems we use Newton's method, and we put these functions through the automatic differentiation part of Julia, so that at the same time as evaluating these functions for calculating the right-hand side for the Newton method, we gather the derivative information.
And then we can assemble the sparse matrix for the Jacobian from the derivatives which we calculate automatically from this data — so there is no need to specify derivatives in your code, it's all done automatically, and ForwardDiff is quite performant here. We can then put this into an implicit Euler solver, which is of course rather easy to implement, with some adaptive time stepping. This was the starting point, but then I learned that DifferentialEquations.jl might not be so bad to try out, and this would give me the possibility to use all kinds of solvers. So now it is possible to turn the system into an ODEProblem in the sense of the DifferentialEquations.jl package, solve it with that package, and reshape the result into the solution of my system. This looks easy here, and it is indeed not much work — maybe 150 lines of code to write in order to have this. It demonstrates quite well the interoperability abilities of Julia: essentially everything communicates via abstract interfaces, and that includes the handling of the sparse matrices and of the solution vectors. Okay, let's go back to dimension one first, and start with the implicit Euler method. I'm calculating the solution here in the notebook, and I can show the evolution of the solution, with some structure formation due to this Brusselator system — I'm using this system just as a demonstrator because something cool seems to happen in it. I can then use a different solver from DifferentialEquations.jl and get the same kind of evolution. That means I have a solution time of 0.7-something seconds for this evolution; if I switch back, it takes about six seconds. Let me go back to the one-dimensional case once more: compared with my implicit Euler implementation, the DifferentialEquations.jl solver seems quite a bit faster, because of more efficient and possibly higher-order methods, and it allows different ways to solve this. The efficiency is really quite high — I was amazed that also for the two-dimensional, or even the 3D case, the implementation in this DifferentialEquations.jl package is, for these larger PDE systems, faster than what I did with my own implicit Euler. It is very efficient and works amazingly well with all this sparse matrix handling and so on. And this is only one aspect of that package. So I just made some very bold statements, and people make quite a lot of bold statements about Julia, which are all true — but there are also some things you get confronted with if you use Julia which are not that nice. Julia is a relatively young language. One point is of course the compilation time, which is the price for this multiple dispatch and this essentially generic programming which you have seen in the demonstrations. People who started early with C++ and began to use templates in the 2000s also had very long compilation times once there was heavy use of templates — the Boost library and so on — and in C++ this improved bit by bit. A very similar problem is behind the compilation time problem in Julia: we have this generic programming, then we need to detect the types, specialize for the particular types, recompile and so on.
And this is one every time we start some Julia. So that means, pre-combination time is something which sometimes can hurt you. People are very well aware of this and this approved already quite a lot and I hope that you further improve. So some, let's say tricks to, to, to, for instance, if you work with the Julia command line tool, there are some tricks to having packages which try to avoid large parts of recompiling by just detecting only those functions which have changed. So, there's a plotting package available in Julia. But for covering all plotting needs, which means, at the same time, you want to have speed, portability and feature completeness for plotting, it may be necessary to use more than one. So you can do many things in one package but some something will be just kind of not be there or it will be so it still needs to be worth a little bit. So publishing performance requires elements of advanced knowledge. So it's easy to write Julia code. Let's say at a level of Meta, but it's also easy to hit some situations where you to the way for instance types are recognized. And performance significantly can can can can be great. And it's still there's no good ways to that these things are explained on an elementary level so I'm trying also to think about how to explain this. So we had some communication with Oswald Knurth, who can tell you probably more about this, these, these problems he got also a little bit hit by this. Interfaces are informal. That means we have an abstract area interface or we have an iterator interface, but they are not described by some language tools. Yeah, and so that means it's quite hard to. So the interface needs to be documented in the documentation and then it's quite hard to have a comprehensive description of an interface and if you just describe implement an interface for your own data type. You just can miss parts of the interface which are needed to be implemented. And then you might might get to some for big methods which are not that efficient. It's like for instance if you want to have your own sparse metrics structure you can implement lots of methods for handling your own sparse metrics structure and if you miss parts of the the interface. So you can put the foot picks in and turns the sparse metrics into a full metrics and you lose all your performance advantage and this is not described in a formal way. So C++20 has these concepts which formalize interface different I like to wish to have something in Julia but Julia people are saying okay so formalizing these things might kind of narrow the space of possibilities and they're fearing to do this. So, we will see how this will play out so this is just a discussion or maybe Julia 2.0 will have something like this. Julia is a much younger language than for instance Python so the package ecosystem is much smaller. And the focus is on computing less often general purpose language. And of course at the other hand we can reuse Python packages. So I just show you some part of some, let's say, elements of the some some people are interesting Julia packages I just most of them I didn't try out by myself, but just to know that there are several packages for gmk.gl.gl.gl.gl.gl. There's a modeling language for mathematical optimization and jump.gl, turing.gl allows to do bias and base and inference with probabilistic programming. And gmk.gl is a great package for plotting with GPU support so we can really have really fast plots. 
On the other hand, that one for instance suffers a little bit from the compilation time. And there is PyPlot.jl, which gives you the complete functionality of Python's matplotlib in Julia with a very similar syntax, so you can essentially look up matplotlib information and write the code in Julia. This works very well and is very comprehensive; on the other hand it is inherently slow. But it produces high-quality plots for publication in a very easy way and is well documented, and there are further plotting packages besides these. On top of DifferentialEquations.jl there is the so-called ModelingToolkit, which tries to provide tools for handling systems of differential equations and so on, including also parts of neural networks, with support for symbolic manipulation. There is a package for probability distributions and associated functions. There is quite good support of CUDA and GPU arrays, so you can do calculations on the GPU with code which looks like your CPU code. There is the finite element package Gridap.jl, and there is also a toolkit for verification analysis. There is also a nice static website builder which I'm using for my homepage. Those are some of the, let's say, prominent packages of Julia, and you see this focus — maybe not on generic, general machine learning, but on machine learning integrated with classical scientific computing. I haven't tried that myself but hopefully will find time to try it out. So we started Julia activities around 2018–2019. The first Julia activity, by the way, was the habilitation talk of Alexander Lincoln, who talked about Julia at the time and also triggered me quite a bit to go in this direction, which I did once Julia 1.0 was available in 2018. I already mentioned the MMS summer school in 2019, and we also provide space for talks of the Berlin Julia user group. That's another activity, and then we are doing package development and maintenance focused on PDE tools for finite element and finite volume research codes. We try to transfer parts of the knowledge accumulated in this respect to Julia, because it is more easily accessible there. So there are the PDE solver packages like the one I have demonstrated; on top of this there is a charge transport solver. Christoph Merdan gave his talk at this workshop, and he essentially created his new results using his code GradientRobustMultiPhysics. These codes share some common infrastructure, which includes some grid handling packages. We also maintain two binary packages for the Triangle and TetGen mesh generators and their corresponding Julia interfaces, and there is a sparse matrix package which we are using; these are all open source, registered Julia packages in the general registry. And I also have a visualization hobby, which means that I created PlutoVista.jl, which provides the possibility to have hardware-accelerated visualization in Pluto notebooks; the graphics of my Brusselator examples have been generated with it. It uses, on the other hand, JavaScript libraries to do the real rendering, and Pluto has a facility to efficiently transfer data from Julia to JavaScript which it can essentially use.
And there's a particular feature of Pluto by the way. There's another so many people here know the Python world, but the JavaScript world is maybe even larger than the Python world with respect to whatever packages you can have and JavaScript is also incredibly fast, because it's also just in time to have a particular plot Lee and be the cat or just packages for visualization which also then can be integrated in this Pluto. No, it's so. Okay, so that's something that we are doing. And besides, yeah, hopefully being. And then we are just working in various projects using these codes, not only for doing these codes but they are becoming now let's say working courses for for several, let's say simulation projects and semiconductor device simulation and actually simulation from Christian has a new project with the Saba Institute on modeling of catalytic reactors and we are trying to do this and activities now in Julia. So that means also we have some, let's say collected some experience with Julia and we also could provide more of this future MMS events. So I have a short summary for this talk. I would say Julia provides us interesting new possibilities. We have generic programming without syntax overload that we have normally would have in C++ multiple dispatch instead of object orientation, informal interfaces supporting good possibilities of algorithm coupling coupling particular easy access to automatic differentiation. We can achieve high performance. If you do everything right with metal level syntax. We have an emerging package ecosystem and then great community which is focused on scientific competing data analysis to less open source and also gives great support structural support for the fair research program. So, thank you. Thanks a lot, as we are, you know, shall we have one quick question or shall we move into the coffee break one. Okay, just one quick question to you guys one in the back. So, inclusion of external libraries was an example of the new scientific library. As we know from lapargan so on they the performance of those depends strongly on the optimization for the underlying architecture. So is it as a possible to use pre compiled libraries, which are not within the Julia ecosystem. Okay, and the second one. Since the focus is on scientific computing. Can you say something about the inclusion into the high performance computing ecosystem. Yeah, yeah, yeah, okay, so yeah, I shortly said this okay so Julia particles supports multi threading in its own way. Yeah, it supports as I said GPU and also there are two ways to do distributed computing there's an API package that just can can do parallelization just using API as an interface to MPI, and there's also its own way to do distributed computing. So maybe this short answer and this is technical. Okay, thanks a lot. I would suggest to resolve any open questions over the coffee break so let's move into the coffee break conclude this morning session will reconvene at 11am.
The Julia programming language is gaining increasing attraction in scientific computing, data analysis, machine learning and other fields. Some of its outstanding features are: - open source license - easy-to-learn syntax - powerful abstractions and generic programming features - high performance potential due to just-in-time compiling - extensibility and reproducibility via sophisticated package management - portability among operating systems - re-use of existing codebase via interfaces to other languages The talk provides an example-based introduction into Julia and some of its basic concepts and discusses examples where recent research at WIAS was able to take advantage of Julia.
10.5446/57307 (DOI)
So, we have an afternoon packed with presentations about OGC, OGC APIs in particular. Can you hear me? Could you write in the chat window if you can hear me? Yeah, I can hear you. Okay, okay, good. So, as I said, we have lots of interesting presentations today about OGC standards and in particular about OGC APIs. And we are starting off with a presentation about OGC APIs by the Director of Product Management, Standards, at OGC, Dr. Gobe Hobona. He has been one of the driving forces for the development and adoption of OGC APIs, and many of you may know him from the OGC API code sprints or from the standards working groups. So, without further ado, I give the floor to you, Gobe. Thank you. Okay. All right. Thank you, Joanna. Thanks for that very kind introduction. Hi, everyone. My name is Gobe Hobona. I work for the Open Geospatial Consortium, and for the next few minutes I'm going to talk to you about OGC APIs with a presentation titled "OGC APIs: Background, Current State, What's Next?" The presentation has been prepared in conjunction with my colleague, Athina Trakas. So, first, an introduction to the OGC. If you're not familiar with it, OGC is a global consortium representing over 500 industry, government, research and academic member organizations. We serve as a hub for thought leadership and innovation for all things related to location. We offer a neutral and trusted forum for tackling interoperability issues within and across communities, and we are seen as the go-to consensus-based standards organization for location information. Our actively engaged community stretches across commercial businesses, governmental organizations and academia, providing unique benefits to each stakeholder community. Members include a variety of organizations that work together to create and use standards through OGC's many working groups and apply them across a variety of domains, ranging from research and development to full-scale operations. Now, if you're wondering what an OGC standard is: an OGC standard is a document established through consensus and approved by the OGC membership that provides rules and guidelines aimed at optimizing the degree of interoperability within a given context. OGC develops standards and best practices, as well as other documents, through a consensus-based process involving the OGC membership, so more than 500 member organizations. To develop those standards, we take into consideration community requirements, market trends, technology trends, as well as other key aspects, and design the specifications to address the interoperability challenges for which those standards have been developed. What you can see on this slide is a photo taken at an OGC member meeting; this photo was taken back in March 2018. The meetings are held quarterly and we typically have participation from most of our active working groups as well as other partners. So, why are we developing specifications for web APIs — why OGC APIs, and why now? Web APIs are a very effective and very popular enabler of rapid development; it's difficult to think of a solution in current times that does not offer a web API to enable interoperability with other products. Now, what we've seen with the proliferation of web APIs is that the variation in how those APIs handle location information can, to a degree, degrade interoperability.
And the OGC has responded to that challenge by initiating a program of work to develop standards for web APIs. These standards are collectively known as OGC API standards and they are designed to enhance geospatial interoperability between web APIs. We take into consideration a number of principles as we develop these standards. Some of those principles are described in the Spatial Data on the Web Best Practices, which were developed collaboratively between the OGC and the World Wide Web Consortium, also known as the W3C. The development of OGC API standards also applies the principle of making use of the OpenAPI specification: we leverage the OpenAPI specification wherever possible, which makes it possible for web APIs that implement OGC API standards to be described using the rules of the OpenAPI specification — that is, OpenAPI definition documents can be used to describe implementations of OGC API standards. Another key principle is that of implementer friendliness: we focused on the developer experience and have endeavoured to ensure that OGC API standards are as usable and as developer friendly as possible. We are designing the specifications to be modular, so the requirements are organised into building blocks that make it possible to access spatial data and to integrate multiple building blocks into single solutions. And all of the development of these OGC API standards has been done in the open: the specifications are developed in public GitHub repositories, encouraging both members and non-members to participate in the standards development process. So what are those OGC API standards? Starting from the top left and working our way down, we've got OGC API Discrete Global Grid Systems, which specifies an interface for accessing data and other resources that are organised in discrete global grid systems. We also have OGC API Records, which specifies an interface for accessing catalogs of metadata. OGC API Maps, which specifies an interface for accessing electronically rendered maps and charts. OGC API Styles, which specifies an interface for accessing styles, symbology and similar portrayal information. OGC API Tiles, which specifies an interface for accessing tiled resources such as map tiles and tiled feature data, also known as vector tiles. OGC API Common, which specifies the foundation requirements on which other OGC API specifications are built. OGC API Routes, which specifies an interface for accessing routing information such as is used for transportation planning. OGC API Environmental Data Retrieval, which specifies an interface for accessing environmental data resources and other forms of spatiotemporal data such as trajectories and corridors. OGC API Features, which specifies an interface for accessing vector feature data. OGC API Processes, which specifies an interface for accessing implementations of algorithms that handle or produce geospatial data. OGC API Coverages, which specifies an interface for accessing coverage data such as satellite imagery and some forms of meteorological data as well. Now, out of the specifications that you see on the slide, only OGC API Features, OGC API Processes and OGC API Environmental Data Retrieval have been approved. With OGC API Features, both part one and part two have been approved: part one specifies the core requirements and part two specifies an extension for handling any type of coordinate reference system.
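To illustrate what accessing vector feature data through OGC API Features looks like from a client, a minimal sketch in Julia; the endpoint URL and collection name are placeholders rather than a real service from the talk, HTTP.jl and JSON3.jl are assumed to be installed, and the `f=json` parameter is a common but implementation-specific way of forcing a JSON response:

```julia
using HTTP, JSON3

# Hypothetical OGC API Features endpoint and collection id.
base = "https://example.org/ogcapi"
coll = "buildings"

# The landing page and the collection list are plain JSON resources.
landing = JSON3.read(String(HTTP.get("$base/?f=json").body))

# Request a few features as GeoJSON, filtered by a bounding box (limit and bbox
# are core OGC API Features query parameters).
url   = "$base/collections/$coll/items?limit=5&bbox=5.9,47.2,10.5,55.1&f=json"
items = JSON3.read(String(HTTP.get(url).body))

println("number of returned features: ", length(items.features))
for f in items.features
    # Not every implementation exposes an id, hence the default value.
    println(get(f, :id, "(no id)"), " => ", f.geometry.type)
end
```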
We'll continue to work on these OGC API standards, so over the next series of months you will see several more OGC API standards announced as completed. As well as publishing the standards documents, we're also developing and making available executable test suites to enable developers to test whether their products are compliant with the standards. What you can see on the slide are examples of two OSGeo projects that have products certified as OGC compliant — in this case pygeoapi and deegree. All products that are certified as OGC compliant are listed on the OGC product database and they receive a compliance badge such as is shown on the slide. Getting a product certified as OGC compliant is a five-step process: the product is tested using the OGC validator, the organization responsible for that product then submits the product for certification, OGC staff review the application, a certification mark is issued, and finally the product is listed on the product database. So it's a very simple five-step process that organizations go through to get a product certified as OGC compliant. And we're seeing significant impact of OGC API standards across the globe. For instance, the International Organization for Standardization, also known as ISO, has published ISO 19168-1, which is based on OGC API Features part one. And the INSPIRE community has published a good practice for download services that is based on OGC API Features. So we're seeing impact of OGC API standards in several different communities across the globe. To encourage and facilitate the implementation of OGC API standards, we've made available a website on which various resources such as links to the specifications, information about code sprints, videos and other information can be accessed: ogcapi.ogc.org — I'd encourage you to visit the website. As I mentioned earlier, a variety of information is available in the GitHub repositories. In those repositories you'll find an issues board on which various discussions take place, including questions, so if you have questions about the specifications feel free to ask the standards working groups through the issues boards. We've also made available example OpenAPI definition documents that illustrate how you can implement the OGC API standards, and you'll find links to those on the ogcapi.ogc.org website. Since the development of these standards began, we've seen a variety of deployment models. One of those models involves the integration of implementations of various OGC API standards into single solutions, such as illustrated on this slide. But we've also seen implementations of the OGC API standards as microservices: some solutions adopt this microservices-oriented architecture, implementing OGC APIs within containers such as Docker, Kubernetes and similar.
And to facilitate the development and prototyping of the OGC API standards, we run various innovation initiatives such as testbeds, pilots, plugfests, research projects, interoperability experiments, sprints and hackathons — involving not just the open source software community but also others from the commercial space. An example of a previous code sprint is the 2021 joint OGC–OSGeo–ASF code sprint, which was held back in February. The code sprint was organized and hosted by OGC, the Open Source Geospatial Foundation and the Apache Software Foundation, and it served to accelerate the implementation of various OGC standards across the developer community. The joint code sprint was sponsored by Ordnance Survey and GeoCat as well as several organizations that support OSGeo. A high-level overview of the architecture implemented in that code sprint is shown on this slide, and you can see quite a number of OSGeo projects — for instance GeoServer, QGIS, pycsw and others — took part in the code sprint, but also quite a number of Apache Software Foundation projects, for instance Fuseki (which is part of Jena), Kafka and ActiveMQ, as well as other open source projects such as ldproxy. So it was great to see all of these projects come together within the same environment for three days and work on implementing various open geospatial standards. In the middle of this slide you also see the web API standards that were implemented: OGC API Features, Maps, Coverages, Processes, Tiles, EDR, Records and Styles. So it was a very large code sprint in terms of the variety of standards that were implemented. We are continuing to run code sprints. The next OGC API virtual code sprint is going to be next month in October, running October 26th to 28th. It will focus on three specifications — OGC API Routes, OGC API Discrete Global Grid Systems and OGC API Common — and will serve to advance the development of those draft specifications. We'll also have the Testbed 17 API Experiments thread participants take part; they will be telling us about some of their experiments and inviting the various code sprint participants to take part in their work. So it's going to be a very exciting event indeed. Then in November we'll run a code sprint focusing on OGC API Features and ISO 19168-1. That code sprint will provide a platform for us to try out various developer resources; it will be led by Dr. Joana Simoes, developer relations lead, so look out for an announcement about the date. If you're wondering which open source projects, which OSGeo projects, have implementations of OGC API standards, we've got a slide here. I suspect that I might have missed some implementations, so apologies if I've missed your project, but you can see that quite a lot of open source projects are implementing OGC API standards, and therefore there's quite a lot of resource that you can have a look at and use as a reference for your own open source projects. So, in summary, OGC API standards are becoming a key requirement for web APIs offering location-referenced information. We're seeing implementations across various open source as well as commercial and proprietary products, and implementations across the globe quite literally. Early impact is being noted across government, private and academic sectors — whether it's the INSPIRE community or other geospatial data policies and spatial data infrastructures in North America, we are seeing quite a lot of programs implement and require OGC API standards. So our advice to organizations is that they should start planning now for how they're going to spatially enable their web APIs through OGC API standards. And for open source projects, we're encouraging you to implement OGC API standards in your software products.
And reach out, let us know if you have any questions about how we can facilitate the implementation of OGC API standards. And that's it. Thank you. If you have any questions, I'm happy to answer them. Thank you very much, Gobe. This is great — you just finished before the scheduled time, which gives us some extra time for questions, and we do have a lot of questions. So maybe I go over them in order. The first one is: what is the difference between OGC APIs and web services like WMS, WFS, SOS, et cetera? Okay. With the previous generation of OGC standards we had OGC web services such as WMS, WFS and others; they implemented an approach that was based on OWS Common, web services common. What we're doing with OGC API standards is designing the standards to implement some of the contemporary, modern web architecture approaches — for instance the use of several IETF standards, the use of content negotiation, the use of the OpenAPI specification. Many of those contemporary approaches are being adopted by OGC API standards, so the interfaces are completely different from those of the classic OGC web service standards. For a time you will continue to see OGC web service standards such as WMS and WFS, but in the meantime we're encouraging software vendors and software developers to implement OGC API standards alongside their implementations of OGC web service standards. Okay, thank you. The other question starts with an apology: sorry if this was addressed later in the talk — is there a true, not 2.5D, OGC API for 3D? Well, we are just about to begin a new activity to develop the OGC API for GeoVolumes, for 3D GeoVolumes. That API will offer access to 3D visualizations, and much of the work has actually been prototyped in a previous OGC innovation project, so you'll find some information about that on the OGC website — look out for that announcement. The standards working group hopefully will be active within the next few weeks; it's going to work on an OGC API specification for accessing 3D visualizations. Okay, thank you. The next question is: is, or will, MapServer also be certified for OGC API Features? Well, I'm not involved in the MapServer project. However, what I can say is that we're currently encouraging all software projects to explore and look to get certified as compliant. Now, of course, that means that somebody has to fund the certification of those projects, so if you're a supporter of MapServer, I'd encourage you to get in contact with the developers of that project and offer to support them or even fund the submission for compliance certification. Thank you. The next question is also about software: are there OGC API software libraries available for Python? Yes, there are quite a number of open source ones. Let me start off by saying pygeoapi is a Python-based software product that supports quite a number of OGC API standards, including both approved ones and others that are still in draft form. pycsw is also implemented using Python and it supports a number of OGC API specifications. And I think there might be some Python-based GDAL wrappers as well. So I probably cannot name them all, but the short answer is yes, there are Python libraries available for implementing OGC API standards. The next question I really love: what do people do during the virtual code sprints?
Test APIs, use the APIs for their own developments? Okay, yes — they do a lot of exciting stuff. What they tend to do is bring their own implementations of the APIs, typically at a prototype stage — alpha or beta — and they use the code sprints as an opportunity to refine and improve those implementations. Even if the implementations are at production stage, they use the code sprints as an opportunity to implement additional capabilities. And it's really a collaborative environment: participants share observations, they'll encounter a bug and notify the maintainer of the software product, and some of them will even volunteer to help fix the bugs. So it's always an exciting experience. We also tend to have a final demonstration on day three of the code sprint, where the participants get to showcase their implementations, and that's always a good opportunity for sponsors and government agencies, for instance, to see what's possible using those OGC API standards and those software libraries. So it's a really exciting event to take part in. Okay, Gobe, I will ask you to make an extra effort to answer the next questions very quickly, because we don't have a lot of time, but I would love to hear your answers. The first one is: are OGC APIs difficult to implement in your own microservices? I'm not sure there's a short answer to that. Well, I will obviously say they're very simple to implement, but let me give you some statistics. I have seen, in a code sprint, one of the participants take an OpenAPI definition document for OGC API Processes and within two hours he had a working implementation. I have seen that happen, and I've seen several other examples of, for instance, OGC API Features implemented within a few hours. So they are very simple to implement, very capable and very easy to implement. Okay, thank you. And the last question: what do you think will be the biggest driver of mass adoption for the OGC APIs? Yeah, I believe it will be a case of getting the wider community to serve datasets. The more datasets — authoritative and crowdsourced — we actually get out into the public, the better we'll be able to get more end users and stakeholders to acknowledge and recognize the importance of this moment of having OGC APIs enter the marketplace. So I think it will be getting all those datasets out there through implementations of OGC APIs.
The OGC Application Programming Interface (API) suite of standards is a family of Web APIs that have been created as extensible specifications designed as modular building blocks that enable access to spatial data that can be used in data APIs. This presentation provides an insight into OGC API activities, developments and an outlook on what to expect in the last quarter of the year and 2022. And it will give an update on the "hot" topics around OGC APIs and OGC open standards, the collaboration of OSGeo and OGC and how we can further develop open standards together. Authors and Affiliations – Trakas, Athina (1) Hobona, Gobe (1) (1) Open Geospatial Consortium Track – Software Topic – Standards, interoperability, SDIs Level – 2 - Basic. General basic knowledge is required. Language of the Presentation – English
10.5446/57258 (DOI)
Okay, Bartuck, you're up next. We have about three minutes, so people will start joining. We're on a delay if you're watching it using the app. I think the broadcast is about 15 seconds afterwards. What means that when I'm talking something, after 15 minutes, you will hear it? 15 seconds, sorry. You can second. So I hear you, you and I hear each other in real time, but when it goes to the room that the audience is, there's a 15 seconds delay. So I will add your slides again. And in about three minutes, I'll introduce you and we'll be off and running. Okay, thanks. Okay. Have you been watching some of the other sessions this week? Yes, yesterday. I observed several. But you know, after noon, I had my usual activities. I needed to leave the meeting, but yesterday I was looking for something that was interesting for me. And yes. Yeah, same as you, I've been in and out, but the things that I've seen have all been really good. Have you been presenting now or you are today? It was yesterday. I presented yesterday at my time. So sorry, I didn't watch. No problem. It'll all be online, I think in about one month. So if you miss anything, you can always circle back. So we'll start. I'm going to start at 7.31. We'll give people one extra minute to join the room. Okay. So my time is 1pm 30. Yeah. That makes sense. Okay. Okay. I think we're going to begin now. Welcome to the second session in Puerto Aguazu. I'd like to introduce Bartek Burkut. He's going to talk about dynamic content on the fly. Bartek is currently an engineer at Motorola Solution Systems in Poland. He's a graduate of the University of Science and Technology from Krakow University. He's been working in GIS software development since 2005, has also been doing GIS consultancy. And as we all are, he's an open source enthusiast and has always worked at companies and projects that are supported by the Phosphor G or open source GIS ecosystem. His hobbies include buses and trams and live location in two of the large Polish cities, Krakow and Warsaw. And this is his second trip to Phosphor G. He was in Bonn in 2016. And please welcome him in giving his first Phosphor G presentation. Thank you, Michael. So, hello, everyone. So my name is Bartek Burkut. And today I'm going to show you the topic about how to create dynamic content of the map in the web mapping service. In today's agenda, I will explain how we in usual way create a map using the static data sources. Then I will explain what does mean to create a map in the dynamic way. Then I will show a simple implementation of the dynamic map using SQL query layer. After that, I will show you how to create dynamic layers using dynamic maps using maps server, map script library. In the end, there will be time for questions and answers. Okay, so in my presentation, I will talk about developing a client server web mapping system. It means we have a client which will ask a map and the server which will create a map and respond to the client. This is in the static option of this client server architecture. We have a server where we access its resources. The server has data files, has databases, and the data which will be inside the map are existing in time when we are asking for the map. So like usual WMS, we are asking about the map and the map is existing on the server and the responsibility for the server is to create, render a map and to cut a piece of the data from the data, render a map and deliver to the client. Like usual WMS server. 
Dynamic data, unlike static data, does not exist at the time when the server wants to produce a map. Sometimes we have more sophisticated requirements: we need to create a map whose content does not exist at all yet. We need to calculate the content somehow, to make it depend on some factors, some variables, some other conditions. We need, for example, to calculate something and then return the resulting map to the client. Here I will show you a few simple examples. The first is the usual situation where we are looking for the fastest road between two points. The route does not exist at the time when we ask for the map. The second example is the current weather situation. If we have sensors in the field which detect the weather conditions and we need to see the current situation, then at the time of producing the map we access the sensors in the field, take the data, prepare a map and return it to the client. The next interesting topic is a flood map. If we make such a map depend on, for example, the height of the water in the rivers, then we can pass a custom parameter to the service, calculate which regions will be under water and send such a map back to the client. The last example: we pass a point in the field to the service and calculate the areas which are accessible within 5, 10, 15, 20 and so on minutes, taking the current traffic into account; these are the so-called catchment areas. The map does not exist at the time of the request; we take the position and the minutes and we calculate such areas. Okay, so I am now going to show you an example of this paradigm. Let's imagine we have a table on the server side which contains points; there is a geometry column inside this table. And we produce a map using a simple SQL statement. This SQL statement will produce new content, because we build a buffer around the geometries of this table and we make the radius of this buffer depend on the B variable. The B variable will be passed from the client in the request. Depending on what the client sends, we will produce a different map, and here we see how the map changes if we pass different values of B. This was a simple illustration of what dynamic content means. Now I am going to show you how to implement a more sophisticated case. The previous example had content that depended on a single value, but here we want to make the map depend on different conditions and calculations, and we want to change the layers, change the content of the layers. We need to calculate something and render the output. I think one of the good ways to implement such a dynamic map is to use the MapScript library. MapScript is part of the MapServer project, which I think is very well known in the open source world. It has two modes: the CGI mode, which works like a normal WMS service, and the MapScript library, which can be linked into your favorite language and your favorite environment. Then you have full control over what you render and what the user will see on the map. The first steps with MapScript can be a little difficult for programmers, and that's why I wanted to explain the idea. Once you have the idea, I think you will see an open window for your needs.
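(Editor's aside, before the MapScript part: one way to let the client drive the buffer radius in the example above is MapServer's runtime substitution, where a request parameter is substituted into the layer's DATA SQL. The sketch below shows only the client side plus the mapfile fragment as a comment; the endpoint, parameter name, table and VALIDATION pattern are assumptions, so check the MapServer runtime substitution documentation for your version.)

```python
# Client side of the dynamic-buffer example: the radius "b" is passed as an
# extra WMS parameter and substituted into the layer's SQL on the server.
#
# Hypothetical mapfile fragment (server side), using MapServer runtime substitution:
#
#   LAYER
#     NAME "buffered_points"
#     CONNECTIONTYPE POSTGIS
#     CONNECTION "host=localhost dbname=gis user=gis"
#     DATA "geom FROM (SELECT id, ST_Buffer(geom, %b%) AS geom FROM points) AS sub USING UNIQUE id USING srid=4326"
#     VALIDATION
#       "b"         "^[0-9]{1,4}$"   # only allow plain integers
#       "default_b" "10"
#     END
#   END
import requests

WMS_URL = "https://example.com/cgi-bin/mapserv"   # hypothetical endpoint

def get_buffered_map(radius_m: int) -> bytes:
    """Request the same map with a different buffer radius each time."""
    params = {
        "map": "/maps/dynamic.map",
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": "buffered_points", "STYLES": "",
        "SRS": "EPSG:4326", "BBOX": "19.80,50.00,20.10,50.15",
        "WIDTH": "800", "HEIGHT": "600", "FORMAT": "image/png",
        "b": str(radius_m),        # picked up by the %b% substitution above
    }
    r = requests.get(WMS_URL, params=params, timeout=30)
    r.raise_for_status()
    return r.content

for radius in (10, 50, 200):
    with open(f"buffer_{radius}m.png", "wb") as f:
        f.write(get_buffered_map(radius))
```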
So map server needs a map file, which is a text file. And this text file, we are defining what will be in the map. So this map file has a structure of the object. It means we have the root object, which is map. Inside the root object, we have the properties of the map, like format of the map, which will really create the extent which we will present resolution in the pixels. Then we are defining the content of the layer in structure of lists of layers. We have the first layer, which we configure. We are defining this, in this case, this will be the post GIS layer. We define the connection string, and then we are defining content. And here will be the geometry column from building's layer. Then we are styling this layer, and this will reflect such structure of the map file, which will reflect this image. Then second type of layer will be raster, where we define the data as the URL to the geotiff. This will be the usual photo map, which you see on the picture. The next layer will be the vector layer, which will be configured as the path to the shape file. And we will see in the image that we have the road's axis and the names of the roads. The last interesting type, which I wanted to show you, what is I think power also of the map script, is the type WMS. What does this mean? The last layer will be configured as the URL to the external service. The external service will provide us the content. Let's imagine we have a NASA server, which provides us the current weather forecast conditions, and we are taking clouds from it. Then the layer will be visualized like here. If we are passing such map file to map server in the CGI mode, the map server will create the map like we see on the picture. Till now, everything is static, almost static. We created the map from the resources which were on the server, but now we wanted to take control over the content and produce and use map script to manipulate the map. Now I will show you how to use map script library. After we are linking the library to our program, we can use the API, the methods which were provided by this map script library. The idea is that we are creating an object in any language I provided in the previous slides. This object will be a programmatic object, which will reflect the structure exactly from the map file. If we created such an object, then we can run the method draw, for example, to instruct map server, map script to draw the image and send it back to the client. This is the overall idea. In the middle of our application, we will manipulate the map. How we manipulate it? Map script is delivering us a lot of methods which we can run over the map object. For example, we can change the rotation, we can change the piece of the map which we are presenting. We can insert, remove layers from it. We can change the output format, change the size, the resolution or zoom to a particular point. There are a lot of methods. Those are a small piece of example, but there are a lot, if you see in the documentation, we have a lot of possibilities to manipulate the map itself. We can also manipulate the layers and its content. We can create new layers using the constructors. For example, if we have SQL layer, we have control about how the SQL query string will look like. We can change the definition, what will be produced in the database as a source of this layer. We can change also or manipulate the connection strings, for example. We can change the data sources regarding other parameters. 
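(Editor's aside, before the list of manipulations continues: the "create a map object, then call draw" idea just described looks roughly like this in the Python MapScript bindings. A minimal sketch, assuming python-mapscript is installed; the mapfile path and extent are placeholders.)

```python
# Render a map from an existing mapfile, the programmatic equivalent of a
# plain WMS/CGI request. Requires the python-mapscript bindings.
import mapscript

# Load the mapfile (MAP -> LAYER -> CLASS -> STYLE objects) into a map object.
# The path is a placeholder.
mapobj = mapscript.mapObj("/maps/demo.map")

# Optionally override a few of the MAP-level properties before drawing.
mapobj.setSize(1024, 768)                          # output resolution in pixels
mapobj.setExtent(19.80, 50.00, 20.10, 50.15)       # area of interest (minx, miny, maxx, maxy)

# Ask the rendering engine to draw every enabled layer and save the result.
image = mapobj.draw()
image.save("static_render.png")
```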
We can change, for example, the scale denominator when such layer is visible. We can change the opacity to make other layers visible. And even we can manipulate, for example, add a feature and manipulate the content of the map, a content of such layer. We can, a lot of methods, methods which MAPscript provides, which allows us to manipulate what the user will see on the map. Let's see a small example how the MAPscript could be used. I will show you a piece of code and explain how we can use the methods to manipulate, to have control over the map. Let's imagine we have a fire event and the 911 service received the fire event in the mission of a new 14 San Diego. We are starting using the MAP server by MAPscript by creating a new map object and we start manipulating over it. So this map object reflects this image, then we want to know where is mission of a new 14. We are using the external geocoding provider and we are geocoding the address to know where is fire point. If we have a fire point, then we create a new layer. We are adding this fire point as a feature to this layer and we let MAPscript to render this point as the red marker. So we see on the map. Let's read this code as the pseudocode, only to have the overall idea what is the MAPscript. This will for sure not work, this is only to explain it. Then we want to know which building is burning, which building is involving. So we want to know what building is it and how many floors it has, how many elevators and so on. Then we are creating a new building layer. We are adding a new layer and changing the data of this layer as the SQL query. We are adding the building which where the point is inside the geometry. We instruct MAPscript to color it in the red color and we render this layer on the image. Then we want for example to know which buildings should be evacuated because they are threatened to be burned as well. We have very specialized condition because we need to evacuate the more buildings, the stronger wind is blowing, the higher speed the wind has. So we are accessing the weather forecast provider to get a wind from the city and to interpolate the current wind from the fire point. Then we are taking the speed and calculating the buffer which we need to take, the radius which we need to take to create a buffer around the building. The next step we are creating a geometry which will be actually the buffer around the building and then we will be able to take this geometry and create a new layer and select those layers which intersect this buffer. If we instruct MAPscript to render it, to color it in the yellow we see following structure, following buildings to be evacuated. Then we want to add the firefighters, who wants to know where are the hydrants and the current, the fresh information about the position of hydrants is stored inside the external service. The city infrastructure of San Diego provides a WMS service where we can take the hydrants from. And we are adding a new layer dynamically on the fly and we are presenting the hydrants as a layer in the map. The last example is this red line here, what it could be. This will be the road, the fastest road to access from the fire station to the fire point. We are creating a root geometry, creating new layer and adding this root as a feature to new created layer. In the end we are rendering an image and sending to particular services. 
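(Editor's aside: here is a rough Python MapScript sketch of the fire-response walkthrough above: create a layer at request time, push a geocoded point into it as an inline feature, and adjust another layer's SQL before drawing. Treat it as the talk's pseudo-code made runnable under assumptions: the mapfile, symbol name, table names and coordinates are invented for illustration, and a real application would add error handling.)

```python
# Dynamic content with MapScript: layers and features are created while the
# request is being answered, then rendered into the outgoing image.
import mapscript

mapobj = mapscript.mapObj("/maps/incident.map")       # hypothetical base mapfile

# 1. A layer created on the fly for the geocoded fire point.
fire_layer = mapscript.layerObj(mapobj)
fire_layer.name = "fire_point"
fire_layer.type = mapscript.MS_LAYER_POINT
fire_layer.status = mapscript.MS_ON
fire_layer.connectiontype = mapscript.MS_INLINE        # features held in memory

cls = mapscript.classObj(fire_layer)
style = mapscript.styleObj(cls)
style.color = mapscript.colorObj(255, 0, 0)            # red marker
style.size = 12
style.setSymbolByName(mapobj, "circle")                # assumes a "circle" SYMBOL exists

# Push the geocoded position in as an inline feature.
lon, lat = -117.16, 32.72                              # pretend geocoder result
line = mapscript.lineObj()
line.add(mapscript.pointObj(lon, lat))
shape = mapscript.shapeObj(mapscript.MS_SHAPE_POINT)
shape.add(line)
fire_layer.addFeature(shape)

# 2. Rewrite the SQL of an existing PostGIS layer so it only shows the
#    buildings within the evacuation buffer (radius computed elsewhere).
radius_m = 250
buildings = mapobj.getLayerByName("threatened_buildings")   # hypothetical layer
buildings.data = (
    "geom FROM (SELECT id, geom FROM buildings "
    f"WHERE ST_DWithin(geom::geography, ST_MakePoint({lon},{lat})::geography, {radius_m})"
    ") AS sub USING UNIQUE id USING srid=4326"
)
buildings.status = mapscript.MS_ON

# 3. Render everything into one image and hand it back to the client.
mapobj.setExtent(lon - 0.01, lat - 0.01, lon + 0.01, lat + 0.01)
mapobj.draw().save("incident_map.png")
```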
So if you are looking for a solution to create a map server or map service, which content will be produced on the fly, you need to manipulate the content somehow and you are looking for a solution for it. I think the MAPscript would be a good option to you. I found several times that such condition is very frequently, such requirements frequently occurs in the teams, in the development teams and they are looking for something. When they are starting to use MAPscript they are frustrated because they could not configure some parameter on not knowing what is the idea behind it and so on. I hope that this presentation gave you an overview and the concept of MAPscript library and I think you have imagination how to use it. Thank you very much. There is a time for questions and answers. Very nice job Bartek. As someone who hasn't coded in a really long time I appreciated how you sort of mixed the real code and the pseudo code to show the logic behind it. Very clear presentation. So thank you for that. I am getting some love on the application. Applause. Well deserved. I am going to hop over to questions. I didn't see any earlier. If anyone has any questions for Bartek please add them in the application question area and I will pass them on. We will wait about two minutes and then we will be moving on to our next presentation. There has been a couple of requests. If you have a link to your slides if you can put them in the chat of the presentation. You can join this. If you open the application that lets you view presentations you can go to our room and there will be a chat there and as appropriate you can put your link into that. People can grab it. That is the only question I see right now. So we will wait one more minute. What is chat room? It is just in the side bar of the, let's see. I think this is in the link in the presentation. I don't know what is going to happen here. I will try to share my screen that is showing. If you go from here in Guazu room. You can just mute that. You can just mute that. Thank you everyone.
This presentation will introduce a high level idea of creation of a web service that returns the map content generated "on the fly". It will show how to implement any custom logic, analysis, data calculation, data interoperability and present the output on the map within the request's time span. In the heart of the service you will see Mapserver-Mapscript library. Nowadays building a web-mapping system to show maps from static data like vector and raster files seems to be an obvious task. For more complex solutions we need to make the mapping system “more intelligent”. The content of the map should be calculated, processed, transformed, fetched, created “on the fly” depending on some specific logic. To implement it, we need the proper architecture, technology and solution. Not each way is optimal, fast, smart and easy enough. Sometimes technology exploration is time consuming but the solution could be very simple. This presentation will give you some idea of creating a web service which returns the dynamic map content using Mapserver-Mapscript library and Geoserver. Authors and Affiliations – Bartlomiej Burkot Requirements for the Attendees – To understand the web mapping system structure: Understand what is WMS, basics of programming, networking. Track – Software Topic – Software/Project development Level – 2 - Basic. General basic knowledge is required.
10.5446/57259 (DOI)
So here we are at the next talk in this session, about real-world deployments of mainly OSGeo software, and here we have Andrea Aime. Some of you have seen his previous talk, and now he will talk about some technologies that may be more familiar to you. He will be talking about creating maps in GeoServer using CSS and SLD, and I will still introduce him, as you may not have been at the previous talk. So Andrea Aime works at the GeoSolutions Group. He is an open source enthusiast with strong experience in Java development and GIS, and his interests range from high performance software, huge data volume management, software testing and quality, to spatial data analysis, algorithms and map rendering. He is a full-time open source developer on GeoTools and GeoServer, and you may have watched his presentation yesterday with Ian Turton on how he actually manages his time as an open source developer. If you have not seen it, I recommend watching it; you will be a different person as an open source developer. Andrea received his OSGeo Sol Katz award in 2017. And you may share your screen. If you share your screen it always starts backstage, so that is usually not the problem with this tool. So there you go, I will add your screen and I will go backstage. And okay, the floor is yours, Andrea. Thank you. So in this presentation I am going to talk about making maps with GeoServer using SLD and CSS. But before I do that, let me do a quick recap of my company, GeoSolutions. GeoSolutions offers services around a number of open source projects such as GeoServer, MapStore, GeoNode and GeoNetwork. We have offices in Italy and in the United States and customers worldwide. We offer support services, deployment support, customized solutions, training, bug fixing and whatnot. We are strong believers in open source and open standards, and as such we are involved in both OSGeo and OGC, looking after standards which are critical to GEOINT. Now let's start with a quick tour of the styling languages. Plural, because GeoServer supports multiple styling languages. In the beginning we supported only SLD 1.0 core. It was the one and only language, and to an extent that's still true today, in that the representation of a style in memory, the one that is used by the rendering engine, is still an object model which is strongly inspired by SLD 1.0. Then we added support for SLD 1.1, YSLD, MB styles and GeoCSS, and all of them end up translating their syntax into that common object model, which can then be dumped again into SLD if we want to. They all share more or less the same concepts. It's not surprising, since they share the same object model. So we have layers, we have rules. The rules have filters or selectors that decide what should be painted, scale dependencies deciding whether or not we see something at a certain zoom level, and the symbolizers that apply a particular type of depiction for points, lines, polygons and text. SLD 1.0 and 1.1 are the only OGC styling standards. They are XML based, verbose, hard to hand edit; not OGC's fault, they were designed for machine-to-machine communication and not for humans to edit. Still, we have an editor for them in GeoServer with a bit of auto-complete, and the SLD can be generated by multiple external tools, with some need for tweaking when you import them into GeoServer. This is an example of one style that catches all the alpine huts, shows them only at scale denominators below 100,000, and uses a particular PNG icon to display the point.
And if you think this is verbose, consider that IOMated boilerplate at the beginning and at the end. YSLD is SLD rewritten in YAML syntax, filtering has been rewritten into CQL, which is way more compact, it's like SQL where closes, can define reusable variables and block and its verbose it is between SLD and CSS. N has an ocean of zoom levels if needed and this is an example of the very same style using a full definition in YSLD. As you can see, this is a full style and I have not omitted any boilerplate so it's definitely more compact but still a number of lines to type. Then comes GeoCSS. GeoCSS is a derivation of the CSS language for the web with properties and functionality geared toward map making. So map filtering is still based on SQL but we also have rule nesting and rule cascading which help to keep the styling more compact. And our original style ends up being written in three lines. Type equals our point hat is the SQL filter, scale denominator less than 100k is the scale dependency and we refer to the point symbol by a mark property. However, the CSS cascading sometimes confuses people and we added a way to turn it off. It can be very powerful, it can make for very compact styles but people sometimes do not understand well how the rules combine and override each other due to the cascading machinery. We also have MB styles, aka mapboxgl which is JSON based and designed for GUI editing. It's geared only for web marketer usage. The symbols are all coming from a sprite that is a single large file with all the images inside that remain a center for video games designs. And unlike the others, it doesn't have any styling extension. The nice thing about this one is that it can be applied both on the client side and the server side so you can set up a system like using vector tiles with client side rendering for clients that can do that and fall back on rendering PNGs for clients that don't have vector tiles capability. And this is an example of the same style with a little boilerplate omitted. We can see the sprite, the sprite is this image. Imagine that this is one PNG and then we have an index telling us, oh, the hole is at this position in the PNG and the fish is at that position and so on. And so we basically refer to the sprite and then say, oh, yeah, please take the image alpine hat from the sprite. The filter is written in post-fit notation which wraps me the wrong way. I always find post-fit notation or Polish reverse notation difficult to read, but whatever. That's what it is. And yeah, it's meant to be mostly edited by GUIs not by hand. Now we are going to explore a few styling concepts comparing two of the four languages, SOD and CSS. So first off, scale dependencies. Scale dependencies are kind of the easiest filtering subsystem. There are two ways to make an app scale dependent. One is to start omitting details as you zoom out. So for example, in this example here, we have buildings that disappear when I zoom out and they appear when I zoom in. Or we can decide to symbolize stuff in a different way depending on the scale. So for example, changing the thickness as the zoom goes in. How do we express scale dependencies in SLD with the mean and max scale denominator properties? In CSS, we have this sort of filter that uses that SSD scale denominator property. And we compare it with a number to decide at which scales we want to display the data. It's also nice that we can use suffixes like K and M to make large numbers more compact and easier to spot. 
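(Editor's aside, as an illustration of how compact GeoCSS is in practice: the sketch below uploads a tiny scale-dependent style to GeoServer through its REST API using Python. The server URL, credentials, attribute name and style name are placeholders, the GeoCSS body follows the talk's description rather than a tested style, and the GeoCSS media type is an assumption that may vary by GeoServer version and requires the CSS extension; an SLD body would be sent as application/vnd.ogc.sld+xml instead.)

```python
# Create a small GeoCSS style on a GeoServer instance via the REST API.
# Endpoint, credentials, attribute and style name are hypothetical placeholders.
import requests

GEOSERVER = "http://localhost:8080/geoserver"
AUTH = ("admin", "geoserver")          # placeholder credentials
STYLE_NAME = "alpine_huts"

# A three-line GeoCSS rule in the spirit of the talk: filter on an attribute,
# only draw below a 1:100k scale denominator, use a PNG icon as the mark.
geocss = """
[type = 'alpine_hut'] [@sd < 100k] {
  mark: url('icons/alpine_hut.png');
}
"""

r = requests.post(
    f"{GEOSERVER}/rest/styles",
    params={"name": STYLE_NAME},
    data=geocss.encode("utf-8"),
    # Assumed media type for GeoCSS; check the docs of your GeoServer version.
    headers={"Content-Type": "application/vnd.geoserver.geocss+css"},
    auth=AUTH,
    timeout=30,
)
r.raise_for_status()
print("Style created:", r.status_code)
```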
I don't know how many people can say at a glance that this is one million and not a ten million or a hundred thousand. I typically have troubles. I have to go and count the zeros. Okay. One way to make things scale dependence is to use a real-world unit of measures. So say that the road is five meters on the ground, thick. So in CSS, we just say, well, the struck width is five M, five meters, so we could say five FT, five feet. In case of SLD, we have this unit of measure property with a long URI that eventually means meters. Another way is to categorize based on the current scale. If the scale dependency is not linear, in that case, we cannot use on the ground units, but we can say something like, okay, the struck width is a categorization of the scale denominator and less than 400K scale denominator use two pixels, between 400 and 800 use 1.9 and so on. This is sort of a table, which allows us to make the width of the stroke scale dependent. This is used a lot in OSM-Bright kind of styles, so to do the normal rendering of OpenStrike map. This is the same table expressed in SLD, using again the categorize function and an environment variable that we call the double message scale denominator. Okay, let's go to point styling then. Point styling can be as easy as pointing to a single image in CSS. In SLD, we already seen it. We have to say point symbolizer, graphic, external, graphic, online resource, and then point to the image, provide the format, and eventually it's a size if you want to. One interesting thing that GeoServer does is to allow SVGs to be used as marks. So we take the content of the SVG as a fillable and strokeable shape, and then we can provide the fill for it and the size that we want in the map, because the SVG per series is scalable as much as we want. We get from this to that icon. It's interesting to see this pseudo selector in CSS that allows us to specify the fill inside the mark. While for SLD, we say, okay, the mark is this and the fill is that, among the other things. Another thing that we can do, this is a more complicated example, is to combine symbols or switch them based on the scale. So in this example, we want, which is taken from OSM again, we wanted to depict fountains. When we zoomed out, there are basically two blue circles, one inside the other. When we zoom in, it's a fountain icon. So in CSS, we say, okay, if the scale denominator is less than 6k, then display the fountain as two superimposed circles with the size of 10.3, and then we use these strange pseudo selectors to say, okay, inside the first mark, the fill is this color, inside the second mark, the fill is that color. And instead, when we switch below 3k, then we are going to use the SVG and fill it blue. Marks have a lot of options, a lot of sources of mark symbology because the system is pluggable. So we allow using TTF fonts, but also we have a dedicated mark names for wind barbs when you are doing meteorology, or you can specify the geometry of the mark by doing some WKT, and you can plug in your own because there's a plugin system if you want to create more types of marks. Let's switch to fill in polygons. Fill in polygons in the easy way, just solid color. In CSS, it's done by using the fill property, and we say light gray. This is the usage of CSS name of colors. We can also use the hex specification just like in SLD, and if a color is not among the many name colors, of course you will go X, but it's, well, not everybody would understand that this is light gray. 
This is easier to pick up. Now we can get more complicated. This is an example of doing cemeteries, and we wanted to paint the cemeteries green, which is this shade of green in hexadecimal, but then we also wanted to characterize them and use different types of overlay symbols, repeated over and over on the map, to qualify them as Christian, Jewish, or generic, and we can do that by doing some SQL filtering using nested rules. In default, we start with the green fill, but below 50K, depending on the religions, we switch to a fill which is a base color plus a repeated symbol, which gets repeated over and over and over. Another way to repeat symbols, which sometimes is surprising, is hatching, so creating crossfields, diagonal lines, fields, and so on. To do them, we actually take a little symbol like the X here, and repeat it over and over and over, giving the appearance of a net of crossing lines. In this case, we use shape times with given sides, and we specify the stroke with a particular color. The structure in SLD is pretty much the same, just longer to type. Let's look at painting lines. This is an example that I picked from our rendering of OpenStreetMap. Administrative borders that show up at different zoom levels depending on their admin level. More important, borders are showing at low zoom levels, and I zoom in, I get more and more detailed borders, and the property is pretty easy, stroke, and then the color, and eventually stroke capacity, stroke width, and the like. We can also do something more complicated like doing dashing, so line, and then a space, and then a line, and then a space. In this case, I went overboard and actually did first a dash array doing the lines, so 10 pixels of red line, and then 14 pixels of space. Then I superimposed a circle and used the dash offset here to synchronize the two dashed lines so that one ends up in the holes of the other. Let's go to labeling. Labeling is, well, it would fit the presentation of its own because it has so many vendor options, because it's really hard to get a good labeling on a map, and not as SLD as very little properties to control that. I'm going to just show you a few examples out of the many properties that we have. One example here is polygon labels, so we say, okay, let's pick the name of the label from the property full name. We do it in arial, 14 points mold. We place it at the center of the polygon with the anchor point. We fill it black, and then we use prioritized labeling to make sure that these labels are more important than the straight ones, for example, so that the straight ones have to move away and the polygon ones stay in their centroid. We use autovrap, which is a vendor option to make the text go on the next line if it is too long, and goodness of it to make sure that the text is at least 90% inside the polygon that it's labeling. What I think that we can do is to turn labels into obstacles, sorry, point symbols into labels, obstacles for labels. Let's start from this map. Some of the labels are difficult to see because they overlap too much with the point symbology below. We can use mark label obstacle property in CSS and unequivalent property in SLD to say, oh, yeah, but this mark is an obstacle for the labels, and so the labels will not overlap. We get less labels, but more label. When it comes to road labeling, road labeling is always a bit of a challenge. We specify the usual label from a property, fonts, and so on, and then we add some vendor options to make the map better. 
That is, label follow line to make the labels follow the line, eventually curving along the line if necessary. Repeating them if the line is too long, grouping them because most of the roads here are actually split at intersections, so I would get one tiny label for each and every bit of it, but with label group, we say with LG server, no, please take all the segments that have shared the same label, fuse them together, and then label the result to get a better visual result. Let's move to raster styling. Raster styling is all about either choosing the bands or going from numbers in the raster to colors on the map. This is an example of a color map that goes from values in the data to colors on the map. We also have a shaded relief capability now. It's not shown in the CSS, but it can generate an output like this. It's still a bit experimental at some zoom levels. It doesn't look completely good, but it's a good start and we are looking for people interested in making it better either by coding and sharing the code or funding the effort. Another thing that we can do in GeoServer, besides just selecting the band, is doing contrast enhancement, and we stole a page from QGIS here. Normally in SLD, we can just say, oh yeah, normalize as the contrast enhancement to be used, but we added a bunch of vendor options inside of it to specify what kind of normalization algorithm to use and the eventual parameters to control the normalization algorithm. For example, here we are stretching from minimum to maximum, where the minimum is 50 and the maximum is 800 and anything outside will be clamped onto those two values. Other sort of features, yes, we got more. These are just to give you some quick ideas. GeoServer can do color blending and alpha compositing between two layers with vendor options called composite. Here we are, for example, taking two maps. One of the United States with labels and colors and the other with just thick borders around the states, and we do an alpha compositing to generate this kind of result where we retain the color of the borders just close to the border itself. We can do all sorts of fancy stuff with the compositing as well. Another thing which is pretty, pretty interesting is Z-ordering. Say we have one layer or even multiple layers that need to be sorted when painting to match a given real-world sorting. This is an actual intersection somewhere in Germany, I think. I think it has like 15 levels. It's kind of crazy. You do the sorting by providing this sort by property and eventually grouping the layers if they share the same sort by group. Even if the features are coming from two different layers, they will stack up as if they were just one in the map. This is what's actually happening here because roads and rails are two different layers, but in reality they variously overlap with each other. They are in the same sort by group. We can do geometry transformations. Take a geometry and turn it into something else like move it to do a drop shadow effect or extracted vertices or extracted the start point, the end point. There are a bunch of variations that you can use to drive your mapping. If that's not enough, we have rendering transformations which take the entire raster layer or vector layer that you gave to the SLD and turn it into something else. It's basically a WPS process called on the fly and some of them are actually optimized for on the fly rendering and they are pretty fast. Contouring is one of them. 
Another one which is pretty interesting is the Jiffle map algebra, which you can call from the SLD or from the CSS and say, well, okay, I've got the 13 bands of Sentinel-2 and I want to compute an NDVI on the fly and display the result, and yes, we can do it, embedding the Jiffle transformation right into the style file. One thing that we added in 2.20 is legend and map control for the rules. Sometimes, to get a certain result, you have to stand up a complicated set of rules which, when depicted in the legend, just look ugly. Here I have a simple example. In the typical topp:states map we have one rule which is called boundary, which is not particularly informative; I can see that this is the boundary. So we can say in the SLD, with a vendor option named inclusion set to mapOnly, that this rule is going to be used only for map generation. We could also do the opposite and say legendOnly and create rules which look good in the legend but would not be useful for map making. So you can mix and match them to create an SLD that targets both nice map generation and nice legend generation, while retaining the ability to do bounding box filtering and displaying, for example, only the rules that are visible in the one area that you are displaying. Okay, enough about writing styles by hand. What about point and click editors? There are a few options. One of them is the QGIS SLD export. You can edit a style in QGIS and then go into the layer properties, style, save as SLD. It's going to generate an SLD that you can import into GeoServer. The result is not going to be quite the same, but very close, especially if you are using a simple symbology. For more advanced bits we are probably going to lose some of the rendering, either because SLD does not support the particular type of symbolizer that QGIS has, or because the exporter is not good enough to turn it into an SLD. For example, if you are linking the width of a line to an attribute, SLD can express it but the QGIS exporter cannot do it; it doesn't know how to write it. I don't have a slide for this; Jody Garnett from GeoCat actually gave me a screenshot but I forgot to include it. But I can tell you that there is also a plugin for QGIS called Bridge that simplifies the workflow of taking styles and transferring them to GeoServer. You just give it the URL of a GeoServer and the administrative password, and you say, okay, take this QGIS project and publish it in GeoServer, and it's going to transfer all the styles for you automatically. So it makes the workflow quite a bit quicker. Another option that we have as a community module in GeoServer is GeoStyler, which is a web-based style editor. You can see a screenshot here. It can integrate into the style editing page of GeoServer as a tab, or it can run standalone if you want. If you look for GeoStyler, you can actually play with a demo online straight away. Another option that we have today is the MapStore styler. MapStore is the web client that GeoSolutions maintains as open source. Since a few versions, it has a point-and-click styler that can do quite a bit of the symbology, maybe not everything, but it's getting more and more comprehensive, and it can also do classifications and the like. It's going to use the GeoServer REST API to fetch a style, edit it, and then save it back in GeoServer. And that's actually something that is being used in GeoNode for style editing. And that's all. Ah, thanks very much.
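(Editor's aside: since the talk mentions that MapStore edits styles through the GeoServer REST API, here is a hedged Python sketch of that fetch-edit-save round trip for an SLD style. The URL, credentials, style name and the edit itself are placeholders.)

```python
# Fetch a style body from GeoServer, tweak it, and save it back - the same
# REST round trip a style editor client performs. Names are placeholders.
import requests

GEOSERVER = "http://localhost:8080/geoserver"
AUTH = ("admin", "geoserver")
STYLE = "roads"

# 1. Download the current SLD body of the style.
resp = requests.get(f"{GEOSERVER}/rest/styles/{STYLE}.sld", auth=AUTH, timeout=30)
resp.raise_for_status()
sld = resp.text

# 2. "Edit" it - here a trivial text substitution; a real editor would parse the XML.
sld = sld.replace("#333333", "#FF6600")   # recolor a stroke, purely illustrative

# 3. Upload the modified body back to the same style resource.
resp = requests.put(
    f"{GEOSERVER}/rest/styles/{STYLE}",
    data=sld.encode("utf-8"),
    headers={"Content-Type": "application/vnd.ogc.sld+xml"},
    auth=AUTH,
    timeout=30,
)
resp.raise_for_status()
print("Style updated:", resp.status_code)
```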
So actually we have three minutes for questions that, wow, this is overwhelming, Andrea, all the options possible with styling. Remember starting like 15 years ago, editing SLD in VI, go through all the mistakes and now you can even use UIs. Not too many questions because it was so clear what you were telling us. I see just one question basically, which is, do you have a link to the slides? People are very eager to get started. We will publish a link to the slides in the next few days in a blog post at the GeoSolutions website. Okay, and of course this presentation will be available as a recording in, well, hopefully a few weeks or a few, and then, yeah, people will have all the information. What else is there to talk about? I think it was very clear this, and then, yeah, what would be your personal preference? Because you basically talked about SLD versus GeoCSS, but if you make a map yourself, would you use GeoCSS or does it depend? Always GeoCSS. Unless I am forced to use SLD. But I'm biased because I'm also the current maintainer of the GeoCSS extension. I am not the original creator of it, David Winslow put it together years ago, and when he left the project, I took over the maintenance of the module. But for example, I know that Jody Gardnet, which is another core developer, likes a lot of YSLD. I, for one, cannot stand YAML in general, so it's like torture to use any sort of YAML in front of my eyes. Okay, I have the same with Tommel, which is even, but we get used to it. And well, Andrea, thanks very much again, and we'll go over to the next speaker.
The presentation aims to provide attendees with enough information to master GeoServer styling documents and most of GeoServer extensions to generate appealing, informative, readable maps that can be quickly rendered on screen. Examples will be provided from the OSM data directory GeoSolutions shared with the community. Several topics will be covered, providing examples in CSS and SLD, including: * Mastering common symbolization, filtering, multi-scale styling. * Using GeoServer extensions to build common hatch patterns, line styling beyond the basics, such as cased lines, controlling symbols along a line and the way they repeat. * Leveraging TTF symbol fonts and SVGs to generate good looking point thematic maps. * Using the full power of GeoServer label lay-outing tools to build pleasant, informative maps on both point, polygon and line layers, including adding road plates around labels, leverage the labelling subsystem conflict resolution engine to avoid overlaps in stand alone point symbology. * Dynamically transform data during rendering to get more explicative maps without the need to pre-process a large amount of views. * Generating styles with external tools Authors and Affiliations – Andrea Aime (1) Stefano Bovio (1) (1) GeoSolutions Group (https://www.geosolutionsgroup.com/) Track – Software Topic – Data visualization: spatial analysis, manipulation and visualization Level – 3 - Medium. Advanced knowledge is recommended. Language of the Presentation – English
10.5446/57260 (DOI)
So, welcome, Andrea. You're still muted. It's the most spoken sentence of 2020, I think. You're muted, I guess. So Andrea will give a talk about crunching data in GeoServer with discrete global grid systems. I had to read the abstract several times, and I'm very interested in learning about this. I will shortly introduce you. Andrea works at the GeoSolutions Group. He is an open source enthusiast with strong experience in Java development and GIS, and his personal interests range from high performance software and huge data volume management to software testing and quality. He's a full-time open source developer on GeoServer and GeoTools. You may have watched his presentation yesterday with Ian Turton on how he actually manages his time as an open source developer. He received the OSGeo Sol Katz Award in 2017. And probably, if you post a question on the GeoServer users mailing list, you get an answer from Andrea, even on the weekends. But we're very curious about your talk, crunching data with discrete global grid systems. So if you share your screen, I'll give the floor to you, Andrea. Doing that right now. Here you go. Yep, I see it. So I go to the back, and the floor is yours. Thank you. So yeah, we are going to talk about crunching data with DGGSs. Before I start, just a shout-out to my company: GeoSolutions has offices in Italy and the United States and customers worldwide. We are a technically strong company, with 25 engineers out of the 30 people working at the company. We support various open source projects such as GeoServer, GeoNode, MapStore and GeoNetwork, and we offer support, development, training and customized solutions. We are strong believers in open source and in open standards, and as such we support both OGC standards and standards critical to GEOINT. Now let's talk about DGGSs. First off, we need to understand what I'm talking about, because, well, the first time I got an introduction to them my head was spinning. I hope yours doesn't end up like that, but I'll try my best. A discrete global grid system is a partitioning of the earth into areas which are called zones, and each zone has a unique identifier. It could be as easy as P1S2D123, or it could be something really hard to understand, like a hexadecimal number, like the ones used by H3. But the fact is, each zone has a unique identifier. Zones, by the definition of DGGS that OGC has given (there is a paper about it, pretty interesting to read if you have time), should have the same area, but not all implementations actually have this property. The partitioning has no arbitrary limits: all map projections have some boundary, but DGGSs do not, they cover the earth in a seamless way, so there is no problem with the poles and no problem with the dateline. Also, they are multi-resolution, so zones have a parent-child relationship, they contain each other. The structure of a DGGS can be pretty hard to implement from a mathematical standpoint, so there are libraries that implement the mathematical underpinnings of the DGGS, and they typically provide three basic functionalities: one goes from a zone identifier to its polygonal geometry, one goes from a point or a polygon to the list of zones that cover that point or polygon, and, given a zone, one gets its parent, its children and its neighbors, so neighborhood operations. DGGS support has been implemented in GeoServer using two DGGSs and the corresponding libraries, rHEALPix and Uber's H3.
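(Editor's aside: to ground the three basic library functions just listed, here is a small sketch with the h3 Python bindings, using version 3 function names; version 4 renamed several of them. rHEALPix has its own Python package with equivalent operations, which is not shown here.)

```python
# The three basic DGGS library operations, shown with Uber's H3 bindings
# (h3-py, version 3 API; v4 renamed e.g. geo_to_h3 -> latlng_to_cell).
import h3

lat, lng = -35.28, 149.13          # roughly Canberra, inside the ACT
resolution = 9

# 1. Point -> zone identifier (polygons map to lists of zone identifiers).
zone = h3.geo_to_h3(lat, lng, resolution)
print("zone id:", zone)            # a 64-bit index printed as a hex string

# 2. Zone identifier -> polygonal geometry (boundary as lat/lng pairs).
boundary = h3.h3_to_geo_boundary(zone)
print("boundary vertices:", len(boundary))

# 3. Neighborhood operations: parent, children, neighbors.
parent = h3.h3_to_parent(zone, resolution - 1)
children = h3.h3_to_children(zone, resolution + 1)
neighbors = h3.k_ring(zone, 1)     # the zone itself plus its ring of neighbors
print(parent, len(children), len(neighbors))
```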
rHEALPix is based on a square model, so the first partition covers the earth in, I don't know, 9 tiles, 10 tiles or something like that, and then each of them splits into 9, over and over and over. Each parent contains exactly 9 children, so you can sum the children and get the geometry of the parent. Each cell has four neighbors, so the neighborhood is the one of a rectangular grid, and the zone identifiers are really easy to reason with: P is the parent of P1, P1 is the parent of P12, and so on, so you add digits to go from parent to children. One limitation is that it only has a Python-based implementation, which means we had trouble integrating it with GeoServer. H3 is a hexagon-based system with a few pentagons mixed in, which typically sit over the seas to avoid issues with their deformation. Each zone has six or seven children. This one is interesting because the children do not exactly make up the parent, some of them overlap it only partially, and it is not equal area. However, it is equal distance, a property that rHEALPix doesn't have. So if I take the distance between one cell and each neighboring cell, they are all at the same distance. In rHEALPix, instead, we get four cells that share the same distance and another four which are on a diagonal and are farther away. So H3 is better suited for problems where the distance between cells is more important than the area of the cell itself. The zone identifiers are pretty hard to reason with, because they are 64-bit integers encoded in hexadecimal. The implementation is not just good, it's excellent: it has a very tiny C core and bindings for many languages, and some of them are actually native re-implementations, like the JavaScript one. Now, the first thing that we did was to try and view the DGGSs. As I was learning, my first need was, okay, let's try to figure out what these DGGSs are, see what they look like. So, based on the libraries, I built a DGGS geometry data store. The data store just takes the type of DGGS that you want to use and generates features which are the cells themselves, sorry, the zones themselves. Here is a GeoServer WMS rendering rHEALPix on a plate carrée at resolution level zero and at resolution level one. You can see how the parent-child relationship works, how the splitting is happening. And the cells also have this color coding, which is embedded in the system, and, well, it's nice to look at. H3 is, as I said, pretty different, it's based on hexagons. You can see that the hexagons have shapes which vary quite a bit. The darker cells are the pentagons, which are needed to close the structure. And we can see the structure at resolution zero and at resolution one. Once we have the store generating features, we can also use WFS to download the data and use it for display in other systems. In this case I used the WFS to generate a shapefile and then displayed it in QGIS, showing me resolution level one of rHEALPix, again in plate carrée. Okay, now that I had a better understanding of the geometric structure of a DGGS, I started working on representing data with a DGGS. Representing data with a DGGS means that instead of encoding a geometry, you encode the place of your information, the location, by using zone identifiers. The thing is, as you can see, we are splitting by powers of nine or by powers of seven, so the number of zones can grow very large, especially at higher resolutions. It's not difficult to end up with hundreds of trillions of zones to cover the entire planet at the maximum resolution.
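(Editor's aside: the "hundreds of trillions of zones" remark is easy to check with the H3 bindings, which expose the total number of cells per resolution. Again a sketch with v3 function names, not code from the talk.)

```python
# How fast the number of H3 zones grows with resolution (roughly x7 per level).
import h3

for res in (0, 5, 11, 15):
    # num_hexagons() returns the total number of unique H3 indexes at a given
    # resolution (v3 API name; v4 calls it get_num_cells).
    print(f"resolution {res:2d}: {h3.num_hexagons(res):,} zones")
```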
To handle this kind of volume and get a quick response, we looked at an open source OLAP database called ClickHouse. ClickHouse is interesting because it retains the familiar structure of a relational database, so tables and columns, but the tables are partitioned by default and the queries are spread out over the partitions by default. So they automatically end up using all the cores that you have, and if you have a deployment that goes on a cluster, the queries also get fanned out over all the nodes. So it can run very complicated queries very fast. We did a sampling of Sentinel-2 data on the Australian Capital Territory, which is an area in Australia that contains Canberra. We sampled Sentinel-2 at resolution 11, extracting a table with the zone ID of each zone, the resolution, the date, the values of the bands, and other properties that we computed, such as the NDVI and other indexes. We stored the results in ClickHouse and started serving data out of it, and this is a map, with a GetFeatureInfo below, powered by this system. The database is interesting because it's multi-resolution. We ingested the resolution 11 dataset, and then we started computing the lower resolution levels by just doing simple math over parent-child relationships, which is pretty fast, and we ended up storing each resolution in ClickHouse. Now that we had a database, we started looking at an API. So we discussed in the working group and implemented a DGGS API, which is reminiscent of OGC API Features, but adds notions which are unique to DGGSs. In particular, I can list zones, and I have to say at which resolution I want to fetch the zones, and then do some spatial filtering based on a bounding box or a geometry, or on a list of parent zones that I want to get the children of, which is the most efficient access. That's one recurring theme in DGGS: if you try to mix in a concept from normal geography and go towards the DGGS, you're going to get slow queries, but if you start from DGGS identifiers as the keys for spatial searches, then the searches are blazing fast. Another resource that we have is the neighbors of a particular zone, so we can identify a particular zone, give a search radius, and get all the nearby cells within a given distance. These are two examples, with a search radius of two cells, in H3 and rHEALPix. rHEALPix is a bit weird because its connectivity is just up and down, left and right; instead, the H3 system has hexagons and it looks more round. We also have resources to gather parents and children of a given zone, so we identify the parent zone and say at which resolution level we want to extract the children, and we get the list of all the children, with the data attached, of course, if we want to. There is also access by point and by polygon. The point is easy: we say this location, this resolution, and we get a cell. The polygon can be interesting, because we can say, okay, the target resolution is this one, and by default we would get a list of cells at that resolution, but we can get smarter and use compaction to get a shorter list, leveraging parent-child relationships. These two screenshots show a compacted representation of the Australian Capital Territory in rHEALPix and in H3. Now that we have an API, it would also be interesting to do some analysis, because that's one of the things that a DGGS is really good at: analysis, fast analysis. This work was carried out during Testbed-16, and there was another thread that was working on DAPA, the Data Access and Processing API.
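(Editor's aside, before the DAPA part: a quick sketch of the polygon access and compaction idea just described, using the H3 bindings. The polygon is a crude placeholder rectangle, not the actual ACT boundary, and the function names are from the v3 API.)

```python
# Cover a polygon with H3 zones at a target resolution, then compact the list.
import h3

# Crude rectangle standing in for the Australian Capital Territory boundary.
area = {
    "type": "Polygon",
    "coordinates": [[
        [148.76, -35.92], [149.40, -35.92],
        [149.40, -35.12], [148.76, -35.12],
        [148.76, -35.92],
    ]],
}

resolution = 8
# polyfill() lists the zones at the target resolution whose centers fall inside
# the polygon (v3 API; v4 renamed it polygon_to_cells).
zones = h3.polyfill(area, resolution, geo_json_conformant=True)

# compact() replaces complete groups of children with their parent zone,
# producing the shorter, mixed-resolution list shown in the screenshots.
compacted = h3.compact(zones)

print(f"{len(zones)} zones at resolution {resolution} -> {len(compacted)} after compaction")
```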
DAPA is another evolution on top of OGC API Features, and also OGC API Coverages, that allows extracting quick summaries, quick aggregates, like a minimum, maximum, mean, standard deviation, by giving a search box, by time, by area, or both, and deciding whether to aggregate completely or to produce a time series and things like that. We turned the DAPA queries into ClickHouse queries, and especially when the filter was expressed as a set of DGGS zones, the aggregation was incredibly fast. To give you an example, we aggregated the max, min and count of cells for all the bands of Sentinel-2 over the ACT, crunching 9 million records in less than a second on a spinning disk. It wasn't even an SSD. That's how good ClickHouse is if you write a reasonable query for it. DAPA fits very well with DGGS because we have a multi-resolution implementation, which means it's sort of an analysis-ready data structure, in that you can start toying around and playing with your analysis at a lower resolution and get very fast responses back. Say you are setting up a Jupyter notebook talking to the DAPA API, for example: once you are satisfied with the general results, you can amp up the resolution and get accurate results by waiting a bit more time. If this is interesting to you, you can look at the engineering reports for DGGS and the DGGS API at OGC; there are a couple of them. There's also the DAPA one, which also talks about DAPA with the DGGS flavor added on top of it. If you want to try it out, the source code is under the OGC API community module umbrella. If you want to just download the binary, it's part of the OGC API community module, which basically contains all the OGC APIs that we have implemented so far. That is it. I think I did it pretty quickly. Maybe too much. Yeah, it was quite quick. We still have around 13 minutes. But yeah, plenty of time for questions. My head was crunching. But luckily there were some listeners that have some interesting questions for you. The first one is a longer question, I'll read it: have you estimated the increase in storage space to go from a classic raster like TIFF to storing it as a DGGS in ClickHouse? Yes, we have. It's in the report. I don't remember the value, it's a few times larger, but I don't exactly remember how much, I'm sorry. Two, three times, something like that, but I'm guessing, I don't remember. If you look into one of those OGC ERs, the answer is in there. Okay. And the next question is: since H3 does not nest perfectly, did you check how precise the aggregations of H3 cells in ClickHouse were? Again, the answer is available on the internet, but I don't remember the exact value. Anyway, generally speaking, H3 and rHEALPix target two very different classes of problems. H3 was created by Uber to solve the problem of calculating fares for their taxi service, and so it is particularly well optimized to work on land, on distance-based problems over small areas, because, well, they typically work over cities. Small areas, geographically speaking, that is. The fact that the system is not equal area implies that you shouldn't be using it for any sort of statistical analysis where area is important, because the size of a cell can vary by as much as 100%, as far as I remember. Also, all the pentagons are located on the seas, so using H3 to do analysis of anything that happens on the seas is probably not a great idea, because those are high deformation areas, or at least that's where the high deformation areas are located.
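(Editor's aside, before the Q&A continues: to make the "DAPA query turned into a ClickHouse query" idea tangible, here is a hedged sketch using the clickhouse-driver package. The table and column names are an invented approximation of the schema described in the talk, not the actual Testbed-16 schema, and the zone ids are placeholders.)

```python
# A DAPA-style aggregate (min/max/avg per band) pushed down to ClickHouse.
# Table and column names are hypothetical, loosely following the talk's schema.
from clickhouse_driver import Client

client = Client(host="localhost")   # placeholder connection

query = """
    SELECT
        min(b04)  AS red_min,  max(b04)  AS red_max,
        min(b08)  AS nir_min,  max(b08)  AS nir_max,
        avg(ndvi) AS ndvi_mean,
        count()   AS n_zones
    FROM sentinel2_act
    WHERE resolution = %(res)s
      AND date BETWEEN %(start)s AND %(end)s
      AND zone_id IN %(zones)s     -- filter expressed directly as DGGS zone ids
"""

rows = client.execute(query, {
    "res": 11,
    "start": "2020-01-01",
    "end": "2020-01-31",
    "zones": ("8baa5a6a2d2dfff", "8baa5a6a2d0dfff"),   # placeholder zone ids
})
print(rows[0])
```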
rHEALPix, instead, is very well suited to anything where area is important but distance is not, because the neighboring cells are not all equally distant from the center one. Wow. OK. That's fair enough. The next question: the resolution levels are based on the zones, is that correct? It's sort of the other way around. When you pick a resolution level, you get a certain generation of partitioning zones. So R equals 0 generates these cells, and this is Europe. And this is Europe again at R1, with the parent cells drawn in orange and the children cells drawn in black. So you say resolution one, you get a certain set of cells, and as you change the resolution level, you change the set of zones that you have. OK. Still a couple of questions to go, they're streaming in. Could you please share a link to the OGC paper you mentioned in the chat? Yes, and I can paste it also here on screen, because I don't know if the chat is still available after the conference, but the recording will be available. OK, so one is here, I'm giving it to you in the private chat and I'm also giving it in the public chat. Yeah, you can place it in the public chat. I've pasted it from there to here. And this is one. And then the other one is the DAPA OGC ER. Yes, this one. Again, here. And there. So I show it here as well. OK. And, well, OK, people will find it. OK. So people are just getting used to the new OGC APIs and you're bringing that, well, I should say, a giant leap further, it looks like. There's another question: was there a reason why you did not implement S2? Yes, there was, and it's timing and resourcing. The testbed has a given time frame and we have a given number of hours allocated to working on it. So we needed to look at at least two DGGSs. rHEALPix was sort of mandatory, because we had the creator of rHEALPix working in the group, which meant we could pick all the details, the functionality and the math behind it from his head. It was like a treasure trove to have, too good an opportunity to pass up, so we had to implement rHEALPix. And the other one is H3. H3 has become very, very popular, and we also had a request from outside to try out H3. S2 is kind of old, I don't think it is being pushed further by Google anymore, and I'm also not sure that it has either the equal distance or the equal area property. So it's true that it's a complete partitioning of the earth, but I think it lacks some of the basic properties that make a DGGS what it is. Okay, that's clear. Let's see, yeah, well, you have answered all the questions, and I think you're also the next speaker, right? I am also the next speaker, yes. Okay, but we still have five minutes, so it's not too bad to take a break here and you can get a cup of coffee, or I should say espresso probably, or something healthy. And well, thanks very much, Andrea. I will certainly dive into this and I think many of the viewers will as well. And we'll see you in the next talk already. Thanks and see you later. Bye bye. Bye bye.
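(Editor's aside: for readers who want to poke at the equal-area versus equal-distance discussion from the Q&A, the sketch below uses the H3 bindings to compare cell areas at the same resolution in different places, and the distances from a cell to its neighbors. A small haversine helper keeps it self-contained; v3 API names again, and the chosen locations are arbitrary.)

```python
# Explore how H3 cell areas and neighbor distances vary across the globe.
import math
import h3

def haversine_km(p1, p2):
    """Great-circle distance between two (lat, lng) points in kilometres."""
    lat1, lng1, lat2, lng2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lng2 - lng1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

res = 5
equator_cell = h3.geo_to_h3(0.0, 0.0, res)
high_lat_cell = h3.geo_to_h3(60.0, 0.0, res)

# Cell areas at the same resolution differ noticeably (H3 is not equal-area).
print("area near equator :", h3.cell_area(equator_cell, unit="km^2"))
print("area at 60N       :", h3.cell_area(high_lat_cell, unit="km^2"))

# Distances from a cell centre to each of its neighbours are roughly uniform.
centre = h3.h3_to_geo(equator_cell)
for n in set(h3.k_ring(equator_cell, 1)) - {equator_cell}:
    print(n, round(haversine_km(centre, h3.h3_to_geo(n)), 1), "km")
```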
Discrete Global Grid Systems are a way to tessellate the entire planet into zones sharing similar characteristics, with multiple resolutions to address different precision needs, allowing integration of data coming from different data sources, and on demand analysis of data. Come to this presentation to have an introduction to the DGGS concepts, learn when they are a good fit for a specific problem, and get an update on their implementation in GeoServer. GeoServer is a web service for publishing your geospatial data using industry standards for vector, raster and mapping. It powers a number of open source projects like GeoNode and geOrchestra and it is widely used throughout the world by organizations to manage and disseminate data at scale. Discrete Global Grid Systems are a way to tessellate the entire planet into zones sharing similar characteristics, with multiple resolutions to address different precision needs, allowing integration of data coming from different data sources, and on demand analysis of data. The presentation will introduce: * Basic DGGS concepts * The Uber H3 and the rHealPix DGGSes, comparing and contrasting their structure and use cases * A OGC API exposing DGGS for data access, and another, DAPA, for data analysis * GeoServer implementations of the DGGS concepts and APIs, based on a ClickHouse OLAP database. Come to this presentation to have an introduction to the DGGS concepts, learn when they are a good fit for a specific problem, and get an update on their implementation in GeoServer. Authors and Affiliations – Andrea Aime (1) Simone Giannecchini (1) (1) GeoSolutions Group (https://www.geosolutionsgroup.com) Track – Software Topic – Software/Project development Level – 2 - Basic. General basic knowledge is required. Language of the Presentation – English