Dataset Viewer

doi (stringlengths 17-24) | transcript (stringlengths 305-148k) | abstract (stringlengths 5-6.38k)
---|---|---
10.5446/59320 (DOI)
|
What I will talk about is a subject I have been working on for the past two years, slowly improving various results. I spoke about essentially the same results in Oxford, so I apologize to the people who were there. Let me start right away by stating the problem in its most general form; I hope people in the back can see the board. Let G ≤ GL_n(R) be real algebraic and let Γ ≤ G be a lattice. I will not need the general definition, but recall that this means Γ is a discrete subgroup such that the Haar measure induced on G/Γ is finite; eventually we will specialize to a simpler case. Let π : G → G/Γ be the usual map, π(g) = gΓ. Now fix, for the rest of the talk, R̄ to be an o-minimal expansion of the real field, and take X ⊆ G to be R̄-definable. We look at the image of X inside G/Γ. Note that Γ itself is very much not definable: it is a discrete group. The question is: what is the topological closure of π(X) in G/Γ?

As we will see in a second, this is too general. But I should say that in some cases we already know something, even from recent work in model theory. For example, consider the map from C^n onto an abelian variety A. If you start with an algebraic variety inside C^n and, instead of the topological closure, ask about the Zariski closure of its image, this is the Ax-Lindemann-Weierstrass theorem: if the variety is irreducible, the Zariski closure of the image is a coset of an abelian subvariety of A. And about a year or two ago, Ullmo and Yafaev looked at this problem and asked what can be said about the topological closure of the image of an algebraic variety; here, of course, the lattice is Z^{2n} inside R^{2n}.

First of all, the problem as I stated it is too general, in the sense that we cannot give good answers in general. The usual example: take G = SL_2(R) and Γ = SL_2(Z). Already here there are very simple definable subsets of G for which the closure of the image is something like a fractal, like a Cantor set. For example, take D to be the diagonal group; if you choose g properly and take the definable set to be just the coset Dg, you can find g such that the closure of π(Dg) is this very complicated fractal. So X is definable, a very simple set, but the closure of its image is very complicated. (What about D itself? The image of D, I think, comes from a lattice in D, so the closure of π(D) will probably just be a circle; I would think it is closed.)

Now, this type of problem comes from ergodic theory; it is a dynamical-systems problem. The classical theorem here is Ratner's theorem, from 1994, and I am formulating it in the topological language; the theorem is really formulated in terms of measures, and I will come back to that. It says: if H ≤ G is a unipotent subgroup, then the closure of the image of every orbit is nice.
Namely, there exists another algebraic group: for every g ∈ G there exists a real algebraic subgroup F ≤ G such that the closure of π(Hg) is exactly π(Fg). So the closure is itself the image of another orbit, an orbit under F inside G/Γ. (Does H really have to be unipotent? It is enough that it is generated by unipotent subgroups, but for us unipotent is what we will use.) F depends on H and on g, but the dependence on g is only up to conjugation. (We will indeed be looking at cosets, and we will see why.) This is the starting point of what we need.

Now what I want is to start working inside G and not in the image. If we take g to be the identity, we get that the closure of π(H) is just π(F). Pulling this back into G, which will be more convenient for us: instead of talking about the closure of π(H), I can talk about the closure of HΓ inside G and then apply π, which is the same thing. So what the theorem says is that the closure of HΓ is exactly FΓ; we are in a situation where the closure of HΓ is a group times Γ.

Notice that if G were abelian, like C^n or R^n, this would be very easy. Why? Because HΓ is then itself a group, so its closure is a Lie group, and you just take the connected component of the identity to get your F. But once we move out of the abelian case, HΓ is not a group any more, and there is no a priori reason why its closure should be describable as a group times Γ. (In the case of a unipotent group, which is what we will have eventually, all connected subgroups are real algebraic.)

Now I want to give a name to this F. One way to read F: it is the smallest real algebraic subgroup of G containing H such that Γ is a lattice in F, meaning that the measure restricted to F/(F ∩ Γ) is finite. I will not go into that; the point is that F is uniquely determined, and I want to write F = H^Γ from now on. So given H, we find an F containing it which is nice in this sense, and the closure of HΓ is H^Γ · Γ. This is what I want to take out of Ratner's theorem: the existence of such an F, and a name for it. (Yes, F contains H. And this is for unipotent H; once you work with cosets it is not true as stated any more, you have to bring conjugates in.)
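To fix notation before moving on, here is the pair of statements just discussed, written out as they are used in the talk (a reconstruction; the notation H^Γ for the smallest Γ-rational group containing H is the speaker's).

```latex
% Ratner's orbit-closure theorem, topological form, as used in the talk.
% G <= GL_n(R) real algebraic, Gamma <= G a lattice, pi : G -> G/Gamma.
\textbf{Theorem (Ratner, topological form).}
If $H \le G$ is unipotent, then for every $g \in G$ there is a real algebraic
subgroup $F \le G$ (depending on $H$ and, up to conjugation, on $g$) with
\[
  \overline{\pi(Hg)} \;=\; \pi(Fg).
\]

\textbf{Definition ($\Gamma$-closure of $H$).}
Taking $g = e$ and pulling back to $G$, the theorem gives
$\overline{H\Gamma} = F\Gamma$, where $F$ is the smallest real algebraic
subgroup of $G$ such that $H \le F$ and $\Gamma \cap F$ is a lattice in $F$.
This $F$ is unique and is denoted $H^{\Gamma}$, so that
$\overline{H\Gamma} = H^{\Gamma}\,\Gamma$.
```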
Now I am going to simplify the situation considerably. We leave the general Lie-group setting and assume from now on that G itself is a unipotent group; in particular, all real algebraic subgroups of G will be unipotent. Up to conjugation this means, and for us it will mean, that G is a real algebraic subgroup of the group of upper triangular matrices with 1's on the diagonal and 0's below. In such a group, all connected subgroups are actually real algebraic. Another way to characterize these groups, if you do not want to work with matrix groups: they are the connected, simply connected, nilpotent Lie groups. And let me record a fact here, because I want to leave room for the theorem on the other board: for unipotent groups, lattices are what we think of as lattices in the abelian case. Fact: a discrete subgroup Γ of a unipotent G is a lattice, meaning the induced measure is finite, if and only if G/Γ is compact. So from now on, whenever I say lattice, we may as well assume that G/Γ is compact.

Now I will state the theorem whose proof we will be talking about; let me try to fit everything on this board. So R̄ is again o-minimal and X ⊆ G is definable; to make it easier, let us assume X is closed, since we are going to take closures anyway. We do not have a lattice yet, but already we can extract information from X. Then there exist a number r ∈ N, finitely many real algebraic subgroups H_1, ..., H_r of G of positive dimension, and definable closed sets C_1, ..., C_r (we might as well take them closed), all of this before Γ is chosen, such that for every lattice Γ in G we can describe the closure of XΓ. I will describe the closure inside G, which is more convenient here, and say something about the projection afterwards. The closure of XΓ is the following finite union: first of all you take X, of course you need X, and to X you add the finitely many sets C_i H_i^Γ, for i = 1, ..., r; here I am using the notation from before, so each H_i^Γ, which depends on Γ, is the smallest Γ-rational group containing H_i, and C_i H_i^Γ is the product inside G. And then the whole thing is multiplied by Γ.

So in some sense what we are doing is reducing the closure problem for arbitrary definable sets to groups. As we will see, these will be exactly groups that sit on X at infinity, that stabilize X in some sense, that are affiliated to X at infinity. If I wanted to present it in the projection, we would just apply π: the closure of π(X) is π of this set. Notice one thing which I think is not obvious: even though Γ appears, this is essentially a definable object. You have finitely many groups and finitely many definable families of cosets of these groups, so the set X ∪ ⋃_i C_i H_i^Γ is itself R̄-definable. Of course the closure is not R̄-definable, because we cannot avoid Γ, but it is an R̄-definable set times Γ. Moreover, and this is important: for each i from 1 to r, the dimension of C_i is strictly less than the dimension of X.
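The statement just described, assembled in one place (a sketch put together from the talk and the abstract).

```latex
\textbf{Theorem.}
Let $G$ be unipotent and let $X \subseteq G$ be closed and
$\bar{\mathbb R}$-definable. Then there are $r \in \mathbb N$, real algebraic
subgroups $H_1,\dots,H_r \le G$ of positive dimension, and closed definable
sets $C_1,\dots,C_r \subseteq G$, all independent of any lattice, such that
for every lattice $\Gamma \le G$:
\[
  \overline{X\Gamma}
  \;=\;
  \Bigl( X \;\cup\; \bigcup_{i=1}^{r} C_i\, H_i^{\Gamma} \Bigr)\,\Gamma ,
\]
where $H_i^{\Gamma}$ is the smallest $\Gamma$-rational real algebraic subgroup
containing $H_i$. Moreover $\dim C_i < \dim X$ for every $i$, and the set
$X \cup \bigcup_i C_i H_i^{\Gamma}$ is itself $\bar{\mathbb R}$-definable.
```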
The second thing, which I am not sure we will get to, is the following. Some of the H_i are contained in others; but for the H_i which are maximal with respect to inclusion, the C_i are actually bounded, and then the set C_i H_i^Γ Γ is already closed. So some of these sets are not closed and only become closed when we take the union, as if we were adding boundary components; but for the maximal ones C_i is bounded, and then it is easy to see: when you multiply a compact set by an algebraic group and by Γ, you still get a closed set.

Let me make some remarks about this; maybe we will get to more than remarks, maybe not. (Question: the H_i are real algebraic and H_i^Γ is also real algebraic, so why not take H_i^Γ to start with? I could, but then it would be a weaker theorem, because as stated the same H_i work for all Γ: you just have to take the Γ-closure of each H_i. You could formulate the result for each Γ separately, but there is something stronger here: you have an a priori family extracted from X, and you just apply Ratner's result to each of the groups to get the closure.)

First comment: the case where X is a curve, that is, dim X = 1. In this case, by the dimension inequality, the C_i are finite. So all we are doing is taking the curve and adding finitely many cosets: the closure of XΓ is XΓ together with finitely many sets g_i H_i^Γ Γ (maybe not with the same r). You only need to add finitely many cosets, and that is it.

So let us do the example; if you have heard Sergei and me talk about this, it is the example I usually give. Take G = (R², +); as written this is not a unipotent matrix group, but it is algebraically isomorphic to one, so we can put ourselves in the setting above. Take X to be the hyperbola, all (x, y) with xy = 1, and to simplify let us just look at the first quadrant. Take Γ = Z². It is not hard to see that when you take the closure of X + Z², you are basically translating along the two asymptotic directions, and what you add is the two coordinate axes. So H_1 is the x-axis (there is no coset here, it is really a group), H_2 is the y-axis, and the closure, written additively since we are in R², is exactly (X ∪ H_1 ∪ H_2) + Z². I should say that what happens here is a little misleading: H_1 and H_1^Γ are the same, because the intersection of Z² with H_1 is already a lattice in H_1, and likewise for H_2, so in this case there was no need to take H_1^{Z²} or H_2^{Z²}. If I change the lattice and make it an irrational lattice, then H_1^Γ and H_2^Γ both become the whole of R², so if I replace Z² by an irrational lattice, the closure of X plus the lattice will be everything, the whole of R².
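A compact write-up of the hyperbola example just described (a reconstruction; the identification of H_1 and H_2 with the coordinate axes is as in the talk).

```latex
% Example: G = (R^2, +), X = {(x, 1/x) : x > 0}, Gamma = Z^2.
\[
  H_1 = \mathbb{R}\times\{0\}, \qquad H_2 = \{0\}\times\mathbb{R},
\]
\[
  \overline{X + \mathbb{Z}^2}
  \;=\; \bigl( X \cup H_1 \cup H_2 \bigr) + \mathbb{Z}^2 ,
\]
since $H_i \cap \mathbb{Z}^2$ is a lattice in $H_i$, i.e.\ $H_i^{\mathbb{Z}^2} = H_i$.
For an irrational lattice $\Lambda$ (one meeting each axis only in $0$),
$H_1^{\Lambda} = H_2^{\Lambda} = \mathbb{R}^2$ and hence
$\overline{X + \Lambda} = \mathbb{R}^2$.
```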
So again, notice that the operation H ↦ H^Γ is invisible in this example, because H_1^{Z²} = H_1 and H_2^{Z²} = H_2: both groups are rational with respect to the lattice Z².

Second remark. It turns out that a theorem like this more or less already exists in the ergodic-theory literature: a theorem of Shah from 1994 (so maybe Ratner's is not 1994 but slightly earlier, 1992), which is more general than what I will state now, but this will be enough for us; he did not even need G to be unipotent. Assume you have a real polynomial map p = (p_ij) from R^d into G, in several variables, whose image lands inside G (good, someone is keeping track of the indices for me), and take X to be exactly the image of this polynomial map. Then there is a very strong version of the result. Let gH be the smallest coset of a real algebraic subgroup H of G containing X; just take the intersection of all cosets of real algebraic groups which contain X. Then for every lattice Γ (from now on, whenever I write Γ I mean a lattice in G), the closure of XΓ is exactly the closure of gHΓ, which is just g H^Γ Γ. So when you take the image of a polynomial map as your definable set, and it is obviously definable, all you need is one coset, and it captures the whole closure. If I have time we will come back and see how to deduce this result from our work. I should also say that Shah's theorem does not even assume unipotence, but then one has to be slightly more careful about the notion of a polynomial map.

Third remark (that was two, I guess, so this is three). I will say it in words first and then something more precise. As I said, the theorems from ergodic theory, both Shah's and Ratner's, are not really formulated in terms of closures (from Shah's one has to extract the closure statement); they are formulated in terms of convergence of measures, what are called equidistribution results. Shah makes sense of what it means for X to be equidistributed and proves that X is equidistributed inside g H^Γ Γ. I do not want to define equidistribution, but I still want to talk about it (I know, we do not like that). It turns out that for definable sets in o-minimal structures, closure and equidistribution are not the same; they can differ. Of course, if you do not know what equidistribution is, why would you care that they differ; but let me give an example anyway, and yes, I mean equidistribution in the closure, that is what I should say. Look at R² with Γ = Z² and X the curve {(t, ln t) : t > 0}. Geometrically this is a very simple curve, definable in R_exp, and it follows from what we are doing that the closure of X + Z² is all of R²; but X is not equidistributed in R², in whatever language, whatever that means, without going into it.
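The two statements from this remark, written out (a sketch reconstructed from the talk; the coset notation gH and the Γ-closure H^Γ are as above).

```latex
\textbf{Theorem (Shah, for polynomial images).}
Let $p = (p_{ij}) : \mathbb{R}^d \to G$ be a real polynomial map with image
$X = p(\mathbb{R}^d) \subseteq G$, and let $gH$ be the smallest coset of a real
algebraic subgroup of $G$ containing $X$. Then for every lattice
$\Gamma \le G$,
\[
  \overline{X\Gamma} \;=\; \overline{gH\Gamma} \;=\; g\,H^{\Gamma}\,\Gamma .
\]

\textbf{Example (closure without equidistribution).}
In $\mathbb{R}^2$ with $\Gamma = \mathbb{Z}^2$, let
$X = \{(t,\ln t) : t > 0\}$, definable in $\mathbb{R}_{\exp}$. Then
\[
  \overline{X + \mathbb{Z}^2} = \mathbb{R}^2 ,
\]
yet $X$ is not equidistributed in $\mathbb{R}^2$.
```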
It was actually interesting: without knowing any of this, at the meeting in Oxford Alex Wilkie spent the first part of his talk on equidistribution and immediately pointed out that this curve is not equidistributed; so these are not complicated statements. What we prove (at this point it is funny to call it a theorem when I have not defined equidistribution, so let me just call it an observation) is that in the polynomially bounded case, at least for R^n, the two notions are the same: as long as the set or the curve is definable in a polynomially bounded structure, if the closure of X plus the lattice is everything, then X is equidistributed. So, very vaguely: closure and equidistribution agree for polynomially bounded structures, that is, o-minimal structures in which every definable function is eventually bounded by a polynomial.

I want to spend the last fifteen minutes or so saying some things about the proof, so I will leave the result on the board and at least give one idea that helps to describe the closure. Let me set up some model theory; so far there has not been much. Take R* to be an elementary extension of R, but an elementary extension with respect to everything: if you want, a big ultrapower of the full structure, not only of the o-minimal structure. I need this in order to talk about the lattice in the elementary extension, and because I will move between R and the extension. Notation: for X ⊆ R^n, let X^# denote the realization of X in the big structure. As usual we have the valuation ring O, the set of all α in R* such that |α| ≤ n for some n ∈ N, and we let μ be its maximal ideal, the ideal of infinitesimals: all α such that |α| ≤ 1/n for every n.

But actually I am interested in O and μ on the group G, not so much on R. What helps us (we could have managed without it) is that when G ≤ UT_n(R), G is closed in R^{n²}: the diagonal entries are 1 and the determinant is 1, so we cannot approach matrices of determinant zero. So let me write O_G = O^{n²} ∩ G^# and μ_G = (μ^{n²} + I) ∩ G^#, where I is the identity matrix (yes, O to the n² and μ to the n², thank you). Now we have the standard part map st : O_G → G, where G here means the group of real points; and you are right, on the other side it should be G^#, I just prefer to keep writing G, but these are subsets of G^#. The kernel of st is exactly μ_G; in fact O_G is a group, μ_G is normal in O_G, and O_G is the semidirect product of μ_G and the group of real points.

One more piece of notation: for any set Y ⊆ G^#, I will write, by abuse of notation, st(Y) to mean st(Y ∩ O_G); strictly speaking the standard part is only a partial map, so I should not write this, but I will. And a simple observation, one of those simple facts we do often when we teach this material: for any X ⊆ G, one way to get the closure of X is to go to the elementary extension and take the standard part.
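The nonstandard-analysis bookkeeping from this part, collected in one place (a sketch; notation follows the talk, with R* an elementary extension of the full real structure and X^# the realization of X in it).

```latex
\[
  \mathcal{O} = \{\alpha \in R^* : |\alpha| \le n \text{ for some } n \in \mathbb N\},
  \qquad
  \mu = \{\alpha \in R^* : |\alpha| \le 1/n \text{ for all } n \in \mathbb N\}.
\]
For $G \le \mathrm{UT}_n(\mathbb R)$ (so $G$ is closed in $\mathbb R^{n^2}$):
\[
  \mathcal{O}_G = \mathcal{O}^{\,n^2} \cap G^{\#},
  \qquad
  \mu_G = \bigl(\mu^{\,n^2} + I\bigr) \cap G^{\#},
\]
\[
  \mathrm{st} : \mathcal{O}_G \twoheadrightarrow G(\mathbb R),
  \qquad
  \ker(\mathrm{st}) = \mu_G,
  \qquad
  \mathcal{O}_G = \mu_G \rtimes G(\mathbb R).
\]
For $Y \subseteq G^{\#}$ write $\mathrm{st}(Y) := \mathrm{st}(Y \cap \mathcal{O}_G)$;
then for any $X \subseteq G$,
\[
  \overline{X} \;=\; \mathrm{st}\bigl(X^{\#}\bigr).
\]
```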
This is very easy to see. Now, all of this was in order to go back to the problem: we are trying to understand the closure of XΓ. So assume again that X ⊆ G is R̄-definable. The closure of XΓ can be written as st((XΓ)^#), which is the same as st(X^# Γ^#). What we want to do, and it turns out to have a very nice geometric meaning, is to view the closure as the image of a map and divide the domain of that map according to the complete types on X in the o-minimal language: the closure of XΓ is the union, over all complete o-minimal types p concentrated on X, of st(p(R*) Γ^#). So I take the standard part of X^# Γ^# type by type, and if we understand the standard part for each type, we get what we want. The heart of the problem is to understand st(p(R*) Γ^#) for a single type p.

Let me first do a very simple example, from which of course we get nothing new. Assume p is the type of an element α which is bounded, that is, lies in O_G, so it is infinitesimally close to some element of G. It is easy to see that in this case we only see part of the monad of st(α) and nothing more: st(p(R*) Γ^#) is exactly st(α) Γ, and st(α) is an element of the closure of X, which is X since we took X closed; so this is contained in XΓ. This is obvious, because when we take the closure of XΓ we in particular get XΓ. So the bounded types, the types which live inside O_G, contribute X itself, which of course we must have. The interesting part is the unbounded types, the types which live at infinity.

Here we introduce the following notion, the nearest coset to a type. Definition: for α ∈ G^# (in the elementary extension), and for g ∈ G and a real algebraic H ≤ G in the real world, we say that gH is near α if, up to an infinitesimal on the left, you land there: α ∈ μ_G (gH)^#, or equivalently μ_G α meets (gH)^#. For example, if we take the curve {(x, 1/x)} and a nonstandard element α on it (it can be at infinity), then of course the x-axis is near this point; and yes, it is (gH)^#, thank you.

The first result we prove is that there is indeed a nearest one: for any α ∈ G^# there exists a smallest coset gH near α. Notice that the whole group G is near α, so every element has some coset near it; the only issue is when you can get something smaller than the whole group. So there is one coset gH which is contained in all other cosets near α, and we denote it g_α H_α; of course g_α is not unique, any representative can be chosen, but H_α is unique. And it is easy to see that if α and β are equivalent over R in the o-minimal language, actually even in the semialgebraic language, then g_β H_β = g_α H_α.
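The definition and existence statement for the nearest coset, as just given (a sketch; the notation g_p H_p for the coset attached to a type is introduced in the next passage).

```latex
\textbf{Definition.}
For $\alpha \in G^{\#}$, $g \in G$ and $H \le G$ real algebraic, say that the
coset $gH$ is \emph{near} $\alpha$ if
\[
  \alpha \in \mu_G \cdot (gH)^{\#}
  \qquad\text{(equivalently, } \mu_G\,\alpha \cap (gH)^{\#} \neq \emptyset\text{)} .
\]

\textbf{Theorem (existence of a nearest coset).}
For every $\alpha \in G^{\#}$ there is a smallest coset $g_\alpha H_\alpha$
near $\alpha$: it is contained in every coset of a real algebraic subgroup
that is near $\alpha$. The group $H_\alpha$ is unique, and if
$\alpha \equiv_{\mathbb R} \beta$ (even just in the semialgebraic language)
then $g_\beta H_\beta = g_\alpha H_\alpha$, so the coset depends only on the
type $p = \mathrm{tp}(\alpha/\mathbb R)$ and may be written $g_p H_p$.
```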
So g_α H_α is really a property of the type of α, and we will denote it, for the next two minutes, by g_p H_p, where p is the type of α over R: the nearest coset to the type. (I should say that this is not true in SL_2(R) if you allow arbitrary algebraic subgroups: there you can have two cosets of algebraic groups which are both near an element but whose intersection is empty.)

The theorem that we prove, and I will finish with this, is that for every complete o-minimal type p concentrated on X, the standard part of p(R*) Γ^# is exactly, in two steps, the closure of g_p H_p Γ, which is the same as g_p H_p^Γ Γ. So at infinity what matters is the nearest coset, and the nearest coset is what determines the closure of p(R*) Γ. As a result, going back to where we left off, the closure of XΓ is the union, an infinite union at this point, of XΓ and the sets g_p H_p^Γ Γ over all complete types p on X, that is, types p with p ⊢ x ∈ X. I will stop here. At the end, of course, we have to get from this to the finite statement of the theorem: right now it is a union over types, which is not something you can handle directly, so there is more work to do, and more models, to actually get that statement.
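The type-by-type description, which is the final statement of the talk (a sketch; this is the infinite-union form, which the rest of the proof refines to the finite statement of the main theorem).

```latex
\textbf{Theorem.}
For every complete o-minimal type $p$ over $\mathbb R$ with $p \vdash x \in X$,
\[
  \mathrm{st}\bigl(p(R^*)\,\Gamma^{\#}\bigr)
  \;=\;
  \overline{g_p H_p \Gamma}
  \;=\;
  g_p\, H_p^{\Gamma}\,\Gamma .
\]
Consequently
\[
  \overline{X\Gamma}
  \;=\;
  X\Gamma \;\cup\; \bigcup_{p \,\vdash\, x \in X} g_p\, H_p^{\Gamma}\,\Gamma ,
\]
an a priori infinite union over types, which further work (and further models)
reduces to the finite union of the main theorem.
```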
|
Let G be a real algebraic unipotent group and let Lambda be a lattice in G, with p:G->G/Lambda the quotient map. Given a definable subset X of G, in some o-minimal expansion of the reals, we describe the closure of p(X) in G/Lambda in terms of definable families of cosets of real algebraic subgroups of G of positive dimension. The family is extracted from X independently of Lambda.
|
10.5446/59321 (DOI)
|
This is joint work with Pedro Andrés Estevan; I think he has another name as well, but I put only three because those are the ones I remember. I made the slides last night, so I do not remember them very well either.

So, we start with a complete theory and a type, and we assume that it does not fork over some subset of its domain; we want to know what we can say about its restriction to that subset. More precisely: suppose P is one of the properties stable, simple, or NIP. Is the same true for the restriction?

Let me give the definitions for those who do not know them; they all work for partial types. A partial type π(x) is stable if every complete extension of it is definable over the domain over which it is defined; equivalently, there are no sequences (a_i), (b_i) such that the a_i realize the type and some formula witnesses the order property with these witnesses, that is, φ(a_i, b_j) holds if and only if i < j. NIP is the same kind of definition: a partial type π is NIP if there are no sequences (a_i)_{i<ω} and (b_S) for S ⊆ ω, and a formula φ, such that the a_i realize the partial type and φ has the independence property with respect to these witnesses, that is, φ(a_i, b_S) holds if and only if i ∈ S. The roles of the a_i and the b_S may be reversed: I can ask instead that the b's realize the type, and I get the same definition; the same is true for the stable definition, where I can swap the a_i and the b_j. But for simple this fails. First the definition: π is simple if there are no k < ω, no tuples indexed by the tree ω^{<ω}, and no formula such that every branch is consistent with the type while at every node the formulas at its immediate successors are k-inconsistent. Unlike the NIP and stable cases, it is not true that you can reverse the roles of x and y here; the triangle-free random graph is an example where reversing the roles gives one definition but not the other (this is due to Artem). I am not actually going to talk about simple types; I just put it here.

Now, I started the talk with forking, but somehow everything is easier when you look at co-forking. Let us say that tp(a/B) does not co-fork over A (for A ⊆ B) if tp(B/Aa) does not fork over A. And here is an exercise, which I am going to solve, do not worry: if p is a type over B which does not co-fork over A, and it is stable or NIP, then so is its restriction to A.

Let us see how to do the NIP case; I hope it is true. First, the definition of IP for a partial type is equivalent to the following: π has IP if and only if there is a sequence (a_i) of realizations of π which is indiscernible over the domain of π, and some b and a formula φ, such that φ(a_i, b) holds if and only if i is even. You can take this as the definition.
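For reference, here are the definitions from this part written out (a sketch reconstructed from the talk; the even/odd characterization of IP for partial types is the one the speaker takes as a working definition).

```latex
\textbf{Stable:} a partial type $\pi(x)$ is stable iff there are no formula
$\varphi(x,y)$ and sequences $(a_i)_{i<\omega}$, $(b_i)_{i<\omega}$ with
$a_i \models \pi$ and
\[
  \models \varphi(a_i, b_j) \iff i < j .
\]
\textbf{NIP:} $\pi(x)$ is NIP iff there are no $\varphi(x,y)$,
$(a_i)_{i<\omega}$ with $a_i \models \pi$, and $(b_S)_{S \subseteq \omega}$ with
\[
  \models \varphi(a_i, b_S) \iff i \in S .
\]
Equivalently: there is no sequence $(a_i)$ of realizations of $\pi$,
indiscernible over $\operatorname{dom}(\pi)$, and no $b$, such that
$\varphi(a_i, b)$ holds iff $i$ is even.

\textbf{Co-forking:} for $A \subseteq B$,
$\operatorname{tp}(a/B)$ does not co-fork over $A$ iff
$\operatorname{tp}(B/Aa)$ does not fork over $A$.

\textbf{Exercise:} if $p = \operatorname{tp}(a/B)$ does not co-fork over $A$
and $p$ is stable (resp.\ NIP), then so is $p \restriction A$.
```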
Then, for the exercise: if B is independent from a over A in this sense and tp(a/A) has IP, you take the indiscernible sequence which witnesses IP for the restriction and, by non-dividing, make it indiscernible over B, and you get a contradiction.

So here is a corollary. If you have a type p over a set B, M is a model contained in B, and p does not fork over M, then the restriction to M is stable (for stable p). This is a corollary of the previous exercise; let me show the proof. First we can extend p to a global non-forking extension; then p is M-invariant. One can show that a stable type which does not fork over a model is invariant over that model, and actually the same holds for NIP, I will get to NIP in a minute, it is very similar. So it is M-invariant, and by stability, since stable types are just the types which are definable, it follows that it is M-definable. But then it does not co-fork over M, because it is an heir of its restriction to M. (Why am I proving that p is stable? I am assuming it; the corollary of the exercise is: if the global type is stable and does not co-fork, then the restriction is also stable, and in this case it does not co-fork. And M-invariance by itself is not enough; but M-invariance plus stability gives definability, and a definable type does not co-fork. That is general: once you have a definable type, it does not co-fork over its base.)

Okay, so here is the theorem which, I guess (I do not actually know the history), is motivated by this corollary: a theorem of Adler, Casanovas and Pillay, which generalized a result of Hasson and Onshuus. They proved that if you have a stable type in general, without any assumption on the theory, so a stable type over a set B, and a subset A of B, not necessarily a model, such that p does not fork over A, then the restriction to A is stable. Their proof used generically stable types. And I thought about why it is so complicated for arbitrary sets, when for models it is so easy, as I just showed. (Why is it easy over models? Stability theory is easy over models; yes, I guess that is the reason.)

So let us talk about NIP. As with stable types: if p is a global NIP type and it does not fork over a set A, then it is Lascar-invariant over A, that is, every Lascar-strong automorphism over A fixes the type, where a Lascar-strong automorphism is one which fixes all Lascar strong types over A; just like in NIP theories.
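The corollary and the Adler, Casanovas and Pillay theorem from this part, side by side (a sketch of the statements as given in the talk).

```latex
\textbf{Corollary.}
If $p \in S(B)$ is stable, $M \subseteq B$ is a model and $p$ does not fork
over $M$, then $p \restriction M$ is stable.
% Proof sketch: a global non-forking extension of p is M-invariant, hence by
% stability M-definable, hence an heir of its restriction to M, hence does not
% co-fork over M; now apply the exercise.

\textbf{Theorem (Adler--Casanovas--Pillay).}
If $p \in S(B)$ is stable, $A \subseteq B$ and $p$ does not fork over $A$,
then $p \restriction A$ is stable. (No assumption on the theory, and $A$ need
not be a model.)
```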
So the theorem here is the following. Take a global NIP type p which does not fork over A. We would like to conclude that the restriction to A is NIP, but we do not get that. What we do get is: if you generate a Morley sequence I in p over A, then the restriction of p to A together with the Morley sequence is NIP. So to get NIP it is not enough to have A, but it is enough if you add to it a Morley sequence.

And it is not true in general without the Morley sequence. There is an NTP2 theory, in fact an inp-minimal one, so the simplest kind of NTP2, with a global NIP type which does not fork over a model; in fact it is a coheir over that model, which is the best kind of non-forking apart from definability (and definability, as we know, does give the result). The type is NIP, in fact it is distal and dp-minimal, but its restriction to the model has IP. So this seems like a very strong negation of any hope that the naive statement could hold: it fails even over models, and even when the theory is very nice. And NTP2 is really the first case you can think of beyond simple, because for simple theories the result is true: if the theory is simple, forking and co-forking are the same, so you can just restrict to the subset using the previous exercise. (Are NIP types stable in a simple theory? Also that; but then you use something stronger than an exercise.)

So, for the remainder of the talk, I can give an idea of the proof, show some applications, and describe the counterexample.
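The main theorem and the counterexample, as just stated (a sketch reconstructed from the talk).

```latex
\textbf{Theorem.}
Let $p$ be a global NIP type which does not fork over $A$, and let $I$ be a
Morley sequence generated by $p$ over $A$. Then $p \restriction (A \cup I)$ is
NIP. The same statement holds with ``NIP'' replaced by ``stable''.

\textbf{Counterexample (restriction to $A$ alone).}
There is an NTP$_2$ theory with a global type $p$ and a model $M$ such that
$p$ is finitely satisfiable in $M$ (so in particular does not fork over $M$),
$p$ is NIP (indeed distal and dp-minimal), yet $p \restriction M$ has IP.
```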
First, let us deduce from this an easy proof of the stable case, one which does not use generically stable types. It is a bit of cheating, because I did not show you the proof of the theorem itself, but I can guarantee that it does not use generically stable types. First of all, in the previous theorem I can replace NIP by stable and the same proof works: if you have a stable type that does not fork over A, then its restriction to A together with a Morley sequence is stable.

So let us see how to get the theorem of Adler, Casanovas and Pillay. Suppose p is stable and does not fork over A, and suppose towards a contradiction that the restriction to A is unstable. Let M be a model and let c_0 be a realization of the restriction of p to that model. By the assumed instability there is an indiscernible sequence J = (b_i), indexed by the integers, such that φ(c_0, b_i) holds if and only if i > 0; that is what the order property gives you. Now let I be a Morley sequence generated by p over everything we have so far. By stability of the type it is not very hard to see that this sequence is not merely an indiscernible sequence but an indiscernible set. But then c_0 realizes the relevant type over A together with I, and also, as I said before, p is Lascar-invariant (any NIP type which does not fork over A is Lascar-invariant over A, in particular any stable type is), so J is indiscernible over A together with I. But this is a contradiction: now we have an element realizing p restricted to A and the Morley sequence, and a sequence J which is indiscernible over this domain and witnesses instability, which is impossible by the stable version of the theorem.

Okay, so now let me turn to the proof of the theorem itself. (In the beginning we did not know what to use, but then somebody came up with the idea of using the blackboard.) We assume that p is a global type, that it is NIP, and that p does not fork over some set A; here the proof is essentially the same for models or for sets, I cannot really make it simpler for models, maybe a bit simpler but not much. And we have I, a Morley sequence over A generated by p. By a Morley sequence I mean an indiscernible sequence generated by p: from p being NIP and not forking over A we get that it is Lascar-invariant over A (not invariant, but Lascar-invariant), so it makes sense to generate such a sequence. We want to show that p restricted to A ∪ I is NIP.

So suppose not. Then you have some formula φ(x, y), a sequence of realizations of p restricted to AI, and some c, such that φ(a_i, c) holds if and only if i is even. So this formula has IP. However, there is some σ, some other formula in p, which implies that φ is NIP; that is by compactness. What I mean by this (the line on the slide is not very precise) is that whenever you have an indiscernible sequence of realizations of σ over the relevant parameters, you cannot have an element which alternates unboundedly with respect to φ; by compactness such a formula exists. And further, there is another formula, also in p, which implies that σ is NIP. We could go on like this forever, but we stop here. (Is there an eraser as well?) The first step in the proof is actually to assume that I is Morley over A together with d, the parameter of the formula; you can do that, and if you do not want to, just assume that d is the empty set. (A brief exchange about whether the symbol on the board is a d or an alpha: it is this d, the parameter.)
So we assume that I is Morley over Ad; we can assume this without losing anything. And then what the proof gives is that we can find some c* (there is already a c around, so call it c star; it has the same sort and length as c) which will incidentally have the same type as c, even the same Lascar strong type as c over A, such that σ(a_i, c*) holds if and only if i is even, where the a_i enumerate I. And that is a contradiction: the sequence I is indiscernible over d, all of its elements satisfy the formula of p which implies that σ is NIP, and the choice of that formula tells you that σ cannot alternate like this along I.

To get this c* you have to do something, but it is not very hard, I can promise you; the proof is very local. You only use the following: if you only want to show that the restriction is NIP with respect to φ, you need another formula of p to say it. It would be nice if we could make this completely local, but at least in this proof it seems we need a couple more formulas, or at least one more formula, to be NIP. (Where do we use M being a model? We do not, really; we only use it to generate the sequence, and the type is not actually invariant over the model, it is only Lascar-invariant over the set. I think that is the only thing it is used for.)

I would also like to mention, although I did not write it on the slides, that if the type is generically stable then you can remove the I: the same proof as in the stable case gives the result without the Morley sequence, for generically stable types.
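The compactness step invoked here, spelled out; this is a sketch based on the standard bounded-alternation characterization of NIP along indiscernible sequences, which is how I read the phrase "a formula of p implies that φ is NIP".

```latex
% Bounded-alternation form of NIP: \varphi(x,y) is NIP iff there is n < \omega
% such that for every indiscernible sequence (a_i) and every b, the truth value
% of \varphi(a_i, b) changes fewer than n times.
% Relativized to the type p by compactness: if alternation of \varphi is
% bounded along indiscernible sequences of realizations of p, then some single
% formula of p already suffices, i.e. there are \sigma(x) \in p and n < \omega with:
\[
  \text{for every sequence } (a_i) \text{ of realizations of } \sigma,
  \text{ indiscernible over the parameters of } \sigma, \text{ and every } b,
\]
\[
  \bigl|\{\, i : \ \models \varphi(a_i, b) \not\leftrightarrow \varphi(a_{i+1}, b) \,\}\bigr| \;<\; n .
\]
% One more application gives a formula of p bounding alternation for \sigma
% itself; the proof stops after these two steps.
```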
Okay, so I used the blackboard; now let me describe the example. The example is: you take the theory of trees and you put a random graph structure on the open cones starting at each point. This sounds like something Pierre would say, but I will go into a little more detail. Let us call this theory DTRR, unless you have a better name. Here are the axioms. First of all, the language: you have a meet-tree language, so <, the meet ∧, and a ternary relation R. The theory DTRR is the model completion of the following axioms. First, the reduct to {<, ∧} is a meet-tree. Then the part about the random graph: for a triple of points x, y, z, think of x as the base, and of R(x; y, z) as saying that y and z are connected in a graph associated to x; there is a random graph attached to each x, that is what you should think of. One axiom says that it is a graph, because you can switch y and z. Another axiom says that R(x; y, z) implies that, in the tree, x is below y and z and x equals the meet of y and z; that is the configuration you put the graph on. And the next axiom: if y and z are connected with respect to x, and x < z ∧ z′, then y and z′ are also connected with respect to x. This means that the graph really lives on the cones above x, not on the points: the relation saying that x is below the meet of z and z′ is an equivalence relation on the elements above x, and the vertices of the graph at x are its classes, the open cones at x. (If you do not like the interpretation I gave, you can just look at the axioms and check this.) So these are the axioms. I should also say that the theory is ω-categorical, in addition to all the other things I said.

Now let me tell you the type I promised: a type which is NIP and does not fork over a model, but whose restriction has IP. We take some model M (remember, it is a tree with this extra relation), and we take some branch B in it; a branch is just a maximal chain. We look at the type over the branch which says: I am bigger than everything. Notice that this actually gives you a complete type over the model: knowing that you are bigger than everything in the branch gives you a complete type even in the language with R, because of the cones business, if you think about it. So this gives a complete type p over M. Now take any realization c of it, and let π be the partial type which just says x < c. Then of course π is finitely satisfiable in M, even in B. And what we have to show is that p has IP and π is NIP.

First, p has IP; this is another place where I planned to use the blackboard. Take some d realizing p, and take b_0, b_1, b_2, ... realizing p such that b_i ∧ b_j = d for i ≠ j, so they all branch off at d. This we can certainly do, because the type only says: bigger than B. Now, for any finite set S, the set of formulas saying that x is connected to b_i with respect to x ∧ b_i, that is, R(x ∧ b_i; x, b_i), for i ∈ S, together with the negations for i ∉ S, is consistent with p. Why is it consistent with p? Because I can find a realization: take an element which starts a new cone at d, so its meet with every b_i is d, and which is connected in the graph at d exactly to the b_i I want and not connected to the b_i I do not want. Good, so p has IP.
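The axioms of the example theory and the IP witness, in symbols (a sketch; R(x; y, z) is read "y and z are connected in the graph at x", ∧ denotes the meet, and & denotes logical conjunction).

```latex
% Language: {<, \wedge, R}.  DTRR = model completion of:
\begin{align*}
 &\text{(T)}\quad \text{the }\{<,\wedge\}\text{-reduct is a meet-tree};\\
 &\text{(G1)}\quad R(x;y,z) \rightarrow R(x;z,y);\\
 &\text{(G2)}\quad R(x;y,z) \rightarrow x<y \;\&\; x<z \;\&\; x = y\wedge z;\\
 &\text{(G3)}\quad R(x;y,z) \;\&\; x < z\wedge z' \rightarrow R(x;y,z').
\end{align*}
% IP witness for p = "x > b for every b in the branch B":
% choose d and (b_i)_{i<\omega} realizing p with b_i \wedge b_j = d for i \ne j;
% then for every S \subseteq \omega,
\[
  \{\, R(x\wedge b_i;\, x,\, b_i) : i \in S \,\}
  \;\cup\;
  \{\, \neg R(x\wedge b_i;\, x,\, b_i) : i \notin S \,\}
  \;\cup\; p
  \ \text{ is consistent.}
\]
```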
Now let us prove that π is NIP. Remember that π says x < c; the idea is that once you are below c, you are supposed to look, somehow, like a linear order. So suppose π has IP. Take some a realizing π, so a is above B and below c (maybe I should draw this: here is c, here is B, and a is in between), and a sequence witnessing IP with some formula; the point is that the sequence is indiscernible over Mc, because π having IP means it is witnessed by a sequence indiscernible over the domain of π. By quantifier elimination, and since I know that dense trees themselves are NIP, we can assume that the formula looks like R(t_1(x, y), t_2(x, y), t_3(x, y)), where t_1, t_2, t_3 are terms in the language of trees. I can also assume that the sequence is indiscernible over M, a and c in the language of trees; again, because the theory of dense trees alone is NIP.

Now let us see what the axioms imply. First, t_1 is the meet of t_2 and t_3. Why? Because R holds for some members of the sequence, and one of the axioms (maybe I should have written them on the board) is that R(x; y, z) implies x = y ∧ z; since we can assume the sequence is indiscernible over a, if this holds for some index it holds for all. We also get that a appears in t_2 but not in t_3, or vice versa, and that t_3 is incomparable with t_2; this also follows from the axioms. So t_1 is below t_2, because a appears in t_2, and (maybe I should have said this) if a appears in a tree term then that term is less than or equal to a; so t_1 ≤ t_2 ≤ a. Even if you did not follow all of this, you can see it in the drawing: c at the top, then a, then t_2, then t_1, with t_3 branching off at t_1. So t_1 is strictly below t_2 ∧ c, which is just t_2, since t_2 ≤ a < c. So by the axioms, t_2 and c are equivalent modulo the cone relation at t_1; I told you this is an equivalence relation. That means I can replace t_2 by c in the formula: R(t_1, t_2, t_3) holds if and only if R(t_1, c, t_3) does, and since t_1 = t_2 ∧ t_3 = c ∧ t_3, the resulting formula only involves c and the y-variables. But that is impossible now, because the sequence is indiscernible over Mc, so this cannot alternate. Okay, so that is the proof; that is the example. To prove all the nice properties I claimed for it, you have to work a little bit harder, but, surprisingly, not much harder.
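A condensed version of the term analysis just sketched; this is my reconstruction of the blackboard argument, with the reduction of the formula to a single R-atom by quantifier elimination and NIP of pure dense meet-trees, as stated in the talk.

```latex
% Suppose (y_i) is indiscernible over Mc (and over Mac in the tree language)
% and \varphi(a, y_i) alternates, where a realizes \pi, i.e. B < a < c, and
% WLOG \varphi(x,y) = R(t_1(x,y),\, t_2(x,y),\, t_3(x,y)).
% The axioms and indiscernibility give:
\[
  t_1 = t_2 \wedge t_3, \qquad
  a \text{ occurs in } t_2 \text{ but not in } t_3, \qquad
  t_2 \perp t_3 .
\]
% Since a occurs in t_2, we get t_2 \le a < c, hence t_1 < t_2 \wedge c = t_2,
% and t_2, c lie in the same open cone at t_1 while t_3 does not; therefore
\[
  R\bigl(t_1,\, t_2,\, t_3\bigr)
  \;\leftrightarrow\;
  R\bigl(c \wedge t_3,\, c,\, t_3\bigr),
\]
% a formula over c and the y-variables only, which cannot alternate along a
% sequence indiscernible over Mc. Contradiction.
```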
Let me end the talk with some questions. First of all, Artem already asked in his thesis what happens for simple types. I had some hope that this new proof of the stable case could help solve that question, but so far I could not; I did not think about it too much, but it would still be nice to know what exactly happens for simple types. What do I expect? I guess I expect the same as in the stable case. But the problem is that for simple types you also have to handle inconsistencies, you have the tree property, so it is not like the stable case and the same proof cannot work as is. (Is the analogous statement known in other classes of theories? I do not know; I do not think we really know anything about forking there, even over models.) The next question, which I thought about during the flight: in the example, we needed one element of the Morley sequence to get NIP; we had this global type p, and once we realize one element of the Morley sequence, namely c, the restriction is NIP. So the question is: is it always the case that you only need one element of the sequence? I guess not; but can you find actual examples where you need more than one, say two or three? Okay, that is it. Thank you very much. (All right, questions for Itay? A question about whether one element of the Morley sequence suffices; go ahead.)
|
Adler, Casanovas and Pillay proved that if p is a complete stable type over a set B which does not fork over a set A, then the restriction of p to A is also stable. I will address the analogous question, replacing stable with NIP. In addition I will present a new proof for the stable case which uses elementary techniques.
|
10.5446/59322 (DOI)
|
This is joint work with Krzysztof Krupiński, at least most of it; pretty much all of the concrete things I will say are joint with Krzysztof. First, a blanket assumption: the theories we work with are always countable. Sometimes other things will also be countable and I will not say it out loud, but I do not think you should worry too much.

The main goal of this project is to understand strong type spaces; if you do not know what they are, I will explain in a minute. The idea is that in my previous work with Krzysztof and with Anand we studied these spaces, and it seemed like they behave a lot like quotients of compact Polish groups; but back then we did not quite manage to express them that way, so we had ad hoc arguments for the various things that would follow from such a presentation. Last year, Krzysztof and I managed to show, in a very strong sense (especially under an NIP hypothesis), that these strong type spaces, as well as the Galois groups and quotients of type-definable groups, all behave like quotients of compact Polish groups. This observation, and the theory that leads to it, can be used to recover essentially all known theorems about the cardinality, and the so-called Borel cardinality, of strong type spaces and of quotients of type-definable groups.

So we start with a type-definable set X; C is my monster model. We say that an equivalence relation on this set is invariant if it is invariant under the automorphisms of the monster model. We say that it is bounded if it has a small number of classes: smaller than the cardinality of the monster model if the monster is saturated in its own cardinality, otherwise smaller than the degree of saturation. A strong type is a bounded invariant equivalence relation which in addition refines having the same type over the empty set, which I denote by ≡. A strong type space is simply the quotient of a type-definable set by a strong type defined on it. Particular examples are the classical strong types: the Shelah strong type, the Kim-Pillay strong type, the Lascar strong type; but I will not focus on particular ones in this talk.

A related notion, which I will use a little, is that of the connected component of a group. If we have a group G type-definable over the empty set (and the set X above is also type-definable over the empty set; if I do not say what things are type-definable over, it is usually the empty set), then the connected component is the smallest subgroup which is type-definable over the empty set and has small index in G.

Given a strong type space, we have a canonical topology on it, the logic topology: if X is type-definable and E is a bounded invariant equivalence relation on X, then a subset of the quotient X/E is closed in the logic topology if its preimage in X is type-definable, here type-definable with parameters, but equivalently type-definable over any fixed model. It is well known that this topology is compact, essentially because X is type-definable, but it is Hausdorff only if the equivalence relation is type-definable. In addition to the topology, these quotients also carry a well-defined Borel cardinality. I will not define that notion; I will try to work around it and not get too technical.
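The basic definitions from this part, collected (a sketch; notation as in the talk, with C the monster model).

```latex
\textbf{Strong types.}
Let $X$ be $\emptyset$-type-definable in $\mathfrak C$. An equivalence
relation $E$ on $X$ is \emph{invariant} if it is
$\mathrm{Aut}(\mathfrak C)$-invariant, \emph{bounded} if it has a small number
of classes, and a \emph{strong type} if, in addition, $E$ refines $\equiv$
(equality of types over $\emptyset$).

\textbf{Connected component.}
For $G$ an $\emptyset$-type-definable group, $G^{00}_{\emptyset}$ is the
smallest subgroup of $G$ which is $\emptyset$-type-definable and of small
index.

\textbf{Logic topology.}
For bounded invariant $E$ on $X$, a set $D \subseteq X/E$ is closed iff its
preimage in $X$ is type-definable (equivalently, type-definable over any fixed
model). This topology is always compact, and it is Hausdorff iff $E$ is
type-definable.
```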
In particular, we also have the logic topology on the quotient of a type-definable group by its connected component, simply because the coset equivalence relation of the connected component is a bounded invariant equivalence relation on the group itself. Maybe I should make a remark here: this topology is Hausdorff exactly when the relation is type-definable. So in the case where the equivalence relation is type-definable, the topology somehow gives us the full information about the quotient; whereas if it is not type-definable, the topology can frequently be completely useless, it can be indiscrete, and then perhaps the Borel cardinality is the more useful invariant. That is just a remark.

Before I go to the main theorem, I want to look at some toy examples. If you consider a type-definable group G and its connected component, then the quotient G/G^{00}_∅ is a compact Hausdorff Polish group with the logic topology. That it is compact Hausdorff follows from what I said before, because G is type-definable and the coset equivalence relation here is type-definable; but in fact it is also a topological group, that is, the group operations are continuous with respect to the logic topology. Now take any subgroup H of G which contains the connected component; I should also say that it is invariant over the empty set. Then the quotient G/H and the quotient (G/G^{00}_∅)/(H/G^{00}_∅) (the subscript should be the empty set here) are essentially the same, in a very strong way. The important point is that G/G^{00}_∅ is a compact Polish group: so on one side we have a quotient of a type-definable group by a subgroup, and on the other a quotient of a compact Polish group by a subgroup. And this is what we want in general.

For strong types it is a bit more difficult. I do not want to say too much, but there is an object called the Kim-Pillay Galois group; if you do not know what it is, you can take it as a black box of sorts: it is a canonical compact Polish group associated with a given first-order theory. Given a complete type p over the empty set and a strong type E which is coarser than the Kim-Pillay strong type, defined on the set of realizations of this single complete type, the Galois group acts transitively on the set of classes of E. We can see, similarly to what happened with groups, that the strong type space is essentially the same as the quotient of the Kim-Pillay Galois group by the stabilizer of any one point in the strong type space. But the problem with this approach is that it only works when we have the type-definable thing below, somehow: when H contains G^{00}_∅, or when E is coarser than the Kim-Pillay strong type. If you know what the Galois group is, you could think of imitating this approach with the Lascar Galois group instead of the Kim-Pillay group; but unfortunately that group is not Hausdorff, in particular it cannot be Polish. So we need to do something better.

As I said, we can recover a lot of information about these type spaces using this reduction to compact groups, because compact groups are much easier to understand.
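The two toy reductions just described, in symbols (a sketch; the isomorphism signs abbreviate the talk's "essentially the same, in a very strong way"; ≡_KP is the Kim-Pillay strong type and Gal_KP(T) the Kim-Pillay Galois group).

```latex
\textbf{Groups.}
$G/G^{00}_{\emptyset}$ with the logic topology is a compact Polish group, and
for any $\emptyset$-invariant $H$ with $G^{00}_{\emptyset} \le H \le G$,
\[
  G/H \;\cong\; \bigl(G/G^{00}_{\emptyset}\bigr)\big/\bigl(H/G^{00}_{\emptyset}\bigr),
\]
a quotient of a compact Polish group by a subgroup.

\textbf{Strong types.}
If $p \in S(\emptyset)$ and $E$ is a strong type on $p(\mathfrak C)$ coarser
than $\equiv_{\mathrm{KP}}$, then $\mathrm{Gal}_{\mathrm{KP}}(T)$ acts
transitively on $p(\mathfrak C)/E$ and
\[
  p(\mathfrak C)/E \;\cong\;
  \mathrm{Gal}_{\mathrm{KP}}(T)\big/\mathrm{Stab}\bigl([a]_E\bigr)
\]
for any $a \models p$.
```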
Either the subgroup is open, and then the quotient is simply finite; or the subgroup is closed (but not open) and the cardinality of the quotient is exactly continuum; or it is not closed, and then, because the subgroup is analytic, it still has the Baire property, which is enough: the quotient still has cardinality continuum, and in addition it is not smooth in the sense of Borel cardinality. If you don't know what smoothness is, maybe you don't need to worry so much for now. [How important is Borel cardinality for the talk?] It's important for the conclusions, but for the ideas I don't think you really need it; I won't get into those details. In particular, if you do know what smoothness means, this says that the quotient G/H is smooth if and only if H is closed; but also, more concretely perhaps, the index of H is either finite — in which case H is open — or continuum. It cannot be ℵ₀, for example. So we want to show essentially the same facts for strong type spaces and quotients of type-definable groups. Using these observations and the toy examples I gave before, we can take a single complete type over the empty set and a bounded invariant equivalence relation — or a strong type, equivalently, because it's just one type — on the set of realizations of this type, which is coarser than the Kim–Pillay strong type, so that we can apply the things from two slides before. Then we have a very similar trichotomy: either the strong type is simply relatively definable, in which case the quotient is finite; or it is type-definable and the quotient has cardinality continuum; or it is not type-definable, in which case the quotient still has cardinality continuum and is not smooth. The idea is that we use the previous trichotomy in this way: we have X and X/E on one side, and the group Gal_KP of the theory and Gal_KP modulo something on the other. By the way, when I said they are "the same", we actually have a map here which is a homeomorphism; in fact we have a map which comes from a group action of the Kim–Pillay Galois group on X/E, as I said before. So Gal_KP acts on X/E, and the stabilizer of a point has essentially the same properties as E: the stabilizer is closed if and only if E is type-definable, and similarly the stabilizer is open if and only if E is relatively definable. Using this and the previous trichotomy — what do we do? We take this E and we pull it upstairs to the Gal_KP group. There we have the trichotomy from the previous slide: H is open and the index is finite; H is closed and the index is continuum; or H is not closed, the index is continuum, and the quotient is not smooth. I didn't say precisely what I mean by saying that these quotients are "the same", but I will make it precise in a minute in the more concrete setting of the main theorem; it allows us to take the conclusion upstairs and push it back downstairs: if H is open then E is relatively definable, if H is closed then E is type-definable, and if the quotient upstairs is not smooth then the quotient downstairs is also not smooth. Okay. As I've said before, we don't want to look only at strong type spaces; we also want to look at type-definable groups, and basically in the same way we arrive at the analogous conclusion for quotients of type-definable groups by invariant analytic subgroups — over the empty set, just the empty set; and I don't really care about connectedness here.
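For orientation, here is the trichotomy for compact Polish groups that everything is being reduced to, as I understand the statement:
\[
\hat G \text{ compact Polish},\ \hat H\le\hat G \text{ analytic}\ \Longrightarrow\ \text{exactly one of: }
\begin{cases}
\hat H \text{ open}, & [\hat G:\hat H]<\aleph_0,\\
\hat H \text{ closed, not open}, & [\hat G:\hat H]=2^{\aleph_0},\\
\hat H \text{ not closed}, & [\hat G:\hat H]=2^{\aleph_0}\ \text{and}\ \hat G/\hat H \text{ not smooth}.
\end{cases}
\]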
[Then for the Shelah strong type there will be continuum many classes?] Yes, continuum many classes — and that's consistent with this: if E is the Shelah strong type, you go upstairs and you find some subgroup which is an intersection of open subgroups, but it's not open; it is closed and has continuum many cosets. [So in the second case the subgroup can be definable?] No, not definable — it can be an intersection of definable ones, but no. Okay. So, as I've said, the theorem we actually proved was a lot more difficult than this, because this approach is completely useless for arbitrary bounded invariant equivalence relations, or for quotients by arbitrary bounded invariant subgroups, without this assumption of being coarser than the Kim–Pillay strong type or containing the connected component. Okay. So the main theorem — that's still not the full statement, but some approximation of it. We start, as before, with a single complete type p over the empty set, and we look at X, the set of its realizations. Then for this type we can find a compact Polish group, G-hat, such that, given any strong type E on this X, we can find a subgroup H-hat of G-hat such that we have those transfer principles which allow us to do this trick all over again, and maybe some other stuff as well. So: H-hat is closed if and only if E is type-definable; H-hat is open if and only if E is relatively definable; H-hat is analytic provided E is analytic — we don't have an "if and only if" here; I forgot to erase the part about the Baire property, but that is also true. Moreover, we have this thing about Borel reducibility — if you don't know what it is, don't worry — the quotient G-hat/H-hat is Borel reducible to X/E, and if the type p has NIP, then they are actually Borel equivalent. This is the main theorem for strong type spaces, but we also have an analogous theorem for type-definable groups. If you have a type-definable group G over the empty set, we can find a compact Polish group G-hat such that, given any subgroup H of G which has bounded index — I forgot to write that it should also be invariant and analytic — we can find a subgroup H-hat which has basically all the same properties: H-hat is closed if and only if H is type-definable; H-hat is open if and only if H is relatively definable in G; H-hat is analytic provided H is analytic; and we have this kind of reduction as well. So, as I've said, now we can just take the trichotomy I proved before, change a few steps, and basically we're done. Before we had "coarser than the Kim–Pillay strong type"; now we just take "bounded". So we take a complete type over the empty set as before — the setting is still the same — and we have an invariant equivalence relation on X which is now not coarser than the Kim–Pillay strong type, just bounded and analytic; then we have exactly one of the following. [When you say analytic, are you still inside one of these coarser equivalence relations?] No, it's fine; it's just invariant. By analytic I mean that, since E is invariant, it corresponds to a subset of — however you prefer to call it — S_{X²}(∅): you look at the space of types of pairs of the relevant elements, and E corresponds to a subset of that type space. It's a Polish space, so you can think of analytic subsets of it, or Borel if you prefer.
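Schematically, the transfer principles of the main theorem read roughly as follows (my reconstruction; ≤_B and ∼_B denote Borel reducibility and bi-reducibility):
\[
\begin{aligned}
\hat H \text{ closed in } \hat G\ &\iff\ E \text{ type-definable},\\
\hat H \text{ open in } \hat G\ &\iff\ E \text{ relatively definable},\\
E \text{ analytic}\ &\Longrightarrow\ \hat H \text{ analytic},\\
\hat G/\hat H\ \le_B\ X/E,&\qquad \hat G/\hat H\ \sim_B\ X/E\ \text{ if } p \text{ has NIP}.
\end{aligned}
\]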
So we have this kind of equivalence relation, and then again we have the trichotomy: either E is relatively definable and the quotient is finite; or it is type-definable and the quotient has cardinality continuum; or it is not type-definable, in which case the quotient still has cardinality continuum and it is not smooth. As I've said, the proof is essentially the same; the only difference is that here, instead of Gal_KP, we put G-hat, and here we have G-hat mod H-hat. If you recall what I said on the previous slide, it allows us to take this trichotomy for H-hat at that level and push it down here. For type-definable groups we also have a very similar statement. Namely, we start with a type-definable group G and take an invariant subgroup H which has small index and is analytic; then exactly one of three holds: H is relatively definable and the index is finite; H is type-definable and the index is continuum; or H is not type-definable, the index is continuum, and the quotient is not smooth. The idea is essentially the same: here instead of X we now have G, and here G/H. It implies in particular that if we have a type-definable group and a subgroup of it which is analytic, then its index cannot be infinite but smaller than continuum — for example, it cannot be ℵ₀, as I said before. Maybe I should say that this trichotomy I actually stated three years ago here, but now it's a different proof of it: before we had a more or less ad hoc argument for each of the cases, and now we have just the reduction to compact groups. [You mean the former was what we did in the joint paper?] Yes — actually, we didn't do it for type-definable groups there, but I think it could easily have been extended to type-definable groups; the other corollary was the main result. So now there's a different proof using some of these ideas — it's kind of similar, but somehow more direct, I think. Okay. I should also say that this trichotomy is not true if there's no assumption about H: you can find sort of Vitali subgroups of definable groups which have finite index but are not definable. So this assumption that H is well behaved is somehow essential. Okay. I don't want to say too much about the proof, because it's quite complicated, but I will just show some ideas that appear in it. One of the more important tools — actually for the part that I did not linger on too much, this Borel cardinality stuff — is this so-called dichotomy. Okay, it doesn't look like a dichotomy the way I'll state it; there's something called the Rosenthal dichotomy, and it's not immediately clear how it appears here, but anyway. So now we're moving away from model theory for a minute — forgive me. We take a compact Polish space X and a set A of continuous real-valued functions from this space to the reals which is bounded in the supremum norm. Then the following are equivalent. (The closure here should be the pointwise closure in R^X, the space of all real-valued functions.) So we consider the set A of functions: it has a supremum norm because it lies in the Banach space of continuous functions, but it is also contained in the space of all real-valued functions on X with the topology of pointwise convergence, and we consider the closure there — this closure Ā.
So Ā consists of Borel functions — in fact of Baire class 1 functions — if and only if the space Ā has the Fréchet–Urysohn property, which means that given any subset of Ā, the closure of that subset is just the set of limits of sequences from it. That's a rather strong property; it's kind of like being metrizable, but weaker. Another equivalent condition is that A does not contain an independent sequence. This should be a bit suggestive: I don't want to say exactly what an independent sequence is, but it's pretty much what you come up with if you try to think of a notion of NIP in Banach spaces, and that is not a coincidence — this condition is closely tied to NIP. Another condition is that A contains no ℓ¹-sequence, which means, roughly speaking, that the closed subspace generated by A does not contain a copy of ℓ¹ as a Banach space. And if we have such an A, then this closure Ā, equipped with the pointwise convergence topology, is a compact space, because these functions are bounded in the supremum norm. In this case the closure is called a Rosenthal compactum; also, any topological space which is homeomorphic to such a set is called Rosenthal compact. That's one thing from outside of model theory that is useful in the proof. The other one comes from topological dynamics — again, I don't want to get into too much detail; I don't have so much time. [Is the previous thing only used for NIP?] Yes, but it also makes the general case slightly easier; it will appear. Okay. So if we have a group G of homeomorphisms of a compact Hausdorff space X, then the Ellis semigroup associated with this action is just the pointwise closure of G in the family of functions from X to X. So we have G inside X^X, just functions, and then we have EL, or E(G,X), which is this closure. It's not so hard to check that it is a semigroup if you give it function composition as the semigroup operation, and it's also not hard to check that it is compact Hausdorff and a left-topological semigroup, which means that multiplication is continuous when you vary the left argument — it's not continuous on the right. Okay. And we say that the action of G on X is tame if this compact semigroup turns out to be Rosenthal in the sense that I've just given. What really matters for me, somehow, is that it then consists of measurable functions and has this Urysohn property. Okay. And these semigroups, in this context, also come with so-called Ellis groups, which I'm certainly not going to define here; they are certain subsets of the Ellis semigroup which are groups, each with its own identity, different for each one, and they come equipped with a compact semitopological group structure — not the inherited topology, a different one — which is not necessarily Hausdorff. However, each has a canonical Hausdorff quotient, and in fact they are all isomorphic as semitopological groups, so we usually say just "the" Ellis group, because the isomorphism class is just one. Okay — yes, so we always have this canonical compact Hausdorff quotient. So how does this fit into the whole model-theoretic translation business? Recall what we wanted to do: we wanted to express this quotient X/E as a quotient G-hat mod H-hat, where X was the set of realizations of a single complete type. Okay, so that's what we started with.
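To keep the functional-analytic side straight, the equivalent conditions in the dichotomy just described can be summarized like this (for A ⊆ C(X) bounded in the sup norm and Ā its closure in the product topology on the space of all real-valued functions on X):
\[
\bar A\subseteq\mathcal B_1(X)\ (\text{Baire class }1)
\ \iff\ \bar A \text{ is Fr\'echet--Urysohn}
\ \iff\ A \text{ has no independent sequence}
\ \iff\ A \text{ has no } \ell^1\text{-sequence},
\]
\[
\text{and in that case } \bar A \text{ is compact in the pointwise topology — a Rosenthal compactum.}
\]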
So what we do is choose a countable ambitious model — I don't want to write the definition, but it's a model which is homogeneous in a weak sense. [Why do you call them ambitious?] Because something is an ambit. So we choose a countable ambitious model M which contains a realization of this type p — oh, there's something missing here — and we consider the action of the automorphism group of this model on the space of types S_m̄(M): the space of types over M which extend the type of m̄ over the empty set, where m̄ is an enumeration of M. If you were here three years ago, I was talking about a similar thing then, but then it was the monster model; I think it's maybe easier to understand now. Okay. So we have this space of types, and now, for simplicity, I assume NIP, because the next step is a bit easier. Because we assume NIP — and I said NIP is very strongly related to this Rosenthal business — it implies that this action is tame, so the Ellis semigroup of this action is actually a Rosenthal compactum. This gives us several things, but among others it implies that the group uM/H(uM) — the Hausdorff quotient of the Ellis group — is actually itself already compact Polish, because the Fréchet–Urysohn property can somehow be transferred to this thing: not exactly, but it implies that this group has something called countable tightness, which means that the closure of any set is just the union of the closures of its countable subsets, and for compact Hausdorff topological groups this is equivalent to metrizability. Okay, so in this case — yes, to answer the question: NIP is here to make this group G-hat easier to construct, because without NIP you have to do an additional step to get a Polish group. [So without NIP, what is the Polish group? It's not this one?] Yeah, it's a quotient of this group. I don't know that this one would be Polish without NIP — I don't think it would be. [Polish just means separable?] Metrizable — separable and metrizable; separable alone is not equivalent, no — 2^{ω₁}, for example, is separable. Okay. So, as I've said, without NIP we have to work a bit harder to get this group G-hat. That's the construction of G-hat, which is actually kind of the easy part, and then we have to do quite a bit more to get all the properties I listed. Just to give some broad idea: we can show that we have a commutative diagram like this. Here we have the Ellis semigroup of the space S_m̄(M), and we have a continuous surjection from that space onto X_M — so here X_M is just the set of types tp(a/M) where a realizes... did I write it? Oh, actually I did; I forgot. X_M is the space of types over M of elements realizing this type p over the empty set, or equivalently just the set of types over M of elements of X. Okay, and most of the maps here are not so difficult to understand. The map from EL — which was a family of functions from S_m̄(M) to itself, if you go back to the definition — down to S_m̄(M) is just evaluation at the type of the model over itself. And the horizontal arrow upstairs — you can give an explicit formula for it, which you won't understand if you don't know the theory — is a certain natural semigroup epimorphism from the Ellis group onto the group G-hat.
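As far as I can reconstruct it, the commutative diagram being described looks roughly like this (take this as a sketch of the maps mentioned in the talk, not the official picture):
\[
EL\bigl(S_{\bar m}(M)\bigr)\ \twoheadrightarrow\ \hat G\ \twoheadrightarrow\ \operatorname{Gal}(T)\ \twoheadrightarrow\ X/E,
\qquad
EL\bigl(S_{\bar m}(M)\bigr)\ \longrightarrow\ S_{\bar m}(M)\ \twoheadrightarrow\ X_M\ \twoheadrightarrow\ X/E,
\]
\[
\hat H\ =\ \operatorname{Stab}_{\hat G}\bigl([a]_E\bigr)\quad\text{for the chosen realization } a\models p \text{ inside } M.
\]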
And the other arrows are kind of what you expect. Here you just take the type of an element and go to the class of that element. And here — this type p is realized in M, so we just basically take the subtuple of m̄ which is a realization of this type and restrict to the corresponding subtuple of variables. So this part is not that difficult. But it so happens that the map from G-hat to X/E actually factors — actually, the way we constructed it, it is obtained as a factor of that map: it goes through the orbit map of the Galois group, which I don't want to define here, but it's another group associated with the theory, and it acts on X/E. So we have an epimorphism from G-hat onto the Galois group such that this function from G-hat to X/E is the composition of a group epimorphism and a group action. And it follows that the group G-hat actually acts on X/E, and H-hat is simply the stabilizer of the class of the element that we chose here to construct these functions. And then, after we've done all this, we have to work a lot, actually, to show that this G-hat and H-hat have all the properties that we've seen before. Okay, I think that's it for what I want to say about the proof — I know it's kind of vague. So, some concluding remarks (one is missing here). There's a weaker variant of the trichotomy which applies in the case when the domain is not a single complete type. The theorem I had for strong type spaces was always for strong types defined on the set of realizations of a single complete type, but there is a variant which applies when the domain is a bit larger; it's kind of weaker, but we still have this equivalence of smoothness and type-definability, whatever that means. And we could also consider smaller sets: we can take a subset of the domain and see what happens to E restricted to that subset, and under reasonable assumptions we still have essentially the same conclusion — we have a compact group, we have a subgroup, and so on. Furthermore, this group G-hat, the way I described it, depended on the type p that we chose in the beginning; but actually we can choose it in a way that does not depend on p — there is, in a way, a natural way to choose it. The downside is that it still depends on some choice of a model: we have to choose this ambitious model at some point, and there is no obvious choice of that. [Can you show that if you change the model you get a different group?] I don't know, but I think it should be pretty easy if you look at something trivial: if you have a theory in which the Galois group is trivial, you can take a rigid model and a non-rigid model, or something like that. And in my thesis I've given a more abstract treatment of this, which allows us to prove all these results and some similar ones just as corollaries of something more general — more general in the sense that it's not model-theoretic. Something that I forgot to write here — though you may have seen some hint of it in what I presented — is that the construction of this group G-hat is actually quite concrete, relatively speaking at least. So it can actually be used to compute these objects: we can take a particular theory T and try to work out what the Galois group is, exactly, as a group, using this whole construction.
And conceivably it can also be used to somehow understand various aspects of this group, or of strong type spaces, which I have not considered yet. Okay, so that's the end of my talk. Thank you.
|
In recent work with Krupiński, we showed that strong type spaces can be seen (in a strong sense) as quotients of compact Polish groups. I will give a brief account of the argument, as well as describe some applications, such as showing that a non-definable analytic subgroup of a type-definable group has index continuum (and in particular, that an analytic subgroup cannot have countably infinite index).
|
10.5446/59323 (DOI)
|
But to be fair, as far as I know, there is just one unique paper in the world that deduces stuff from metastability, which is the one I'm going to talk about. So anyway, I'm doing metastability in full generality, I guess. No — there's more than one paper that mentions this word, because there are also papers of mine where I prove that certain things are examples of metastable theories. There is a definition — well, you're going to see one in five minutes. So this is joint work with Udi, but, as with many of my papers with Udi, this one has a complicated history, and some of you might have heard Udi talk about such things 12 years ago; there was a paper of Udi around since 2000. [So that's the same paper that's now finished? What was unfinished before?] Well, there were mistakes and holes in the proofs, and now this is all corrected — hopefully. So to be fair, it's joint with Udi, but it's really a lot of Udi's work, and then I came along and tried to clean things up. I also apologize to people from Paris and Berkeley who have probably already heard me talk about such things, but I think there aren't that many around — except also people from LA, I guess. But I'm going to try to speak about things in a slightly different way from what I usually do, which is that usually I really focus on algebraically closed valued fields, and today I'm going to talk more generally about metastability — though the theorems are essentially going to be the same. Okay, so first let's answer Anand's question and define — well, I'll define; you will still not know what it is. All right. So let's start with a few definitions. The first definition is that of a stably dominated type. For the whole talk I fix a theory T which eliminates imaginaries, and I'm going to work in a monster model of that theory. I take a tuple A and a small set C, and I look at the type p of A over C; and I take an F which is C-definable — well, actually it might be pro-definable — defined on p. [What is pro-C-definable?] It means that it might have more than one component — possibly infinitely many components. [F is a function?] Yes, it's a pro-C-definable function; I forgot the word "function". [So it has infinitely many inputs and outputs?] Well, infinitely many outputs; the inputs are realizations of the type, so if A is an infinite tuple it also has infinitely many inputs. Each component is a definable function — in all the cases I'm actually going to look at, you can take F definable and forget the "pro". [It's a definable function?] Yes, it's a function whose graph is pro-definable, which is all the same thing. Okay, so I say that p is stably dominated via F if, first of all, F(A) lives in what's called the stable part of C, St_C, which is the union of all stable, stably embedded C-definable sets — stably embedded, I mean; sorry, too many "stable"s around — and, for every set B of parameters such that B is independent from F(A) over C... So here, what do I mean? The independence here is really non-forking in the stable part: this is really happening in St_C, and what I should write there is not B but the trace of B inside the stable part, St_C ∩ dcl(CB), so that this makes sense. I'll write B anyway, but what I really mean by B is St_C ∩ dcl(CB).
So yes, this independence is non-forking — but it's in a stable theory, so I mean, there aren't that many options. Yes, the stable part is stable because it's a union of stable, stably embedded sets. Okay, I'll finish the definition and then maybe I can explain a few things. The condition is: then the type of B over C, F(A) implies the type of B over C, A. So a type is stably dominated by a function if, whenever you take something independent from the image into the stable part, the type is completely determined by this image into the stable part. [It's too difficult to see things written this low — should I rewrite at the top of the board, or try to avoid the bottom from now on?] [What you're saying is that the independence is computed inside the stable part?] No, no — what I'm defining here is actually this symbol: what does this symbol mean when B is not in the stable part? What it means is that the stable part of B is independent from F(A). Right. [You're working in dependent theories?] Yes. I mean, this thing is stable, so it depends what you mean by this symbol — maybe I should just have said it's non-forking; it's also just non-forking because this set here is a bunch of stable, stably embedded sets. Okay, whatever. Okay, so that's a stably dominated type. The definition is a bit terrible and ugly, but there are good reasons why we consider it, because it really says that the type comes from a stable set, whereas other definitions, for example generic stability, don't really tell you that. They give you a lot of properties that are common between stably dominated types and generically stable types, and we'll see that actually in the theories that I consider they're equivalent. But the important thing here is that you really know you have a map to the stable part that really dominates your type — this is the important part of the definition. Okay, so now I can define metastability. So: T is metastable if two things happen — the first thing is really the important thing; the second is there for technical reasons. Actually, a theory is never metastable just like this: it's metastable over Γ, which is a ∅-definable set. [A ∅-definable set?] Yes, a ∅-definable set — or it could be type-definable; I don't think it changes much to the definition, but here I'm going to take it ∅-definable. And so the first condition is that, for every set of parameters C, there exists a B containing C such that, for all A, the type of A over B together with dcl(BA) ∩ Γ is stably dominated. [Why B? Because your C should be B.] Yes — stably dominated, and I should have said: stably dominated by the full description of the stable part, that is, by any F enumerating St_B ∩ dcl(BA). It's easy to check that if you're stably dominated by some function, you're actually stably dominated by a function that enumerates everything, and this is what "stably dominated" without reference to a function means. [Is the intersection here just with dcl, or does it include B too?] Well — no, I put dcl(BA), so it contains B.
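Written out, the stable domination condition from the first definition is roughly the following (my reconstruction; St_C denotes the stable part over C):
\[
\operatorname{tp}(A/C)\ \text{is stably dominated via } F:\quad F(A)\in\operatorname{St}_C,\ \text{ and for every set } B,
\]
\[
\operatorname{St}_C\cap\operatorname{dcl}(CB)\ \text{is (non-forking) independent from } F(A) \text{ over } C
\ \Longrightarrow\ \operatorname{tp}\bigl(B/C,F(A)\bigr)\ \vdash\ \operatorname{tp}(B/C,A).
\]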
So this is essentially saying that you have a set Γ that you want to ignore, and once you add everything that comes from Γ, you are stably dominated. Okay? This notion was invented by — well, the first example was proven by Haskell, Hrushovski and Macpherson, though they never — oh, I forgot number two; that's what you're pointing at. But as I said, number two is there just for technical reasons, so it's much less important: it's just that every type of A over C, where C equals acl(C) — remember I have imaginaries in — has a global C-invariant extension. [And in the first condition the type is over a larger set than B?] Oh yes, you're right, you're right — I was just being silly. So let me also add that Γ is orthogonal to the stable part, because that's really the case you want; which means in particular that the stable part over this bigger set is the same as the stable part over B, because everything you add comes from Γ and adds nothing to the stable part. This is just for ease of notation — if you don't assume that, you have to put the bigger set down there. [The things you add just come from Γ?] Yes, they're all in Γ: it's B union stuff in Γ. [Do you have to take dcl into Γ?] No, I'm inside Γ — Γ is definably closed, if you like. I mean, here I'm more or less considering that Γ equals Γ^eq; you could also say that I'm defining "T is metastable over Γ^eq". It means the eq of the full induced structure on Γ, but it's not stably embedded in general. Okay — you're happy now? Okay. So, just because I'm going to use the term later: such a B is called a metastability basis — any B over which every type, modulo Γ, is stably dominated. [What exactly does it mean that F enumerates this intersection?] Well, every element in there is a B-definable function of A, so you just take all the functions that enumerate this thing: you look at the definable closure of BA, intersect with the stable part — it's a tuple, you can enumerate it. [Is this for all A?] Yes, for all A — any tuple in a model of T. So this notion was invented for ACVF, and the good thing is that ACVF is indeed metastable. Actually the notation also comes essentially from ACVF, because in ACVF the stable part is everything that's internal to the residue field, Γ is going to be the value group, and metastability bases are going to be maximally complete models. We now have a few other examples. For example, the theory of existentially closed valued differential fields where the derivation is monotone — VDF; I should put "monotone" somewhere — meaning that the derivation does not decrease the valuation.
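Before the examples continue, here is the definition of metastability in symbols, as I reconstruct it from the board:
\[
\begin{aligned}
T \text{ metastable over } \Gamma:\quad
&(1)\ \forall\, C\ \exists\, B\supseteq C\ \forall\, A:\ 
\operatorname{tp}\bigl(A\,/\,B\cup(\operatorname{dcl}(BA)\cap\Gamma)\bigr)\ \text{is stably dominated};\\
&(2)\ \text{every } \operatorname{tp}(A/C) \text{ with } C=\operatorname{acl}(C) \text{ has a global } C\text{-invariant extension}.
\end{aligned}
\]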
This theory is usually referred to under a different name, but it is also metastable, and you can check that — well, once again the stable part is going to be what's internal to the residue field, which now is going to be a model of DCF_0 — equicharacteristic zero, sorry — and Γ is also going to be the value group. Another example is separably closed valued fields of finite imperfection degree; I don't know what happens in infinite imperfection degree. So, okay — all the examples are essentially valued fields, as you can see. You can also fabricate other examples: morally, any Henselian field whose residue field is stable should be metastable over its value group, but this requires a proof that I don't think has ever been written. Okay. So now we know what a metastable theory is, and we know a few examples, so now I'm going to start talking about groups. The goal — what I want to talk about — is to try to explain that, if you have a metastable theory, this decomposition of objects as a stable thing over Γ things reappears also for groups. [Wait, is every theory metastable over something?] No, there's no reason for that to be true. [Well, you could choose Γ to be everything.] Yes, for example, it could be, but then everything is metastable over the full thing, over the full theory, so it's kind of empty. [Also over the value group?] Yes — I said it out loud. I also wanted to mention — but I really want to get going and actually say things about groups — that in an NIP metastable theory (because we don't know that metastable over something NIP is itself NIP; that's not entirely clear), stable domination is actually exactly the same as generic stability, which is also exactly the same as orthogonality to Γ. So stable domination can actually be defined in a much nicer way — but we still can't define metastability without stable domination, so I had to define it anyway. And, as I said, it does give you more tools than just generic stability, because you actually get actual maps to the stable part. Okay, so now let's define stably dominated groups. If G is a definable group — everything I'm going to say actually works for pro-definable groups, and actually, if you want to prove the theorem I'm going to state, you need pro-definable groups, but I'm not going to put the word "pro" anywhere; that's going to be better for everyone, I think — and p is a global definable type concentrating on G, we say that p is a definable generic if, for all g in G of the monster model, the translate g·p — which is just the type of the elements of the form g·a where a is a realization of p — is definable over some small C that does not depend on g. Equivalently, the orbit of p under the action of this group is small, and p is definable. [You prefer g∗p?] Yes. Well, that's why I said "definable generic", so that it's a word. [But for a type you want "definable f-generic", maybe?] Call it definable f-generic if you want — yeah, I know, I am aware.
But I find "definable f-generic" weird because you don't really need forking to define them, so it's weird to have the "f" around — I think at some point I called them dgenerics — but anyway, it's not important. And so, secondly, G is a stably dominated group if it has a stably dominated definable generic — definable f-generic, if you want. I should have said, and I forgot, that stably dominated types are always definable — I mean, they have a unique definable extension, okay, finitely many depending on stationarity issues: you have a definable, acl(C)-definable extension. So the "definable" here is kind of free. Anyway, a stably dominated group is a group which has a stably dominated type with a small orbit. Okay. The first result I want to mention, which is not the hard result but which kind of starts to give an idea that these notions work well with groups, is that if you have a stably dominated group, the maps that dominate the generics can actually be taken to be group homomorphisms: if G is stably dominated, there exists a stable group H — a stable group — and a definable group homomorphism ρ from G to H such that any generic of G is stably dominated via ρ. [Any definable f-generic?] Yes, any definable f-generic — but actually you can show that in that case any f-generic is definable and there's a unique orbit of f-generics. [And they're all generic.] Yes. So that's the first sign that things are nice: out of a stably dominated group, we do find a stable group that dominates what's happening. But we would like more than that, and in particular, given any group, we would like to be able to decompose it into stuff that comes from Γ and stable stuff. If we look at what's happening in the definition of metastability: whenever we have an element, we have a function to Γ such that in the fibers things are stably dominated. So what we would like — and I'm going to write "wish", because it's very false — is: when we have a definable group G, we would like to be able to find an H stably dominated such that the quotient G/H is Γ-internal. [Γ is?] Γ is the thing over which I'm metastable; for now I suppose T is metastable over Γ. That would be a good group version of the definition of metastability. But this isn't completely true, sadly. For example, in ACVF there is an easy counterexample showing that nothing like that could ever happen: take the additive group of the field. In ACVF, the additive group G_a has no Γ-internal quotient, and it's not stably dominated. What is happening is something a bit more subtle, which is that G_a can be covered by the subgroups γ·O: for every γ in Γ, you have what I call γO, which is the set of x with valuation at least γ — an additive subgroup — and these cover G_a and are stably dominated. [These aren't just translates — they're actually subgroups?] The notation is multiplicative, but don't read too much into it: this is a chain, an increasing union of subgroups of G_a that covers it, and each of them is stably dominated.
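The "wish" on the board, written as a short exact sequence (this is the naive statement, which, as the G_a example shows, is too much to ask in general):
\[
1\ \longrightarrow\ H\ \longrightarrow\ G\ \longrightarrow\ G/H\ \longrightarrow\ 1,
\qquad H \text{ stably dominated},\qquad G/H\ \Gamma\text{-internal}.
\]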
So now, if we look at our wish, we can't really hope for H to be stably dominated, but we can hope for H to be an object like that — covered by stably dominated subgroups. But even this is still false, because if you look at SL_2(K), nothing works the way you want, and essentially the best — yes: this would imply in particular that you have a maximal subgroup which is covered by stably dominated subgroups, but in SL_2(K) you can find stably dominated subgroups that are not included in a larger one. For example, SL_2(O) is a stably dominated subgroup, and all its conjugates also are, but they are not included in anything bigger which is stably dominated. So for SL_2(K) there is no hope of anything like that happening, and that's why I am further restricting to abelian groups, because there things are going to happen the way we want. And so for abelian groups we get the result we want, but first I need to define the class of groups that look like this. [Audience remark.] Yes, I believe that — actually, yes, if you look at what's happening in SL_2, indeed. But so, looking at what's happening for SL_2, you can make the following conjecture, which is that instead of having such a nice picture you find an — I need to define something first, sorry. So first I need to define groups that look like that. G is limit stably dominated if there exists an infinitely definable family of subgroups H_γ, where γ realizes some type q — q a type on Γ^n over some small c — such that, first, each H_γ is stably dominated and connected (I never defined what connected means; here it means that you don't just have a small orbit, the orbit is actually a singleton); second, every stably dominated connected subgroup H of G is contained in some H_γ — so you find a family that covers all possible stably dominated subgroups; and also, for technical reasons, you require that the family (H_γ) is filtered: whenever you have a small collection of the H_γ's, you find a larger one which contains them all. Actually — wait, that's not what I said I'd define; I did not define "limit stably dominated", I defined something else, I'm sorry, I was thinking. What I just defined is a limit stably dominated family: if G is a definable group, a limit stably dominated family of G is something like that. And the remark — it's the continuation of the remark — is that whenever I have such a family, if I look at the union H of all the H_γ, I get a subgroup of G, because the family is filtered, and this group is unique and does not depend on the family, because of number two.
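Schematically, a limit stably dominated family of G consists of the following data (reconstructed from the board; q is a complete type in variables from Γ):
\[
\begin{aligned}
&(H_\gamma)_{\gamma\models q},\quad q \text{ a type on } \Gamma^n \text{ over some small } c,\ \text{such that}\\
&(1)\ \text{each } H_\gamma\le G \text{ is stably dominated and connected};\\
&(2)\ \text{every stably dominated connected } H\le G \text{ is contained in some } H_\gamma;\\
&(3)\ \text{the family is filtered: any small subfamily is contained in a single } H_{\gamma'}.
\end{aligned}
\]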
So the group does not depend on the family because of number two, because the family actually covers every possible stably dominated subgroup. And you can check it's also infinitely definable. [Does that come out of the fact that it's a union like this?] Yes. So the theorem I'm going to write now is that if G is abelian, H exists. [So H is infinitely definable?] It's an infinitely definable family of infinitely definable subgroups, if you want — that's what I mean by an infinitely definable family. The theorem is what I'm going to write just now: if G is a definable abelian group, then there exists a limit stably dominated family. [In a metastable theory?] Yes — since I've assumed T is metastable over Γ, I'm in a theory which is metastable over Γ. And in fact you get more: you get what we really wanted from the beginning, which is that the quotient G/H is Γ-internal. [And does this depend on H being the limit, not just on the existence of a family?] No, I think this is true as soon as a family exists: the quotient is Γ-internal. And if you add finite-dimensionality hypotheses — like the fact that the stable part has finite Morley rank, that every definable set there has finite Morley rank, and such things — you can actually get H to be definable and not just type-definable. [But the quotient is not itself definable.] No — it's a hyperimaginary; well, a set of hyperimaginaries. And Γ-internality says that you have a map from G — from G/H, really — into some powers of Γ such that the fibers are finite. When H is definable, then this is much more reasonable. Okay, so I wanted to say a word about the proof, but clearly there is no more time for that, so I'll stop here. Thank you.
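In summary, the main theorem for abelian groups reads roughly as follows (again my reconstruction):
\[
G \text{ definable abelian},\ T \text{ metastable over } \Gamma\ \Longrightarrow\ 
\exists\ \text{a limit stably dominated family } (H_\gamma)_{\gamma\models q} \text{ of } G,
\]
\[
H=\bigcup_{\gamma\models q}H_\gamma\ \le\ G\ \text{ is } \infty\text{-definable, and } G/H \text{ is } \Gamma\text{-internal}.
\]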
|
In their work on the model theory of algebraically closed valued fields, Haskell, Hrushovski and Macpherson developed a notion of stable domination and metastability which tries to capture the idea that in an algebraically closed valued field, numerous behaviors are (generically) controlled by the value group and/or the residue field. In this talk I will explain how (finite rank) metastability can be used to decompose commutative definable groups, in term of stable groups and value group internal groups. Time permitting, I will quickly describe the applications of these results to the study of algebraically closed valued fields, in particular, the classification of interpretable fields.
|
10.5446/59325 (DOI)
|
It says something a little bit weaker than saying that any unstable NIP theory defines or interprets an infinite linear order, because the order will not actually be definable. But let me start with a little bit of background. There's a theorem of Shelah that says that if you have a structure which is NIP and not stable, then some formula has the strict order property — so T has the strict order property. I'll recall what this means: a formula φ has the strict order property if there's a sequence of parameters (b_i), for the variable y say, such that the sets defined by φ(x, b_i) are strictly increasing as you vary the parameters along the sequence. Having a formula with the strict order property is equivalent to there being a definable partial order with an infinite chain, because from this you can define an order: I think "pre-order" and "quasi-order" mean the same thing, no? — so a pre-order (quasi-order), by saying that b is less or equal to b′ if the set defined by φ(x, b) is included in the set defined by φ(x, b′). This is of course a transitive relation, and the assumption that φ has SOP exactly tells you that this order has an infinite chain; and conversely, you can figure out what to do. So this is telling us we have a partial quasi-order, but if you allow quotients, then you have an interpretable partial order with an infinite chain. And then there was a question, which I'm not sure to whom it's attributed — I think Shelah must have raised it; I heard it first from Hrushovski; I'm sure other people came up with it — can we get a definable linear order with the same assumption: if T is NIP and unstable, does it interpret an infinite linear order? Note that you cannot ask for an infinite definable order, because if you just take a structure which has an equivalence relation, all classes of size 2, and an order on the quotient — so you have an order, but you only see a 2-to-1 cover of it — then there's no definable order on that structure, but of course there's an interpretable one. Okay, so this is still open. However, what I can prove is the following; the assumption is still the same: T is NIP and unstable. Then there is a finite set A of parameters, a type p over A, and some type-definable relation R such that R defines a strict linear pre-order (or quasi-order) on the realizations of p. [Why type-definable?] Yeah — here I have this because it's the strict one. There's a choice to make: if you take the one that's not strict, it's ∨-definable; if you take the one that's strict, it's type-definable. [Is a type-definable one the best possible thing one could hope to have?] Yeah, that's why I stated it like this, so it looks better — well, we always thought of it as progress. [Why is it "interpretable"?] Yeah, exactly — so let me explain what I mean by a pre-order. What does it mean, a linear pre-order? It means that it's transitive and — okay, where's the interpretable part? If we have this, then from it we can define the relation E, defined by saying that you have neither of the two: you don't have that x is strictly below y, and you don't have that y is strictly below x. Now this is a ∨-definable equivalence relation.
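In symbols, the strict order property and the pre-order it induces (from the beginning of this discussion) are:
\[
\varphi(x,y)\ \text{has SOP}\iff \exists\,(b_i)_{i<\omega}\ \text{with}\ \varphi(\mathfrak C,b_i)\subsetneq\varphi(\mathfrak C,b_{i+1})\ \text{for all } i;
\qquad
b\preceq b'\ :\iff\ \varphi(\mathfrak C,b)\subseteq\varphi(\mathfrak C,b').
\]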
And the quotient of p(C) by this equivalence relation E is linearly ordered by R — that's what it means to be a linear pre-order. And the quotient is infinite: the quotient is infinite and linearly ordered by R. [And everything is over R?] Over A, sorry — this is all over A. So, is the statement clear? For me it's somewhat easier to think that you have a ∨-definable equivalence relation and then a linear order on the quotient; and if you think about what that means, it means that either the strict order is type-definable or the non-strict order is ∨-definable, as you want. An immediate corollary is that if T is ω-categorical, then the conjecture is true, because the point is that A is finite, so if the theory is ω-categorical, everything is definable. [Can p be any type?] Yes — so p can be a 1-type; in fact it can extend any unstable type that you start with. What I should say is that this looks like a strengthening of Shelah's theorem, but it's a theorem of a very different nature. Shelah's is a non-structure theorem — it tells you that you have some complexity. This one you should really think of as a structure theorem: it's giving you a linear order, which is a very rigid object, something you can hold on to. And you could hope to use those linear orders to actually build some structure theory — some classification theory of models of an NIP theory T. So, some partly speculative applications — well, almost all of them are speculative; only the first one is only partially so. The nicest case is when the theory is ω-categorical, and so it's natural to start there: one can hope for a classification of NIP finitely homogeneous structures extending the stable case. The stable case is known by work of a number of people — Lachlan, Cherlin, Harrington, Hrushovski — well, anyway. When I started doing this I gave a talk about it in Paris, and it's in progress, joint work; I don't really want to talk about it today. It seems that one could hope to really have some coordinatization — I have no idea how to write this word; what's the next letter? I'm very tired — coordinatization of models by linear orders, with the idea that, if you compare with, let's say, the ω-stable case, strictly minimal sets would be replaced by linear orders. [Are these two things connected?] Yeah, this is the ω-categorical version of that — but here you can hope for much more, because here I don't know what that means. In the ω-stable case you coordinatize models, maybe, by looking at dimensions of regular types; here those dimensions of regular types would be replaced by isomorphism types of certain linear orders — it's no longer going to be a number, it's going to be the isomorphism type of a certain order. And then in the stable case, in very nice cases, you know the model up to isomorphism by knowing those dimensions.
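For reference, the statement of the theorem discussed above, roughly in symbols:
\[
T \text{ NIP, unstable}\ \Longrightarrow\ \exists\,A \text{ finite},\ p\in S(A),\ R \text{ type-definable over } A:\ 
R \text{ is a strict linear quasi-order on } p(\mathfrak C),
\]
\[
x\,E\,y:\iff \neg R(x,y)\wedge\neg R(y,x)\ \text{ is a } \vee\text{-definable equivalence relation, and } p(\mathfrak C)/E \text{ is infinite, linearly ordered by } R.
\]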
So, continuing with what you could hope for: for example, what happens in real closed fields — or let's take maybe divisible ordered abelian groups... actually, let's take real closed fields; it's a little simpler to state. What classification could you hope for? If you have a real closed field, you can look at the valuation given by the convex hull of the reals, and then it becomes a valued field; the residue field is a subfield of the reals, and you have the value group. If you know the value group and you know the residue field — the residue field lives in some bounded thing — you still don't know the whole field, but there's a maximal one with this given value group and residue field. But actually, if you just fix the value group, then what I said is also true, because the residue field is anyway a subfield of R: for a fixed value group, there's a maximal model of RCF that has this fixed value group. [Maximal meaning?] Maximum under inclusion. And this is the idea — one could hope to have something similar, where here the value group is playing the role of my ordered set. So one could hope that you somehow have ordered sets, and they don't give you the isomorphism type, but say that, if the structure is this tall, at least there's a maximal one that has those orders. [By maximal you mean?] Maximum under inclusion. But I haven't really thought about this, so if this looks very vague, it's normal — it's kind of the idea. One could also look at algebraic structures — this is also very vague — but, for example, there's this conjecture that an NIP unstable field maybe has a definable valuation. If you take such a field, it's going to have a generic type. If the generic type is generically stable, then I don't know what to say; but if the generic type is not generically stable, then some kind of upgraded version of this theorem is going to give you an order such that the generic type goes to minus infinity along that order — which is what you expect: the generic type goes to minus infinity along the valuation. And then, using the fact that the type is generic, you could hope to somehow create a valuation by looking at stabilizers of convex subsets of the order, or something like this. But — [K is a fixed NIP field?] K is an NIP field, and I'm assuming this generic type is not generically stable. [It has a generic type?] It has an f-generic type. [For which group?] Well, there are two groups — additive and multiplicative; for example, you can have one type that works for both. But okay, this is speculative; I haven't really had time. It's just to give an idea of why this is likely to be useful for giving us strong structure theorems, hopefully — but there are some big things that need to be figured out first. Anyway. So now what I want to do is give you some idea of how this is proved. I usually don't like giving proofs, but the reason I'm doing this is because the proof is probably much easier to tell than to read — it's the kind of thing that is not so hard to sketch, but can be very hard to read, so I think it's worth actually doing it. I'm not going to do the general case; I'm going to do just a special case that will give you the idea of what's going on, and you'll see there's really just one idea — it's not so hard.
But we have to play a little bit with indiscernible sequences, and so I need to recall certain things about indiscernible sequences in NIP. From now on, assume that my theory is NIP. One of the first things one usually learns is that if you have an indiscernible sequence I — indiscernible over the empty set — and you have a formula φ(x, b) with some parameter b, the sequence is not necessarily indiscernible over the parameter b; but when I evaluate the formula along the sequence, it cuts out a finite union of convex sets. So if x varies along the sequence, there are finitely many convex sets on which, say, φ(x, b) is true; then it's false; then it's true; and then maybe there's just one point where it becomes false and it's true outside — something like this. The point is that there are only finitely many changes of truth value along the sequence. This is usually one of the first things one learns about NIP, and it is equivalent to NIP. It is strengthened by the so-called shrinking of indiscernibles, which explains what happens when you evaluate not a formula with one variable, but a formula with several variables, each variable having the same size as a tuple of the sequence. The conclusion is essentially the same: there is an equivalence relation with finitely many convex classes such that the truth value of the formula on a tuple from the sequence, taken in increasing order, depends only on which equivalence class each element lies in. Let's write it down. So I is indiscernible and you have any formula φ(x_1,…,x_n; b); then there is a convex finite equivalence relation E on I — by convex finite I mean that the classes are convex and there are finitely many of them (maybe some classes have just one element, or finitely many) — such that the truth value, when I evaluate the formula on a tuple, only depends on which classes the elements lie in: if a_1,…,a_n are in I in increasing order, a′_1,…,a′_n are in I in increasing order (yeah, I shouldn't have called them b), and a_i is equivalent to a′_i for all i, then the formula is true on a_1,…,a_n if and only if it's true on a′_1,…,a′_n. [Are you allowed to take two in the same class?] Yes, I'm allowed to take two in the same class. And each of those classes is going to be relatively definable by instances of φ. Well — there's a unique coarsest one that has this property. So now, if you have I an indiscernible sequence and you take, say, a finite tuple b, you can do this for every formula: every formula is going to give you a finite convex equivalence relation. Now, you should think of I — usually indiscernible sequences, especially in stability theory, are indexed by ω; that's not what I want to think about. I want to think about indiscernible sequences that are indexed by a dense linear order, which is usually very big — but definitely dense.
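In symbols, shrinking of indiscernibles as stated here, for a fixed formula φ(x₁,…,xₙ;b) and a dense indiscernible sequence I:
\[
\exists\,E \text{ on } I,\ \text{convex classes, finitely many, such that:}\quad
\bar a,\ \bar a' \text{ increasing in } I,\ a_i\,E\,a_i'\ \text{for all } i
\ \Longrightarrow\ \bigl(\varphi(\bar a;b)\leftrightarrow\varphi(\bar a';b)\bigr).
\]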
So you should think of all indiscernible sequences as being indexed by a dense order, so that I don't have problems with consecutive elements, things like that. Then, if I have a finite tuple b, I can do this for every formula phi: every formula gives me a finite convex equivalence relation, and I take the intersection of all these equivalence relations. What I get is a convex equivalence relation E_b on I (it depends on b; I and b are fixed, phi varies) with at most 2^{|T|} classes, such that... well, there might be some finite classes, but the infinite classes are mutually indiscernible over b, and actually also over the finite classes. What does mutually indiscernible mean? It means that if you take any number of them, each one is indiscernible over b together with all the other ones. Maybe you have to think a moment about why the construction gives you that. This is really how I prefer to think about it: you have an indiscernible sequence; on its own you don't see anything, it is the same everywhere; if you add a finite parameter b, the sequence is cut into a bounded number of pieces, and those pieces are again mutually indiscernible sequences. Now, a very nice situation is when you only have finitely many such classes, and in particular when you only have two. There is something that measures this, called the dp-rank, and I am only going to define it for finite values. (Question: in the omega-categorical, finitely homogeneous case, is it finite? No, not in general, because there can still be infinitely many formulas with more and more variables.) So the dp-rank is going to bound the number of classes you can have. Say that the dp-rank of p is at most n (let's only do it for an integer n) if for any indiscernible I and any b realizing p, when you do this you only need to cut into at most n classes, well, I guess n plus 1, so as to get this. Actually that is not completely true as stated; I'll get back to it and fix it. I want to look a bit more closely at what happens here, because some of the whole idea of the proof rests on understanding the minimal situation. You have an indiscernible sequence and a tuple b; the tuple b breaks the sequence into many pieces. Let's look at the simplest things it could do; there are actually two simplest things. One is that I breaks into two infinite pieces over b: over b, I breaks at some point into two infinite pieces, and imagine there is no end point on either side, and there is some formula with parameters in b that is true on one side and false on the other, and that's it, the two pieces are mutually indiscernible over b. So that is the first simplest situation. But there is another one that could happen.
It could be that there is just one point that behaves badly, and if you take off that point, the sequence that remains is indiscernible over b. The fact that there are these two minimal situations is the main observation behind the definition of distality, actually. You should think of the second situation as stable-like, because what happens in a stable theory? The sequence is totally indiscernible, and you have the stronger property that given a finite set you can just remove a bounded subset of the sequence, and whatever is left is indiscernible. So when it happens that you can take off a point and I is then indiscernible over b, let's think of it as a stable-like situation. And when the sequence is cut in two, with some formula true below and false above, that is an order-like situation. A distal theory (not important for this talk) is a theory where the stable-like situation never happens; you only have the order-like one. So why was I in trouble with the definition over there? Because the number of classes is not the correct thing to count. You have your sequence and a parameter b, and say it cuts it into finitely many classes; but there can be cuts that are order-like, where some formula changes truth value, and there can be isolated points that you need to take off. What you want to count is the number of those isolated points and cuts; that is the relevant number. You see, an isolated point gives you three classes, but I want to think of it as just one thing happening in the sequence. If you count that, then the dp-rank is good: dp-rank at most n means that for any indiscernible I and any b there are at most n cuts in I over b, where a cut is either of those two situations, either a formula changing truth value or an element that has to be taken away. (This doesn't happen at consecutive points, because my sequences are dense. There is an off-by-one to be careful about here which I won't worry about.) So, it is known, proved by Itay and Alex, that the dp-rank is sub-additive: the dp-rank of the type of a tuple ab is bounded by the sum of the dp-rank of a over b and the dp-rank of b. Something you would expect; it's not so obvious to prove, but it's true. And of course one natural thing to do with the picture I drew is to split the dp-rank into two parts, counting separately how many cuts of each kind you can have. This does work: it gives you two notions of rank, each sub-additive separately. I'm not going to use them in this talk, because what I'm going to do right now is restrict to the case where the dp-rank is 1. (About the cuts: I was willingly being vague. They are cuts given by a formula with one variable, with parameters in b plus maybe other parameters from the sequence itself; that accounts for the formulas with more than one variable. The correct way would be to define it like that. But then, since you are allowed to use the sequence, you might worry about getting arbitrarily many funny things going on.)
Yes, but you are only allowed to cut, so it is still going to be bounded, as long as you don't cut at one of the parameters themselves. But let me tell you what the dp-rank 1 case is. From now on we restrict to the case where the dp-rank of the structure is 1, which means the dp-rank of any 1-type is 1. What does that mean? It means that if I take an indiscernible sequence I and a singleton a, then either the sequence stays indiscernible over a, or there is a cut and the two infinite pieces on the left and on the right are mutually indiscernible over a, or there is one point I need to take off, and when I take it off the remaining sequence is indiscernible over a. That is a precise statement, and it is equivalent to having dp-rank 1. From now on I am going to explain the proof in the dp-rank 1 case, which still shows the idea but cuts down the number of technical things. Then one has to understand how to reduce to that case: if you could always find a type of dp-rank 1 you would be fine, but that's not always possible, so you have to do more complicated things, which I won't explain. So, the first thing is that we want to be in the order-like situation. If the stable-like situation always happens, for every indiscernible sequence and every singleton, then the theory is stable; so if the theory is unstable, the order-like situation happens at least once. (This is because stability can be tested on formulas with a single variable, by evaluating along indiscernible sequences.) So if T is unstable, there are an indiscernible sequence I, a singleton, let's call it b, and a formula phi(x, b), and here I am going to assume there is no other parameter in phi; to arrange that I might need to add finitely many constants to the base, but everything I am doing is invariant under adding finitely many constants, so I can certainly get down to this situation. So: there are I, b and a formula phi(x, b) such that phi holds on an infinite initial segment of the sequence and the negation of phi holds on an infinite final segment. Just to emphasize: b is a singleton, but the elements of the sequence might be tuples; I will forget about the bar from now on. Is this OK? You can forget about almost everything before and just keep this: we have an indiscernible sequence and one formula that is true and then false along it, and we have dp-minimality, which will play an important role later on. We want to get an order from that. There are two natural things to do. One natural thing leads to Shelah's theorem that there is a definable partial order. What is Shelah's argument in this case? Look at the type: this is an indiscernible sequence, and I can look at the type, call it q, of an element that fits in the cut, in the sense that if you add it there you still get an indiscernible sequence. So q is the type over the sequence of an extra element fitting in that cut. And you can define naturally a partial order.
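Before going on, here are the dp-rank facts that get used later, written out in symbols (my notation):
\[
E_b := \bigcap_{\varphi} E_{\varphi,b} \ \text{ is a convex equivalence relation on } I \text{ with at most } 2^{|T|} \text{ classes, whose infinite classes are mutually indiscernible over } b;
\]
\[
\operatorname{dp\text{-}rk}(p) \le n \iff \text{for every indiscernible } I \text{ and every } b \models p \text{ there are at most } n \text{ cuts in } I \text{ over } b,
\]
a "cut" being either a place where some formula over b changes truth value along I, or a single element that must be removed;
\[
\operatorname{dp\text{-}rk}(ab) \;\le\; \operatorname{dp\text{-}rk}(a/b) + \operatorname{dp\text{-}rk}(b) \qquad (\text{sub-additivity});
\]
and dp-rank 1 means: for every indiscernible I and singleton a, either I is indiscernible over a, or I = I_1 + I_2 with I_1, I_2 infinite and mutually indiscernible over a, or I with a single point removed is indiscernible over a.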
Well, you get a formula with the strict order property. Actually I didn't need to define q at this stage; let me instead define p to be the type of b over the sequence, which will be the more important type for me. Take a_1 and a_2 which both fit in the cut, in the sense that when you add them both you still get an indiscernible sequence. Then it is not possible to have a b' realizing p such that the negation of phi(a_1, b') holds and phi(a_2, b') holds. Why is this not possible? Look at what the formula phi(x, b') does along the sequence: it would be true on the initial segment, then false at a_1, then true at a_2, and then false again on the final segment. It changes truth value too many times, and that would contradict dp-minimality. (Yes, I am assuming not only that a_1 and a_2 both satisfy q but that together they fit in the sequence.) This inconsistency is exactly saying that, restricted to realizations of p, phi(a_2, y) implies phi(a_1, y): there is an inclusion between the two sets, and you probably see which way it goes. The strictness comes from the existence of b: by indiscernibility I can find a point that changes truth value between a_1 and a_2, so the inclusion is strict. (The y's are the same, yes; I think it's correct.) So this is the proof of Shelah's theorem in this easier dp-minimal case. Now, to get the linear order, we are going to look at things a little differently. Shelah's theorem gives a partial order on realizations of q, on elements that fit in the sequence. I am going to construct an order on realizations of p, the type of the parameter b. (Almost done? There's a little bit more; just a little. I know, I apologize for that.) So what is the natural way to try to define an order on realizations of p? If I have b and b', then phi(x, b) is true on the initial segment and false on the final segment, and likewise for phi(x, b'), because they satisfy the same type over the sequence. But now I can add another piece of the sequence in the middle, in the cut. Then look at what phi(x, b) does there: by the same dp-minimality argument as over there, there is exactly one cut in this inserted piece where it changes from true to false, a cut that is sort of indexed by b. There is also one cut where phi(x, b') changes truth value. And if the cut of b comes before the cut of b', I want to think of b as being smaller than b'. Now, this doesn't quite work as stated, but let's see what we can get. First, there is a natural equivalence relation, which I won't actually need but which will turn out to be the right one: say b and b' are equivalent if, no matter what I put in the middle, the two cuts are the same. (You are assuming b and b' have the same type over I? Yes; p is the type of b over I.)
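To record Shelah's order from a moment ago in symbols (the direction of the inclusion is my reconstruction from the dp-minimality argument just given): for a_1 < a_2 both fitting in the cut,
\[
\{\, y \models p \;:\; \models \varphi(a_2, y)\,\} \;\subsetneq\; \{\, y \models p \;:\; \models \varphi(a_1, y)\,\},
\]
the inclusion because no b' \models p satisfies \neg\varphi(a_1,b') \wedge \varphi(a_2,b') (too many alternations along the sequence), and strict because by indiscernibility some conjugate b' of b has its cut between a_1 and a_2, so that \models \varphi(a_1,b') \wedge \neg\varphi(a_2,b').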
So p knows that the formula is true on the left and false on the right. There is a natural equivalence relation: writing cut(b) for the cut defined by b and cut(b') for the cut defined by b', say b E b' if cut(b) equals cut(b') for every piece J that I insert in the middle; the cuts are taken in J. That is an equivalence relation. Now, what I want to be my order relation is R(b, b'), and here you have to be careful: R(b, b') holds if you can never have the cut of b' strictly below the cut of b, that is, no matter how you insert J, you never get cut(b') below cut(b). Maybe the cuts of b and b' coincide for every J; that is going to happen for some pairs, and it is consistent in particular with taking J empty. And it could be that b and b' can be separated in one direction, say, but you can never have the opposite. Wait, is that what I want to write? Yes: this is the non-strict, the large relation. Note that it is wedge-definable, type-definable, because it says that you do not have something; the complement is "there is something", so it is a union of definable conditions. What is obvious: the fact that the quotient is infinite is obvious, because by indiscernibility, whenever I insert a piece J there is some b whose cut lands here, some b whose cut lands there, and so on, and those all give different classes. The other thing, a little less obvious but just one line if you write it, is that R is transitive. (Yes, the formula is always the same phi; phi is fixed once and for all, we only vary the parameters.) So I claim transitivity is very easy to check from the definition; maybe I don't have time to write the line. What is not clear is linearity. What does linearity mean? It means: are any two points comparable? And what does it mean for two points not to be comparable? It would mean that you have b and b', some J such that the cut of b lands below the cut of b', and also some other J' for which it is the opposite, the cut of b' is below the cut of b. If you have that, you can't compare them. Now, and this is sort of the crux of the proof, the observation is that if this happens too much, we are going to contradict NIP. What does "too much" mean? At this stage it could very well be that the relation is not yet linear. If it is linear, we are done: we have all the properties. If it is not linear, it means there are two realizations of p that can be switched just by changing the piece in the middle. Then what we do is pick one of those realizations, add it to the base, increase the sequence, and iterate, working there; so you don't need to worry about this. Eventually, if this process keeps going, you will get a sequence of points which are all realizations of p.
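For reference, the relations just defined, in symbols (my notation; writing \operatorname{cut}_J(b) for the cut of the inserted piece J at which \varphi(x,b) switches from true to false):
\[
b \mathrel{E} b' \iff \operatorname{cut}_J(b) = \operatorname{cut}_J(b') \text{ for every } J, \qquad
b \mathrel{R} b' \iff \text{for no } J \text{ is } \operatorname{cut}_J(b') < \operatorname{cut}_J(b).
\]
R is reflexive and transitive, and it is an intersection of definable conditions (one for each possible inserted piece), hence type-definable, wedge-definable; the remaining issue is linearity of the induced quasi-order.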
You will get an indiscernible sequence (b_i) of realizations of p with the property that phi(x, b_i) is true up to the cut indexed by b_i and then false, and with the extra property that any two consecutive ones can be switched: you can change what sits in the middle, insert another sequence, so as to exchange their two cuts. What we want to show now is that we can rearrange this middle piece so as to realize any given order of the cuts. Why does that contradict NIP? Because then the formula phi(x, y) would have the independence property: a point taken in the middle is related by phi to everything whose cut lies to its left and to nothing whose cut lies to its right, and "everything to the left" can be made an arbitrary subset of the b_i's. So if we can do that, we are done. Now we have to understand why we can do it; this is the main technical point, and the reason is the dp-rank. The sub-additivity of dp-rank tells us (I think I only need it for 2) that the dp-rank of any pair of these points is at most 2, and in fact exactly 2, because any b_i and b_j do induce two cuts on the sequence. Now I am going to do the permutation inductively, by switching consecutive points, so let me explain one switch. Say I want to exchange b_3 and b_4. By construction I know that if I forget about all the other points, I can erase the middle piece and put in another sequence that exchanges them. But a priori I have no idea what happens over the other points: maybe over them the new sequence has a different type, maybe something has changed. The dp-rank tells me this cannot actually happen. Take b_3 and, say, b_5, and look at the new middle sequence over the pair b_3, b_5. I already know two places where the pair b_3, b_5 cuts the sequence, here and here, so there cannot be any other. What this means is that the whole left piece and the whole right piece have to be mutually indiscernible over b_3, b_5; in other words, the piece I inserted to switch b_3 and b_4 is not seen by b_5 and the other points, they don't notice that anything has changed. Therefore I can just do the same thing at the next pair, and iterate, and realize any permutation. And that contradicts NIP. And that's it. Thank you.
|
A longstanding open question asks whether an unstable NIP theory interprets an infinite linear order. I will present a construction giving a type-definable linear (quasi-)order, thus partially answering this question.
|
10.5446/59327 (DOI)
|
Byunghan Kim, Alexei Kolesnikov, and Junguk Lee. I will start with some motivations. This is about a generalization, or a variant, of the notion of the Lascar Galois group, for a type: we only look at what is going on inside the set of realizations of a type. There are actually a couple of natural ways to define such localized Galois groups. We will focus mostly on one of them, which we will denote Gal^1_L(p); I will also briefly discuss the other possible definitions and the advantages and disadvantages of each. As for motivations, there are two of them. First, the description of the so-called first homology group H_1(p), which is a way to measure how far p is from having the free amalgamation property; free amalgamation is essentially the same as the independence theorem, which most of you know. So there is this group H_1(p) which measures how far p is from having free amalgamation; I am not going to speak about it, so don't worry if you don't really know what it is. The description is for p a strong type, a complete type over an algebraically closed set, and here we also assume it is the type of an algebraically closed tuple: p = tp(a) with a algebraically closed. Under these assumptions, the description says that H_1(p) is the quotient of our localized Galois group of p by the subgroup generated by its commutator subgroup together with all stabilizers, that is, by the classes of all automorphisms which have a fixed point among the realizations. (Stabilizers of x, for x satisfying p? Yes. Anyway, this is just motivation; this is the last time I mention H_1 unless somebody has more questions, since this is really a different subject.) Secondly, in a paper by Krupinski, Newelski, and Simon, they consider this group, and also the analogous group for KP types instead of Lascar types: they consider an epimorphism from the Ellis group of the flow given by the monster model acting on the space of global types in variables of some bounded length alpha, onto Gal^1_L(p), or its KP version, in an attempt to understand this flow. This is one of the main objects of interest in that paper, and it may also be seen as a variant of epimorphisms considered earlier for different flows; again, this is just for motivation. So now let me start the more systematic part about these groups. First, the original Lascar Galois group has already occurred in a couple of talks, but let me just remind you of the definition. For any theory we consider Gal_L(T), the quotient of the group of automorphisms of a monster model by the strong automorphisms, which by definition form the group generated by automorphisms fixing some small elementary submodel pointwise: f in Aut(C) such that there is a model M with f restricted to M the identity. We want to define something similar, but we don't want to look at automorphisms of the whole monster model, just at their restrictions to the set of realizations of a type, and then we want to quotient out by something which we think of as the strong automorphisms of this set.
And here you may actually have a couple of ideas for how to define it, so we will use several notations for the different variants. Let me just remind you here (this is not visible immediately from the definition) that Gal_L(T) does not depend on the monster model C; and one of the properties we definitely want from our localized groups is that they do not depend on the monster model either, and with some of the candidate definitions this will not be clear. Consider for the moment just a partial type p, say over the empty set (it doesn't change anything, but for simplicity); later in the talk it will almost always be a complete strong type. So, the general question: what should Gal_L(p) be? Some kind of answer is already given by the motivations, because the notion Gal^1_L(p) fits that context best. (Suggestion from the audience: just quotient by the automorphisms preserving the Lascar strong type of each single realization? Yes, that is one of the options. One of the problems is that we don't know, or at least didn't know, whether that is independent of the choice of monster model. And yes, single realizations, not tuples; this should be clear in a couple of minutes.) So, with p over the empty set for the moment: Gal^1_L(p) is Aut(p(C)), which consists of the restrictions to this set of automorphisms of the monster model C, modulo the group of those automorphisms f of p(C) such that for every realization a of p, a is Lascar equivalent to f(a). So we quotient by the automorphisms preserving the Lascar type of single realizations of p. This is actually a special case of a more general definition where you use tuples of length lambda instead of 1, but this one seems the most natural. (What is the point of the 1? The 1 just means that you take a realization of p, not a tuple of realizations.) The second definition: we don't want only single realizations but arbitrarily long tuples, even a tuple enumerating the whole set. Gal^fix_L(p) is again Aut(p(C)) quotiented by those automorphisms f such that c is Lascar equivalent to f(c), where c is an enumeration of the set of realizations of p. And it turns out that it is actually enough to look at tuples of length omega. (Is that because the theory is countable? No, the theory is arbitrary; you don't need that. Isn't this only for G-compact theories? No, here T is arbitrary: for any theory it is enough to consider countable tuples.) Indeed, if you look at tuples of length lambda, by induction on lambda you can see that preservation of Lascar types of omega-tuples already implies it for lambda-tuples, so the group defined with the full enumeration is literally equal to the one defined with omega-tuples.
So this is the same as Aut(p(C)) modulo what I will call Autf^omega_L(p(C)), where in general Autf^lambda_L(p(C)) is the group of those automorphisms of p(C) which fix the Lascar type of tuples of length lambda: for each a in p(C)^lambda, a is Lascar equivalent to f(a). The interesting cases are mostly lambda equal to 1 and lambda equal to omega, because with omega we are preserving the Lascar type of all tuples. (So this stabilizes: for every lambda at least omega it is the same as for omega? That's right. So this is interesting only for countable lambda, and it is probably not very natural to consider it for lambda equal to 35.) And the third one. Gal^res_L(p) is the quotient of Aut(p(C)) by those automorphisms that are restrictions of strong automorphisms: f such that there is f-tilde in Autf_L(C), a strong automorphism of the whole monster model, with f-tilde restricted to p(C) equal to f. This may actually seem at first glance the most natural one, but there are some problems with it; maybe these problems can be solved, but we were not able to. Here the group we quotient by is the largest, so Gal^res_L(p) projects onto Gal^omega, that is Gal^fix_L(p), and that projects onto Gal^1_L(p). We know that Gal^fix and Gal^1 are in general not equal, but we don't know whether the first projection can fail to be an isomorphism in general. (Does every automorphism of p(C) come from an automorphism of C? Yes, by definition Aut(p(C)) consists of restrictions of global automorphisms.) So, a first obvious remark: p is a Lascar strong type if and only if Gal^1_L(p) is trivial, which is a desirable feature for us; we don't really want to study nontrivial Galois groups of Lascar strong types, which could happen with the second or third definition. Second: Proposition. Gal^lambda_L(p) does not depend on the choice of the monster model. (Here a is a tuple and p(C)^lambda is a Cartesian power; and can lambda be larger than the saturation of the monster? In the definition you can take anything you want, but lambda is small here, and if you take a huge lambda it is still the same as Gal^omega, so the point is omega and finite lambda.) No problem, questions are welcome. The proof is easy, but I will present it to let you see where the problem is with the definition Gal^res_L, the one coming from restrictions, which may also seem natural, but for which I don't know how to show independence from the monster model. Of course any two monster models can be embedded in a common bigger one.
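The three candidate definitions in one place (my formatting of what was just defined, with Aut(p(\mathfrak C)) the group of restrictions to p(\mathfrak C) of automorphisms of \mathfrak C):
\[
\operatorname{Gal}^{1}_{L}(p) \;=\; \operatorname{Aut}(p(\mathfrak C)) \,/\, \{\, f \;:\; a \equiv_{L} f(a) \ \text{for all } a \models p \,\},
\]
\[
\operatorname{Gal}^{\mathrm{fix}}_{L}(p) \;=\; \operatorname{Aut}(p(\mathfrak C)) \,/\, \operatorname{Autf}^{\omega}_{L}(p(\mathfrak C)), \qquad
\operatorname{Autf}^{\lambda}_{L}(p(\mathfrak C)) = \{\, f \;:\; \bar a \equiv_{L} f(\bar a) \ \text{for all } \bar a \in p(\mathfrak C)^{\lambda} \,\},
\]
\[
\operatorname{Gal}^{\mathrm{res}}_{L}(p) \;=\; \operatorname{Aut}(p(\mathfrak C)) \,/\, \{\, \tilde f \restriction p(\mathfrak C) \;:\; \tilde f \in \operatorname{Autf}_{L}(\mathfrak C) \,\},
\]
with natural epimorphisms \operatorname{Gal}^{\mathrm{res}}_{L}(p) \twoheadrightarrow \operatorname{Gal}^{\mathrm{fix}}_{L}(p) \twoheadrightarrow \operatorname{Gal}^{1}_{L}(p).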
So it is enough to compare one monster model with another, much bigger one. Fix a monster model C and a bigger monster model C', which is |C|^+-saturated and strongly homogeneous, and we just write down the isomorphism. Picture: C' containing C, and inside, our set of realizations of p. We start from something which comes as the restriction of an automorphism of the smaller monster model. So fix F, an automorphism of C; we want to define the value of the map on the restriction of F. So phi goes from Gal^lambda_L(p) computed with respect to C to Gal^lambda_L(p) computed with respect to C', and we define it in the following way: just extend F in any way to an automorphism of C'. So there is an automorphism F-tilde of C' such that F-tilde restricted to C equals F. (F is an automorphism of C, but we want a value on p(C'), so in the end we need to check that the definition is correct when we use this extension.) Put phi of the class of F restricted to p(C) to be the class of F-tilde restricted to p(C'). Now, why is phi well defined? Take two automorphisms of C with the same class: suppose the class of F_1 restricted to p(C) equals the class of F_2 restricted to p(C). In other words, F_1 F_2^{-1} restricted to p(C) lies in Autf^lambda_L(p(C)), so it preserves the Lascar type of tuples of length lambda. Now take extensions G_1 of F_1 and G_2 of F_2, automorphisms of the big monster model C'. We want to know that G_1 G_2^{-1}, restricted to p(C'), preserves the Lascar type of lambda-tuples; that is, we want G_1 G_2^{-1} restricted to p(C') to be in Autf^lambda_L(p(C')).
So take a tuple a in p(C')^lambda. Now lambda is small; formally it need not be, but without loss of generality we can assume lambda is countable, because for big lambda it is the same as omega. Since lambda is small, there is a' in the small monster model C which is Lascar equivalent to a. And now everything is easy: G_1 G_2^{-1}(a) is Lascar equivalent to G_1 G_2^{-1}(a'), because automorphisms preserve Lascar equivalence; and since a' lies in the small model, G_1 G_2^{-1}(a') is the same as F_1 F_2^{-1}(a'); but F_1 F_2^{-1} was a strong automorphism in the lambda sense by assumption, so this is Lascar equivalent to a', which is Lascar equivalent to a. So this shows that phi is well defined. By standard arguments phi is onto, and for very obvious reasons it is one-to-one: if we get something trivial after extending, of course we were trivial before extending. For surjectivity, the picture is the very standard one: we have C inside C', some automorphism F-tilde of C'; we fix a small submodel m of C and a small base set, m is sent somewhere by F-tilde, but we can also find a copy m' of that image inside C over the base; sending one to the other, the class of phi of the restriction of the resulting automorphism of C is the same as the class of F-tilde restricted to p(C'). This step is exactly the same as for the usual Lascar Galois group; but the well-definedness step requires some work, and it is not so clear how to do it when we work with the definition Gal^res: there we don't get to work with a small tuple, we cannot find an a' as here. In fact you can easily see that if you just don't care and try to define this morphism in the same way for Gal^res, it does not work in general: there may be some other type which has nothing to do with p, and then you don't have this canonical extension; there might be two automorphisms which are trivial here but one of them does something on another type. So it is not so clear. In the rest of this talk I will talk about the first Galois group, Gal^1_L, and actually the rest will be dedicated to the question of how far Gal^1_L(tp(a)) can be from Gal^1_L(tp(acl(a))). Why do we care about this? Our types are over the empty set, and we will assume dcl of the empty set equals acl of the empty set, which we may arrange by naming parameters, so all our types will be strong types. And then, in the description that was my first motivation, the first homology group was described only for algebraically closed tuples; so to understand the homology group of an arbitrary tuple, in addition to knowing that H_1 of the algebraic closure is described by some quotient of the Galois group, we also need to know how far Gal^1_L(tp(a)) is from Gal^1_L(tp(acl(a))); and the situation is not as good as we would like. There will be some special cases, some positive observations, but there will also be examples which are negative.
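The chain of equivalences in the well-definedness argument, written out (with \bar a \in p(\mathfrak C')^{\lambda}, \bar a' \in p(\mathfrak C)^{\lambda} and \bar a \equiv_L \bar a'):
\[
G_1G_2^{-1}(\bar a) \;\equiv_L\; G_1G_2^{-1}(\bar a') \;=\; F_1F_2^{-1}(\bar a') \;\equiv_L\; \bar a' \;\equiv_L\; \bar a,
\]
using, in order: automorphisms preserve \equiv_L; \bar a' lies in \mathfrak C, where G_i agrees with F_i; and F_1F_2^{-1}\restriction p(\mathfrak C) \in \operatorname{Autf}^{\lambda}_{L}(p(\mathfrak C)).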
Okay, so to start with, we have that in a G-compact theory, if tp(a) is a Lascar strong type, then tp(acl(a)) is also a Lascar strong type. (Can you remind me what G-compactness is? It means that KP-equivalence is the same as Lascar equivalence, that KP types are the same as Lascar types: the KP type corresponds to the finest bounded type-definable equivalence relation on the monster model, and the Lascar type to the finest bounded invariant one. Here we just work over the empty set.) This fails if T is not G-compact; there is an example, not a super complicated one, but I think today I will not have time to show it. In other words, if Gal^1_L(p) is trivial, where p = tp(a), then Gal^1_L(tp(acl(a))) is trivial; so in that case they are simply the same, and you may ask whether this is true in general. Question: is Gal^1_L(tp(a)) isomorphic to Gal^1_L(tp(acl(a))) for G-compact T? (Without G-compactness this of course fails badly already under the assumption that the first group is trivial, so it doesn't make sense to ask the question without G-compactness.) In the G-compact case it is also not true in general, so the situation is quite complicated. Example: consider the structure consisting of omega many circles S_k with their circular orders, together with rotation maps g_{n,k}, where g_{n,k} is the rotation of S_k by 1/n of a full turn, for all n and k (k just numbers the circles: S_1, S_2, and so on), and we add double covers between them: maps pi_k, where pi_k is the natural double cover of one circle by the next, in terms of complex numbers just the squaring map. These covers are not quite compatible with the rotations, and that is why the algebraic closure grows: we start with a point a here, then its two preimages c_0, c_1 are in acl(a), then their four preimages as well, and so on. The important thing is that in each component there is something from acl(a), and because of this, Gal^1_L(tp(acl(a))) is actually the Galois group of the whole structure, which is an inverse limit of double covers of circles: the inverse limit over k of R modulo 2^k Z with the natural projections. (What are those pi? For each k we take just the single covering map pi_k, one cover at each level; and for each circle we take all the rotations g_{n,k}.) And this inverse limit is not isomorphic to the circle, which is Gal^1_L(tp(a)). I will not go into the details here because I want to state at least one positive result. (Is this theory G-compact? Yes, it is G-compact.)
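If I read the example correctly, the upshot is the following (the second group being a dyadic solenoid):
\[
\operatorname{Gal}^{1}_{L}(\operatorname{tp}(a)) \;\cong\; \mathbb R/\mathbb Z, \qquad
\operatorname{Gal}^{1}_{L}(\operatorname{tp}(\operatorname{acl}(a))) \;\cong\; \varprojlim_{k}\, \mathbb R/2^{k}\mathbb Z,
\]
and these two compact connected groups are not isomorphic.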
Being in the same Lascar strong type means being infinitesimally close on every component, on every circle. (In each circle you have all the rotations? Yes, for each circle we have all the rotations; that is exactly why being Lascar equivalent means being infinitesimally close. So this is not quite the usual example of a non-G-compact theory.) In the last two minutes, some positive things. We restrict to finite tuples. If T is G-compact, then Gal^1_L(tp(acl(a))) is the inverse limit, over finite subsets c of acl(a), of Gal^1_L(tp(a,c)). (Finite c? Yes, and we take the type of a together with c.) So now a question could be: can these be non-isomorphic? Theorem: Gal^1_L(tp(a,c)) is the quotient of Gal^1_L(tp(a)) by a finite subgroup F. And it may actually happen that it is still not isomorphic to it: even when Gal^1_L(tp(a)) is connected, it may fail to be isomorphic to the finite quotient Gal^1_L(tp(a))/F, which is Gal^1_L(tp(a,c)). We have an example where they are not isomorphic; I don't have time for the example. The positive thing is that if this group is abelian, then they are isomorphic. So it is mostly negative results; only in a very special case do we have the isomorphism, and for most questions the groups are not exactly the same. Okay, thank you.
|
The notion of the localized Lascar-Galois group GalL(p) of a type p appeared recently in the context of model-theoretic homology groups, and was also used by Krupinski, Newelski, and Simon in the context of topological dynamics. After a brief introduction to the context, we will discuss some basic properties of localized Lascar-Galois groups. Then we will focus on the question of how far GalL(tp(acl(a))) can be from GalL(tp(a)). This is joint work with B. Kim, A. Kolesnikov and J. Lee.
|
10.5446/59329 (DOI)
|
for Alex's previous talk, and also a preparatory talk for the next speaker. Believe it or not, the slides contain all the definitions, so if you want to see a definition, just stop me and I'll spend more time on it, even the basic ones. As you know, Shelah defined the notion of SOP_n a long time ago, in his 500th paper, and for a long time nobody really knew what was going on with SOP_n theories. (He also defined the notion of SOP_1 there. Very good; sorry. SOP_1, and I'm the speaker.) Then Zoe found a very nice and interesting example, the unbounded omega-free PAC fields: even though the theory is not simple, it has a very nice notion of independence, symmetric and satisfying an independence theorem, and things like that. So people suspected that something must be going on. Actually, even before Zoe, Granger, a student of Mike Prest who left right after his PhD thesis, had already studied infinite-dimensional vector spaces over an algebraically closed field with a bilinear form; he did not say that the theory is NSOP_1, but it seems to be NSOP_1 and it also has a nice independence notion satisfying the independence theorem. Then Chernikov and Ramsey did a lot of nice work; in particular they gave a criterion, a re-description of the SOP_1 property, and as soon as you see that criterion, you see that SOP_1 is exactly where the so-called Kim's lemma fails. So once you are an expert in simple theories, as soon as you see it, you feel that something should be going on in NSOP_1 theories. The breakthrough was made by Kaplan and Ramsey: they really proved symmetry, 3-amalgamation (the so-called independence theorem), and the extension axiom, all in terms of Kim-dividing, over models. The question remained whether everything works over sets, and that is what I am going to talk about. So, we work in a saturated monster model, and you all know dividing: a formula divides over a set if there is an indiscernible sequence of conjugates of the parameter such that the collection of the corresponding formulas is inconsistent; and then forking: a formula forks if it implies a finite disjunction of dividing formulas. These are all Shelah's notions, with the notation "does not fork" for the independence relation. Why did Shelah use forking instead of dividing? Because extension is not clear for dividing, while for forking, by compactness, you have the extension property in any theory. Then symmetry: a theory is simple if and only if forking independence is symmetric over every set, if and only if it is transitive over every set, if and only if it has local character; these are all equivalent, and each is one of the equivalent characterizations of simplicity. And you know: stable means no order property, unstable means the order property, and stable implies simple. Now, a Morley sequence in a type over A is just a sequence which is indiscernible over A and independent, nonforking independent, over A. And the fact is that in a simple theory, by the definition via local character, any complete type has a Morley sequence; in a general theory, a complete type need not have one. Then there is what is called Kim's lemma: in a simple theory, dividing is equivalent to inconsistency along some Morley sequence, and equivalent to inconsistency along any Morley sequence. Because Morley sequences always exist in simple theories, the "for any" direction is not a vacuous statement.
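For reference, the definitions and the statement of Kim's lemma just recalled, written out (standard formulations; my notation):
\[
\varphi(x,b) \text{ divides over } A \iff \text{there is an } A\text{-indiscernible } (b_i)_{i<\omega},\ b_0 = b,\ \text{with } \{\varphi(x,b_i) : i<\omega\} \text{ inconsistent};
\]
\[
\varphi(x,b) \text{ forks over } A \iff \varphi(x,b) \vdash \bigvee_{j<n} \psi_j(x,c_j) \ \text{with each } \psi_j(x,c_j) \text{ dividing over } A;
\]
a Morley sequence in p \in S(A) is an A-indiscernible sequence (b_i) of realizations of p with \operatorname{tp}(b_i / A b_{<i}) non-forking over A; and Kim's lemma for simple T:
\[
\varphi(x,b) \text{ divides over } A \iff \{\varphi(x,b_i)\} \text{ is inconsistent for some Morley sequence in } \operatorname{tp}(b/A) \iff \text{for every such Morley sequence}.
\]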
It really is a meaningful notion. Using this Kim's lemma one can show that the notions of forking and dividing coincide in simple theories. And then with Pillay we showed the so-called independence theorem, though people tend to change the name, because there are so many "independences": people now tend to call it 3-amalgamation or type amalgamation; and this characterizes simplicity. Okay, let me spend some time on the examples. We know the standard examples: the infinite set, algebraically closed fields, vector spaces, the random graph, and so on. What are parametrized equivalence relations? It is a two-sorted structure, with sorts P and Q and a ternary relation such that for each g in the parameter sort Q, the relation E(x, y; g) defines an equivalence relation on P; and every finite configuration is realized, which gives the random parametrized equivalence relation. PAC means that every absolutely irreducible variety has a rational point; omega-free means that the absolute Galois group is the free profinite group on omega many generators, and such fields are unbounded. So that is an interesting example. Now, when I was first a student, vector spaces in the model-theoretic setting had no field sort, just the scalars. It always seemed a little strange to me that a line or a plane is then not a definable set; it is just an algebraic closure. But set up that way you get nice properties, like strong minimality, and forking independence is exactly linear independence. And I always wondered why model theorists did not name the field sort. With a field sort and a bilinear form, Granger proved the following: you recover the dimension, and you get distinct theories T_1, T_2, ..., T_n, stable theories with an algebraically closed field sort, capturing each dimension. But now I realize why model theory did not present vector spaces in this manner: in finite dimension there are only finitely many linearly independent elements, while in any simple theory you have Morley sequences, infinitely many independent points. That means forking independence cannot capture linear independence in these theories. How about the infinite-dimensional case, the infinite-dimensional vector space with a bilinear form and with the sort for the algebraically closed field? There it is even worse, because the theory is not simple, so forking independence is not even symmetric. For example, look at this formula in T_infinity, the infinite-dimensional vector space with a named sort for the algebraically closed field: a formula phi(x; b_0, b_1) whose solution set is an affine hyperplane; hyperplanes, lines and planes are now definable sets. This formula definitely divides over the empty set: if you move the parameters b_0, b_1 along a line, each instance is the translate by b_0 of a fixed hyperplane in the ambient space, and the instances become inconsistent. But it does not Kim-divide over the empty set. The idea is that you move the parameters not along a line but along an independent Morley sequence. Each instance looks two-dimensional, but it is actually an infinite-dimensional hyperplane, so however you move the parameters along a Morley sequence, the hyperplanes still have an intersection.
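A hedged sketch of the intersection point (the exact formula on the slide is not recoverable from the transcript, so this only shows the shape of the argument, assuming the bilinear form \langle\cdot,\cdot\rangle is non-degenerate): in an infinite-dimensional space, finitely many affine conditions in linearly independent directions are always jointly solvable, since
\[
v_1,\dots,v_n \ \text{linearly independent} \;\Longrightarrow\; x \mapsto (\langle x, v_1\rangle, \dots, \langle x, v_n\rangle) \ \text{maps onto } k^{n}
\]
(otherwise some nonzero combination \sum_i \lambda_i v_i would pair to 0 with everything, contradicting non-degeneracy). So along a Morley sequence of independent parameters the corresponding affine hyperplanes keep a common point and the formula does not Kim-divide, while along a degenerate (say collinear) indiscernible sequence of parameters the instances can be made inconsistent, witnessing dividing.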
Still an intersection, so it does not Kim-divide. This is the typical nice example: a formula that divides but does not Kim-divide. And Kim-dividing captures linear independence here; that is the point. Good. You know the tree property, and a simple theory does not have the tree property. Now I can talk about SOP_1. SOP_1 is given by a binary tree of parameters, and the funny thing is that whenever you branch into 0 and 1, anything lying beyond the 0-side is inconsistent with the parameter on the 1-side, while every branch is consistent. NSOP_1 is just not having this property, but from the definition alone you don't get much. There is, however, a nice criterion given by Chernikov and Ramsey: T has SOP_1 if and only if there are a formula and pairs a_1 c_1, a_2 c_2, and so on, such that a_i and c_i have the same type over the earlier pairs, and the instances along one of the two sequences are consistent while along the other they are inconsistent. If you stretch this into a sequence of order type omega plus omega, you can easily see that Kim's lemma must fail: one path is consistent, the other path is inconsistent, and both extend the same initial omega-part. So as soon as you see this criterion, you immediately feel that something should be going on in NSOP_1 theories. And Kaplan and Ramsey made it work. They introduced the notion of a global Morley sequence. Given a model M, a global type, a type over the monster model, is M-invariant if it is invariant under automorphisms fixing M; over a model, any type has a global extension which is M-invariant, that is a general theorem. The point is that over a mere set, even in a simple theory, you need not have a global invariant extension. We say a sequence is a global Morley sequence if there is an M-invariant global type q such that a_i realizes the restriction of q to M together with a_{<i}; the a_i here are found in the monster model (I sometimes got confused about this when giving the talk elsewhere). Then a formula Kim-divides over the model M if there is such a global Morley sequence along which the collection of the corresponding formulas is inconsistent, and a type Kim-forks if it implies a finite disjunction of Kim-dividing formulas. One may be curious why they worked with global Morley sequences rather than the usual ones. My guess is this: they wanted to prove Kim's lemma over a model. With usual Morley sequences, even if Kim's lemma fails, it is not immediate that you get the criterion configuration; but with global Morley sequences, by invariance, as soon as Kim's lemma fails you immediately get it. So they get Kim's lemma for Kim-independence, over a fixed model, painlessly; and then extension is not so hard. ("For some" versus "for any": the "for any" works because all the sequences realize the same type over M, while the global invariant types may have distinct completions; still, invariance means that a failure of Kim's lemma produces the Chernikov-Ramsey configuration.) But the hard part is still symmetry.
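Before the symmetry discussion, for the record, the tree definition of SOP_1 gestured at above, as I recall the standard formulation (the board itself is not visible in the transcript): \varphi(x;y) has SOP_1 if there is a tree of parameters (a_\eta)_{\eta \in 2^{<\omega}} such that
\[
\text{(i) for every branch } \xi \in 2^{\omega},\ \{\varphi(x; a_{\xi\restriction n}) : n < \omega\} \text{ is consistent;}
\]
\[
\text{(ii) whenever } \eta^{\frown}0 \trianglelefteq \nu,\ \{\varphi(x; a_{\nu}),\, \varphi(x; a_{\eta^{\frown}1})\} \text{ is inconsistent.}
\]
T is NSOP_1 if no formula has SOP_1.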
For symmetry they had to develop quite innovative notions, like tree Morley sequences; that is the hardest part. And they managed to prove it: symmetry for Kim-independence, and type amalgamation over models. Later on, using all these techniques, they proved that Kim-dividing can be tested along any Morley sequence over the model, not just global ones; but that comes later. (Would you mind giving the definition of Morley sequence versus global Morley sequence? I promised that all the definitions are on the slides, and this notion is defined there. A global Morley sequence is in particular a Morley sequence, but a Morley sequence need not be a global Morley sequence; here, over a model, for this talk.) Now, in simple theories we had type amalgamation with respect to Lascar types; having the same Lascar type basically means being connected by indiscernible sequences: if there is an indiscernible sequence through two realizations you can move one to the other, and iterating that as far as you can gives Lascar equivalence. Okay, then, from now on, for the rest of my talk, T is NSOP_1 and has nonforking existence. Nonforking existence means that no formula over a set forks over that set. (A formula over a set certainly does not divide over that set, so if forking were the same as dividing this would be automatic; but it is possible a priori for a formula over a set to fork over it.) Equivalently, every complete type has a Morley sequence in it. We worked reasonably hard to show that NSOP_1 implies nonforking existence, but we still could not manage it; Jan observed that in a particular case it is true, but in general we haven't figured it out. Without it you can still say something about Kim-dividing, but the definition is vacuous. So assume that T is NSOP_1 and has nonforking existence, so that Kim-dividing over a set is not a vacuous notion: there is always at least one Morley sequence. Then "does not Kim-divide" makes sense over sets, and this notion is compatible with the case where the base is a model, because Kaplan and Ramsey showed that using global Morley sequences there gives the same thing. Any questions? Okay. So we managed to prove Kim's lemma under this hypothesis, NSOP_1 with nonforking existence: a formula Kim-divides if and only if, for any Morley sequence, the collection of the corresponding instances is inconsistent. (NSOP_1, right, yes; I assume for the rest that T is NSOP_1 with nonforking existence.) We spent a lot of time proving this. Extension then comes from this Kim's lemma, and similarly we can more or less mimic the same proofs, but type amalgamation still needs work. We then prove type amalgamation for Lascar types with respect to Kim-independence, so that T is G-compact. G-compactness was already defined by a previous speaker, so I did not put the precise definition on the slide.
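The statement just announced, written out (my reading of the slide; A an arbitrary set, T assumed NSOP_1 with nonforking existence, so that Morley sequences over A exist):
\[
\varphi(x,b) \ \text{Kim-divides over } A \iff \{\varphi(x,b_i)\} \text{ is inconsistent for some Morley sequence } (b_i) \text{ in } \operatorname{tp}(b/A),
\]
\[
\text{Kim's lemma over sets:} \quad \text{"for some Morley sequence" may equivalently be replaced by "for every Morley sequence in } \operatorname{tp}(b/A)\text{"}.
\]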
Right, so for the rest of the slides: one thing I want to mention is an application. The result is that if T is countable and the number of countable models is finite and greater than one, then, as in stable and simple theories, you get the same kind of situation: a collection of finite tuples whose own weight is strictly omega. (NSOP_1? I told you, didn't I: NSOP_1 with nonforking existence. For the rest, we assume T is NSOP_1 and has nonforking existence.) I am not going to talk about that. For the rest of the time, I have about 15 minutes. (Last slide? This one? Right.) About finite weight: people try to work in this direction. A notion of "super-NSOP_1" possibly doesn't make much sense, for some reason I don't know, but on the other hand you can talk about finite weight, and the class of finite-weight theories properly contains the supersimple ones. If it contained less, maybe people would not try to develop the theory, but it contains more; so maybe that is the right context if you really want an analogue of supersimplicity. Anyway. So now, for the rest of the time, I want to talk about the proof of this result, under NSOP_1 with existence; I am going to sketch the proof of Kim's lemma in particular. There are basically three steps. The first step, and this is where the existence axiom is strongly used: given any Morley sequence I, you can find a model M, nonforking independent from I; the position of the two sides is important, because nonforking independence does not usually satisfy symmetry; and such that I is a coheir sequence over M, so in particular a global Morley sequence. The proof of this claim is not too hard, but it needs some idea; basically it uses the notion of the fundamental order, introduced a long time ago by Poizat, and then used by many people: Lascar used it in his paper, and I used it in another joint paper. That gives claim one. Then, using claim one, you get claim two; about claim three I am not going to say anything. Actually, we proved claim one first, then spent some time and could not get further, and then the thought was: why not just assume Kim's lemma, this one, the statement that inconsistency along some Morley sequence is equivalent to inconsistency along every Morley sequence, and prove claim three? Under the assumption of Kim's lemma you can prove it, and that eventually led to the proof of claim two. It turns out the proof is not too hard, but you need some idea. I will talk about the proof of claim two for the rest of the talk. So, the point: let's say the base A is the empty set. You have two indiscernible sequences, two Morley sequences, I and J, starting with the same point, say a_0 equal to b_0. Assume that along one of them, the vertical Morley sequence, the set of instances is consistent. Then it is enough to show that along the horizontal Morley sequence it is consistent too. That is what you want.
So now, here is the point: one has to use the fact that both sequences are Morley. The fact that J is a Morley sequence is used, and the fact that I is Morley is used as well. Since J is a Morley sequence, by a standard argument you can find copies — these L-shaped copies — such that every component has the same type over J; that is just using that J is Morley. So all of these realize the same type over J, and moreover each L-shape together with the rest of J has the same type as J. Given this configuration, you shift the L-shape a little to the right so that the same holds there — all of it has the same type over the tail. Then you unfold this array and make a tree, an omega^{<omega} tree. How do you do that? First, everything has the same type over the tail; take an automorphic image fixing the rest and moving this piece onto that one — then you get this fan shape; then move this onto that; again they realize the same type over the tail. You just keep expanding, unfolding the array, and you obtain an omega^{<omega} tree such that any path has the same type as J, while each sequence of siblings — for any node, the sequence of its siblings — has the same type as I. So my I, originally the vertical sequence, now shows up among siblings, and J is now a path. Next we push this through the modeling theorem for trees — joint work with H. Kim and L. Scow; I am not going to go into the details of the modeling theorem — and what you get is a new, tree-indiscernible array: levels L_0, L_1, L_2, and so on, obtained by compactness and the appropriate Ramsey theorem for trees. In this new array the baseline is essentially J — things are rotated a little, so this bottom line is J — and by the modeling theorem and indiscernibility, everything at a positive level has the same type as the level just above; I am not saying everything has the same type as J. After this twisting it is not clear that every path has the same type as before, but at least every path — a path here means: for any function g from omega to the positive integers, the corresponding branch — has the same type as that one. At this point you can use that these two sequences are, again by indiscernibility, in the Chernikov–Ramsey configuration, so consistency and inconsistency must be preserved, because I am working in an NSOP1 theory. [Audience:] What is a CR sequence? [Speaker:] Here, on the a_i side, in an NSOP1 theory this pattern does not happen; so whenever you have this condition, consistency is preserved. So it is enough to find some positive-valued function g such that along that path the set of instances is consistent; once we find such a path, by NSOP1 the consistency is carried over to where we need it. Okay, now I am going to use claim one.
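Here is a schematic, in LaTeX, of the tree configuration being unfolded from the array; the indexing by ω^{<ω}, and the exact phrasing, are my own reconstruction of what the speaker describes on the board (amsmath and amssymb assumed).

```latex
% Schematic of the unfolded tree (c_\eta)_{\eta \in \omega^{<\omega}} obtained from the array:
% every branch looks like J, every family of siblings looks like I.
\[
\text{for every branch } \pi\in\omega^{\omega}:\qquad
\bigl(c_{\pi\restriction 1},\, c_{\pi\restriction 2},\, c_{\pi\restriction 3},\,\dots\bigr)\ \equiv\ J,
\]
\[
\text{for every node } \eta\in\omega^{<\omega}:\qquad
\bigl(c_{\eta^\frown\langle 0\rangle},\, c_{\eta^\frown\langle 1\rangle},\, c_{\eta^\frown\langle 2\rangle},\,\dots\bigr)\ \equiv\ I.
\]
% The modeling theorem for trees is then used to replace this tree by a tree-indiscernible one,
% and NSOP_1 is what allows consistency of \{\varphi(x,c_\eta)\} to be transferred between branches.
```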
Claim one says the following. Originally, the fact that J is a Morley sequence was used to build this omega^{<omega} tree. Now, since I is Morley as well, you can apply claim one, so you can find a model M, nonforking-independent in the right way, with the relevant sequence indiscernible over it; you may assume the types match, so by claim one you can assume all of this is M-indiscernible — in fact not just that, but each vertical line is a coheir sequence over M, that is, a global invariant Morley sequence. So start from this. Now, because the collection of instances along a vertical line is consistent, the formula at the first level, say, does not Kim-divide over the empty set. Here we use Kaplan–Ramsey's type amalgamation over a model, because we have found a model M. By pigeonhole, some subsequence must have the same type over the first chosen point, and it is still a global Morley sequence; so you can find an element there which is independent, and the formula is already known not to Kim-divide over the model, so by Kaplan–Ramsey the relevant instances are consistent. For the second level, again by pigeonhole, something has the same type over the first two chosen points, so you can find the next step of the path; then a third, and so on: as long as some subsequence has the same type over the points chosen so far, and the new point is independent, the independence theorem over the model lets you extend the path. That is it. I still have five minutes — I actually prepared several versions of this part, for ten minutes left and for five minutes left — okay. A recent result — not really an observation — mainly by Ramsey, or rather Kaplan–Ramsey, or Kaplan–Ramsey–Shelah: under the same assumptions, local character holds. For any finite tuple d and any set A there is a subset A_0 of A, of size at most |T|, such that the type of d over A does not Kim-divide over A_0. This comes from the fact that there does not exist a finite d together with a continuous increasing sequence (A_i : i < |T|^+) of sets, each of size at most |T|, such that the type of d over A_{i+1} Kim-divides over A_i for all i. Here continuity is important, the size bound is important, and the length is important; everything else can happen — in the parametrized random equivalence relations you cannot find an increasing sequence of countable length omega such that each step just forks. So I think that "supersimple", in the precise sense in which supersimplicity is defined — a would-be super-NSOP1 — presumably does not make much sense, and also if you do not have the freedom of— [Audience:] What did you say just now? What doesn't make sense?
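The local-character statement, and the chain formulation it is said to follow from, written out for reference as I understood them from the talk; the cardinal bookkeeping (the |T| bounds) is standard but should be checked against the Kaplan–Ramsey(–Shelah) paper rather than taken from this sketch.

```latex
% Local character of Kim-independence (paraphrase; assumes NSOP_1 + nonforking existence):
\[
\text{for every finite tuple } d \text{ and every set } A \text{ there is } A_0\subseteq A,\ |A_0|\le|T|,
\text{ such that } \operatorname{tp}(d/A) \text{ does not Kim-divide over } A_0 .
\]
% Chain form it follows from: no finite d admits a continuous increasing chain
% (A_i : i < |T|^+) of sets of size at most |T| along which it keeps Kim-dividing:
\[
\neg\exists\, d\ \text{finite},\ (A_i)_{i<|T|^{+}}\ \text{continuous increasing},\ |A_i|\le|T|,
\ \text{with } \operatorname{tp}(d/A_{i+1}) \text{ Kim-dividing over } A_i \text{ for all } i .
\]
```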
I will get to that. The point is: if you have freedom in the choice of the size of the sets, then you do get increasing chains of arbitrary length; and likewise, even with the size restriction, you can get non-continuous chains. So here the continuity, the size bound, and the length bound are all very important; everything else can happen. Another fact: transitivity and lifting hold, so, believe it or not, essentially all the axioms are satisfied; the only thing that fails is one direction of transitivity, namely base monotonicity. One more recent result: p(x, a_0) Kim-divides over a set A if and only if, for any sequence (a_i) in the type which is independent in the appropriate sense — each a_i independent from the earlier ones over A — the set {p(x, a_i) : i} is inconsistent. Believe it or not, this is a very important theorem; the question was originally asked in the Kaplan–Ramsey paper, and it turns out a much weaker condition is actually enough. So this is very nice. I stop my talk here. — So, now there is time for questions.
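Finally, the closing characterization, typeset as I understood it; the precise independence requirement on the witnessing sequence (written here as Kim-independence from the earlier terms over A) is my reconstruction of a garbled passage and should be checked against the paper where the Kaplan–Ramsey question is answered.

```latex
% Closing characterization (reconstruction; the independence condition on (a_i) is my reading):
\[
p(x,a_0)\ \text{Kim-divides over } A
\iff
\{\,p(x,a_i) : i<\omega\,\}\ \text{is inconsistent for every sequence } (a_i)_{i<\omega}\ \text{in } \operatorname{tp}(a_0/A)
\]
\[
\text{with each } a_i\ \text{Kim-independent from } a_{<i}\ \text{over } A .
\]
```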
|
Let T be an NSOP1 theory. Recently I. Kaplan and N. Ramsey proved that in T, the so-called Kim-independence (ϕ(x,a0) Kim-divides over A if there is a Morley sequence ai such that {ϕ(x,ai)}i is inconsistent) satisfies nice properties over models such as extension, symmetry, and type-amalgamation. In a joint work with J. Dobrowolski and N. Ramsey we continue to show that in T with nonforking existence, Kim-independence also satisfies the properties over any sets; in particular, Kim's lemma and 3-amalgamation for Lascar types hold. The modeling theorem for trees in a joint paper with H. Kim and L. Scow plays a key role in showing Kim's lemma. If time permits I will talk about a result extending the non-finiteness (except 1) of the number of countable models of supersimple theories to the NSOP1 theory context.
|
10.5446/59330 (DOI)
| " the general setup of what machine learning looks like in general and what it looks like in the spe(...TRUNCATED) | "There are multiple connections between model-theoretic notions of complexity and machine learning. (...TRUNCATED) |
10.5446/59331 (DOI)
| " I want to thank the organizers both for inviting me to this and for allowing this talk because I s(...TRUNCATED) | "We give (equivalent) friendlier definitions of classifiable theories strengthen known results about(...TRUNCATED) |
10.5446/59332 (DOI)
| " But I'll talk about it nonetheless. It's actually nothing particularly deep. It's a personal obses(...TRUNCATED) | "It is by now almost folklore that if T is a countably categorical theory, and M its unique countabl(...TRUNCATED) |
- Downloads last month: 5
- Size of downloaded dataset files: 131 MB
- Size of the auto-converted Parquet files: 131 MB
- Number of rows: 8,481