doi | transcript | abstract
---|---|---|
10.5446/53517 (DOI)
|
So I'd like to start by thanking the organizers for giving me the opportunity to talk about my work at this virtual conference. I'd like to mention that part of the work I'm talking about today is joint with Denis Osin and Carolyn Abbott, along with some ongoing work with Alex Rasmussen. In geometric group theory, one of the tools available to us for studying a group is to do so by means of a group action, and for any group there is a very natural way to do this. You start with your favorite group, you pick a generating set X for your group, which for my purposes does not necessarily have to be finite, and you construct the Cayley graph of the group with respect to this generating set. The nice thing about constructing this Cayley graph is that we have turned our group into a metric space, with respect to the word metric coming from X, and the group acts on this graph isometrically and coboundedly. So not only do we have a metric space, it comes equipped with a natural interaction with the group. But this process depends very much on the choice of the generating set X, because different generating sets can give Cayley graphs that look very different. As an example, if our group is the integers and my generating set is the standard one, plus and minus one, then the Cayley graph is a bi-infinite line, which is a very intuitive way to think of the integers. On the other hand, if I choose the generating set containing every element of the group, the largest generating set I could choose, then the Cayley graph is significantly different: it is a bounded set of diameter one, because any two integers are now connected by an edge labeled by their difference. We start to see that the first Cayley graph, the line, is very useful for studying our group, since it remembers a lot of its properties, whereas this other Cayley graph remembers nothing about the inherent structure of the group. In fact, if I did this for any group and took the generating set containing every element, this is exactly the Cayley graph I would get; it remembers nothing of the properties of the group it came from. So at the beginning of the joint work with Denis and Carolyn, this was an observation we wanted to formalize: given two generating sets, how do we say that one is better than the other for the purpose of studying the group via the associated Cayley graph? The definition from our work is the following. We take generating sets X and Y, and we say that X is dominated by Y if the following happens: take every element of the generating set Y, consider its word length with respect to the other generating set X, and take the supremum of all these word lengths. As long as that supremum is finite, I say that X is dominated by Y. This dominance relation is a preorder, and every preorder gives rise to an equivalence relation in the usual way: we say that X and Y are equivalent as generating sets if they each dominate the other. The natural thing to do once you have an equivalence relation is to collect the equivalence class of a generating set, which we denote using square brackets.
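For reference, the dominance relation just described can be written compactly as follows; this is only a transcription of the spoken definition, with |y|_X denoting the word length of y with respect to X.

```latex
X \preceq Y \iff \sup_{y \in Y} |y|_X < \infty ,
\qquad
X \sim Y \iff X \preceq Y \ \text{and}\ Y \preceq X ,
```

and [X] denotes the equivalence class of the generating set X under this relation.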
So what are some things to note about this dominance relation? Well, the first thing to note is that it is inclusion reversing: the larger my generating set, the smaller it is in this dominance relation. This is consistent with the observation we just made, that if you choose the largest generating set, the one containing every element of your group, you get the worst Cayley graph to look at, because it doesn't remember much about your group. The second thing is that the dominance relation can be extended to the equivalence classes themselves: I say that the equivalence class of X is dominated by the equivalence class of Y if X itself is dominated by Y, or alternatively, if you can find representatives of these equivalence classes satisfying the same dominance relation. If G happens to be a finitely generated group, then the equivalence class of a finite generating set is the largest equivalence class, simply because the supremum becomes a maximum, which is always finite for a finite generating set. The last thing to note is that if X and Y are equivalent as generating sets, then it follows from the definition that the Cayley graph with respect to X and the Cayley graph with respect to Y are quasi-isometric. So first of all, what is a quasi-isometry? It is a coarse version of an isometry of spaces: up to one constant that allows you to scale the metric, and another constant that you can add or subtract, the two spaces look alike. In other words, the large-scale geometry of the spaces is the same. From our definition it is easy to see why this holds, because the constants realizing the suprema in the two directions precisely determine the constants of the quasi-isometry. But I'm not just interested in equivalence classes of generating sets; I'm interested in a very specific type of equivalence class, namely what we call a hyperbolic structure on the group. This is the equivalence class of a generating set such that the associated Cayley graph is hyperbolic, and when I say hyperbolic I mean Gromov hyperbolic: geodesic triangles are delta-thin for some positive delta. It is fine to call such a structure hyperbolic, because hyperbolicity is a quasi-isometry invariant: no matter which representative I pick from the equivalence class, the associated Cayley graph is still a hyperbolic space; the delta needed may change, but that's OK. I collect all of these hyperbolic structures into a set, which we denote H(G), and I endow it with the order induced from the dominance relation, so we have a partially ordered set. One thing I'd like to mention about H(G) at this point is that we don't actually have to think only of equivalence classes of generating sets. We can start in a much more general setting, that of cobounded actions of the group on hyperbolic spaces, and there is a way to put these into one-to-one correspondence with the elements of H(G). The proof is very similar to the proof of the Milnor-Schwarz lemma; the primary difference is that, because these actions need not be proper, I cannot guarantee that the generating set I end up with is finite, and that is exactly why we do not require finite generating sets to begin with. But this is just something to keep in mind; it's not particularly important for the purpose of this talk.
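For reference, here is a sketch of the two notions used above, with the constants made explicit; these are the standard formulations, and the notation H(G) for the set of hyperbolic structures follows the abstract.

```latex
f : (X, d_X) \to (Y, d_Y) \ \text{is a}\ (\lambda, C)\text{-quasi-isometry if}\quad
\tfrac{1}{\lambda}\, d_X(a,b) - C \;\le\; d_Y\big(f(a), f(b)\big) \;\le\; \lambda\, d_X(a,b) + C
\quad \text{for all } a, b \in X,
\ \text{and every point of } Y \text{ lies within distance } C \text{ of } f(X);

\mathcal{H}(G) \;=\; \big\{\, [X] \;:\; X \text{ generates } G \text{ and the Cayley graph } \Gamma(G, X) \text{ is hyperbolic} \,\big\},
\quad \text{ordered by } [X] \preceq [Y] \iff X \preceq Y .
```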
So what are some of the things we can say about this poset H(G)? One thing we can say almost immediately is that it breaks into the disjoint union of four smaller partially ordered sets, and this follows almost immediately from the work of Gromov. In Gromov's setting, a group G acts on a hyperbolic space X, the hyperbolic space comes equipped with a boundary, and we consider the limit points of the group on the boundary of the space. The action can then be one of five different kinds, essentially characterized by what the set of limit points looks like. The first possibility is that the action is elliptic, and H_e is the poset containing the elliptic structures. This means there are no limit points of the group on the boundary of the hyperbolic space. In terms of structures, the elliptic structure, the equivalence class of the generating set containing every element of the group, is the smallest structure, and the associated Cayley graph is quasi-isometric to a point. The second possibility is that the action is lineal, and H_l is where we collect the lineal structures. This means there are exactly two limit points on the boundary. Moreover, it means the group contains a loxodromic element: there is a bi-infinite axis in the space and this loxodromic element acts along the axis by translation, and any other loxodromic element in this setting has the same two limit points on the boundary. The third possibility is the one I'm most interested in, where we collect the quasi-parabolic structures. What is a quasi-parabolic action? It is one with infinitely many limit points on the boundary of the hyperbolic space, but where we additionally require a global fixed point on the boundary. The last case is that of general type structures: here we also have infinitely many limit points on the boundary, but the group has no fixed point on the boundary. The standard examples of general type actions are free groups acting on their Cayley graphs, which are infinite trees; for example F_2, the free group on two generators, acting on its Cayley graph, a four-valent tree. Now, Gromov's work allows one more possibility: a group acting on a hyperbolic space can act parabolically. However, parabolic actions are never cobounded, and because I'm dealing with actions on Cayley graphs, those actions do not show up in this partially ordered set; that's why there are only four sub-posets here and not five. And why am I interested in H(G)? It gives us a way of studying all possible cobounded actions of the group on hyperbolic spaces simultaneously, and the hope is that if I can understand each of these smaller partially ordered sets, then there is a way to put that information together to understand H(G) as a whole. And not only can I look at all the different actions of my group on hyperbolic spaces; out of this examination there might emerge a candidate for a best action to study in order to understand the group via a group action.
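Schematically, the decomposition just described looks as follows, where Λ(G) ⊆ ∂X denotes the limit set of a cobounded action of G on X, following Gromov's classification as summarized in the talk.

```latex
\mathcal{H}(G) \;=\; \mathcal{H}_{e}(G) \;\sqcup\; \mathcal{H}_{\ell}(G) \;\sqcup\; \mathcal{H}_{qp}(G) \;\sqcup\; \mathcal{H}_{gt}(G),
```

with an action elliptic when Λ(G) is empty, lineal when |Λ(G)| = 2, quasi-parabolic when Λ(G) is infinite and G fixes a point of ∂X, and of general type when Λ(G) is infinite and there is no fixed point on ∂X.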
That brings me to H_qp(G), the poset containing the quasi-parabolic structures. The reason I'm particularly interested in them is that in the joint work with Denis and Carolyn we could say a lot about lineal structures and general type structures, but not much about H_qp(G), and that's why my interest lies in this poset. Furthermore, it was clear to us that quasi-parabolic structures behave very differently with respect to some of the properties we were able to establish for lineal and general type structures. To illustrate this, let me cite a few theorems from our work. It turns out that we can construct groups G_n which have as many lineal structures as we want, but absolutely no quasi-parabolic or general type actions on hyperbolic spaces. We can do the same for general type structures: we can create groups H_n that have exactly n general type structures, but no quasi-parabolic or lineal structures. The natural question is: can we do the same for quasi-parabolic structures? The answer is no, because if you have a quasi-parabolic structure on your group, you definitely also have a lineal structure, and they are related in the sense that the lineal structure is dominated by the quasi-parabolic structure. The machinery behind this is the Busemann pseudocharacter. A pseudocharacter is an almost-homomorphism, that is, a homomorphism up to bounded defect. We start with our quasi-parabolic structure, look at the quasi-parabolic action of the group on the Cayley graph, and when we apply the Busemann pseudocharacter to it, it creates a projection to a line. It does this by crushing enough of the action, in a manner analogous to enlarging the generating set, and because of that we get a lineal structure. And because we have enlarged the generating set, and the dominance relation is inclusion reversing, the lineal action is smaller than the quasi-parabolic action. Moreover, while we had lots of examples of groups with different lineal and general type structures, we had very few examples of groups admitting quasi-parabolic structures. One of the main ones in our paper was the group Z wreath Z, on which you can get an antichain of quasi-parabolic structures. The way to do this is to factor through the actions of the lamplighter groups Z_n wreath Z on their Bass-Serre trees; if you order these by divisibility, you get an antichain of quasi-parabolic structures. But we were unable to come up with examples of groups with a finite number of quasi-parabolic structures. So some open questions from our paper were the following. Does there even exist a group that has a nonzero finite number of quasi-parabolic structures? Is there a group with an uncountable chain of quasi-parabolic structures? And, asking for a more complicated example, is there a group which simultaneously has a chain and an antichain of quasi-parabolic structures? I'd like to mention that if we ask these questions for general type structures instead, the answer is yes: you can pick your favorite acylindrically hyperbolic group, and H_gt of that group will always contain a chain and an antichain of cardinality continuum.
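To make the mechanism explicit: a pseudocharacter is usually taken to be a map q from G to the reals with uniformly bounded defect (and, in most formulations, homogeneity), and the Busemann pseudocharacter of a quasi-parabolic action is such a map. A rough summary of the statement being used:

```latex
\sup_{g, h \in G} \big|\, q(gh) - q(g) - q(h) \,\big| \;<\; \infty
\qquad \big(\text{and, for a homogeneous pseudocharacter, } q(g^n) = n\, q(g)\big),
```

and for every quasi-parabolic structure [X] in H_qp(G), the associated Busemann pseudocharacter produces a lineal structure [Y] in H_l(G) with [Y] dominated by [X].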
So it is natural to ask the same question in the other situation, where you have infinitely many limit points on the boundary of the hyperbolic space. And because we know there is a relationship between lineal and quasi-parabolic actions, it is natural to ask: if our group does have quasi-parabolic actions, is the number of lineal structures always at most the number of quasi-parabolic structures? In other words, does every lineal action of such a group have to come from applying the Busemann pseudocharacter to some quasi-parabolic structure? I was able to answer these questions after the joint work with Denis and Carolyn was finished, and here are the answers. The lamplighter groups, all of them, that is Z_n wreath Z for n at least two, have a finite number of quasi-parabolic structures. There is a group, namely F_2 wreath Z, which has an uncountable chain and an uncountable antichain of quasi-parabolic structures. And there exists a group where the number of lineal structures is strictly larger than the nonzero number of quasi-parabolic structures. All three of these theorems are consequences of a more general theorem that I proved for groups of the form G wreath Z. Here is that theorem. Let G be a group. I claim that the hyperbolic structures of G wreath Z always contain a poset, which I call B(G), that embeds into them. What does B(G) look like? I consider the poset S(G) of proper subgroups of G ordered by inclusion, and I take two disjoint copies of S(G). The reason they are disjoint is that every element of one copy is incomparable, as a structure, to every element of the other copy, and vice versa. However, every structure from these two copies of S(G) dominates a common lineal structure, and that in turn dominates the smallest structure, the action on a point. The two copies of S(G) correspond to quasi-parabolic structures on the wreath product. Moreover, if G is a finite cyclic group of order n, then this embedding is an equality; so for the lamplighter groups, this is exactly the poset of hyperbolic structures on the group. How do we prove this theorem? My work relies heavily on the work of Caprace, Cornulier, Monod, and Tessera. They have a machinery of strictly confining automorphisms, which applies to groups that have a semidirect product structure with Z. I'm not going to define what a strictly confining automorphism is, because it's a rather technical definition. What's important for this talk is that you start with a subset of your base group, say A, that is strictly confining under one of the generators of Z, so t or t inverse, and from this data we can extract a regular quasi-parabolic structure on the group: it is exactly the equivalence class of the generating set consisting of this subset of A together with t to the plus or minus one. And what does the qualifier regular mean? It means that if I take this quasi-parabolic structure and apply the Busemann pseudocharacter to it, the lineal structure I get is the projection of the group to the Z factor; the homomorphism you end up with is the projection to Z. So how does this apply to the group G wreath Z?
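For concreteness, here is the standard decomposition being invoked; this is just the usual description of the restricted wreath product, with t a generator of the acting copy of Z and the direction of the shift depending on conventions.

```latex
G \wr \mathbb{Z} \;=\; \Big( \bigoplus_{i \in \mathbb{Z}} G \Big) \rtimes \mathbb{Z},
\qquad
t \,(g_i)_{i \in \mathbb{Z}}\, t^{-1} \;=\; (g_{i-1})_{i \in \mathbb{Z}},
```

so conjugation by t shifts the coordinates of a finitely supported tuple by one place.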
Well, we can write our wreath product as a semidirect product where the base group is the direct sum of Z many copies of G. How does t act on it? It takes an infinite tuple and shifts every entry one place to the right, and t inverse shifts every entry one place to the left. Now I pick a subgroup H of G, and I define subsets of the base group that will be confining under the action of t and of t inverse. I define Q_H as follows: thinking of the coordinates as starting at index zero, at index zero and above I allow elements of the full group, but at all negative indices I only allow elements of H. And Q'_H is the mirror image: from index zero all the way to the left I have the full group, but on the positive indices I only allow elements of H. The thing to notice is that the first structure is compatible with the action of t, because I can shift the full group, and I can shift the subgroup H into the full group, but I may not be able to shift backwards; and the second structure is compatible with the action of t inverse but not t, for exactly the same reason. It turns out that Q_H is strictly confining under t and Q'_H is strictly confining under t inverse. Doing this for every proper subgroup H of G, I get an entire collection of sets Q_H, all confining under t, and another collection Q'_H, all confining under t inverse. This is exactly what creates the two disjoint copies of S(G) in the partially ordered set, one corresponding to the Q_H's and the other to the Q'_H's. And the incompatibility of the action of t inverse with the first structure, and of t with the second, is, intuitively, what makes them incomparable to each other in the poset. What does the regularity condition give us? It is important because it gives us the common lineal structure: no matter which of these quasi-parabolic structures I pick, applying the Busemann pseudocharacter to it gives exactly the same homomorphism to Z, so the lineal structure they dominate from this construction is the same. The hard part of the proof is showing that there is an equality when G is a finite cyclic group of order n. The reason is that you first have to argue that, in this case, every quasi-parabolic structure comes from a confining subset, and then you have to use long algebraic arguments to show that this confining subset has to look like Q_H or Q'_H for some proper subgroup H of G. But how does this buy us the examples that we want? For the lamplighter groups it is obvious, because I have an equality. And why do I have a finite number of quasi-parabolic structures? Because Z_n has only finitely many subgroups. How many quasi-parabolic structures do I have? Twice the number of proper subgroups of Z_n, so a finite number.
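As a consolidated sketch of the sets just described and of the resulting count: the index conventions follow the talk, and the count simply takes at face value the statement that the number of quasi-parabolic structures on Z_n wreath Z is twice the number of proper subgroups of Z_n.

```latex
Q_H \;=\; \Big\{\, (g_i) \in \bigoplus_{i \in \mathbb{Z}} G \;:\; g_i \in H \ \text{for all } i < 0 \,\Big\},
\qquad
Q'_H \;=\; \Big\{\, (g_i) \in \bigoplus_{i \in \mathbb{Z}} G \;:\; g_i \in H \ \text{for all } i > 0 \,\Big\}.
```

For G = Z_n, subgroups correspond to divisors of n, so the number of proper subgroups is d(n) - 1, where d(n) is the number of divisors of n. For example, n = 2 gives one proper subgroup and hence two quasi-parabolic structures on the standard lamplighter group, while n = 12 gives 6 - 1 = 5 proper subgroups and hence ten quasi-parabolic structures.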
For the example F_2 wreath Z, here is what we do. We start with P(N), the poset of subsets of the natural numbers ordered by inclusion, and embed it into the poset of proper subgroups of F_infinity, the free group on countably infinitely many generators: indexing the generators, any subset of the naturals is mapped to the subgroup generated by the generators with those indices. Now you can embed F_infinity into F_2, the free group on two generators, so you get an embedding into the proper subgroups of F_2, and by the theorem this gives an embedding into the quasi-parabolic structures of F_2 wreath Z. Of course, I could have done this with F_infinity wreath Z as well; the reason I wanted F_2 is that I wanted an example of a finitely generated group with this property. And lastly, what is the example of a group with strictly more lineal structures than quasi-parabolic structures? I claim it is the group K which is the standard lamplighter group direct product with Z. This group has only two quasi-parabolic structures, and they are precisely the quasi-parabolic structures coming from the lamplighter group; the direct factor Z does not contribute any new quasi-parabolic structures. However, I claim that this group has uncountably many lineal structures. How do we get this? Our group K has two independent projections to Z: one comes from the Z factor of the wreath product, and the other from the projection onto the direct factor Z, so you actually get a projection to Z cross Z. Now pick your favorite line in the Euclidean plane, of whatever slope you want, and project the entire grid onto this line. If I do this for lines of different slopes, I get very different, in fact incomparable, lineal actions, and because there is a continuum's worth of slopes to choose from, I get uncountably many incomparable lineal actions; that is why this has cardinality continuum. So not only is the cardinality of the lineal actions strictly greater, it is as different as it can get.
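Here is a rough sketch of the slope construction just described, as I understand it; the map psi_lambda below is notation introduced only for this illustration.

```latex
K \;=\; (\mathbb{Z}_2 \wr \mathbb{Z}) \times \mathbb{Z}
\;\xrightarrow{\ \rho\ }\; \mathbb{Z}^2
\;\xrightarrow{\ \psi_\lambda\ }\; \mathbb{R},
\qquad
\psi_\lambda(m, n) \;=\; m + \lambda n ,
```

where rho combines the projection of the wreath factor onto its Z with the projection onto the direct factor Z. Each slope lambda gives a cobounded action of K on the real line by translations, k sending t to t + psi_lambda(rho(k)), hence a lineal structure, and, as stated in the talk, different slopes give incomparable lineal structures, so there are continuum many of them.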
After I did this work on wreath products, Carolyn and Alex took this machinery of confining subsets and applied it to other groups. The work of theirs I want to mention here is, firstly, their results on the Baumslag-Solitar groups BS(1,n). These groups also have a semidirect product structure, namely Z[1/n] semidirect product with Z, and they showed that the hyperbolic structures of such a group form the following partially ordered set. What is happening here? We take the prime decomposition of n, assume there are k different primes involved, and take the power set of the set {1, ..., k}. That lattice embeds here, and all of the structures in it except the smallest one, which is the lineal action, correspond to quasi-parabolic structures. In addition to these, there is one more quasi-parabolic structure, coming from the action on the hyperbolic plane, and then of course the lineal structure dominates the action on a point. All of these quasi-parabolic structures also have interpretations in terms of confining subsets. The other result of theirs I'd like to mention is the classification of hyperbolic structures on the groups Z squared semidirect product Z, where the extension is defined by a matrix phi in SL(2,Z), a two-by-two integer matrix of determinant one. It turns out the structure we get is the following: two quasi-parabolic structures, which dominate a common lineal structure, which dominates the action on a point. But in terms of confining subsets, what are these quasi-parabolic structures? We take the matrix phi and consider its attracting and repelling eigenlines; in terms of confining subsets of Z squared, the two structures correspond to neighborhoods of the attracting and of the repelling eigenline of phi. So together with Alex and Carolyn, we are currently working on the following questions. We want to know: does there exist a group with an odd number of quasi-parabolic structures? All the examples we know so far have an even number: for the lamplighter groups it is twice the number of proper subgroups of Z_n, which is even; in the Baumslag-Solitar case it is also even; and for Z squared semidirect product Z it is still even. So does there exist a group with an odd number of quasi-parabolic structures, and in particular, does there exist a group with exactly one? What we would like to do is construct groups K_n that have exactly as many quasi-parabolic structures as we want, and I'd like to mention that if we can answer the second question, then the third follows immediately by taking the n-fold direct product of that group, because a quasi-parabolic action of the product would have to factor through one of the copies, and that gives exactly n. I'm also interested in understanding, in the setting of the wreath product, for which groups the equality holds. I know it's true for finite cyclic groups, but I also know it's not true in general, and not even for every finite group: there is a counterexample when G is Z_2 direct product Z_2, the cyclic group of order two direct product with itself. But I suspect the equality could hold at least for some finite symmetric groups. So what conditions does the group G need in order for the equality to hold? The work we are focusing on is aimed at understanding the following. We would like to classify the structures on groups of the form Z to the n semidirect product with Z, where the extension is defined by a matrix phi in SL(n,Z). What we suspect is that, as long as you consider the eigenspaces of phi, the quasi-parabolic structures on this group should be in some one-to-one correspondence with those eigenspaces. We would also like to understand structures on iterated HNN extensions, with the ultimate goal of understanding the structures of polycyclic groups. Because these groups will not admit general type actions, the hope is that we can find specific examples answering these questions within this class, without general type actions getting in the way. We know that we probably won't be able to control the number of lineal structures, but if we can at least have no general type actions and as many quasi-parabolic structures as we want, then we are closer to understanding H(G) as a whole. So that is the direction in which we are headed, and that is all I had to say. Thank you very much.
|
The study of the poset of hyperbolic structures H(G) on a group G was initiated by Abbott-Balasubramanya-Osin. However, the sub-poset of quasi-parabolic structures is still very far from being understood and several questions remain unanswered. In this talk, I will talk about the motivation behind our work, describe some structural results related to quasi-parabolic structures and thus answer some of the open questions. I will end my talk by discussing ongoing work in the area. This talk contains some joint work with C. Abbott, D. Osin and A. Rasmussen.
|
10.5446/53520 (DOI)
|
So welcome everybody to this talk on the action of the Cremona group on a CAT(0) cube complex. Before starting, I would like to thank the organizers of the virtual geometric group theory conference for having invited me to give this talk, and I would also like to thank Guillemin and Fett for the technical support. I will present work which is joint with Christian Urech, and we are interested in the Cremona group. First of all, I will define the notions I need to construct the complex. This group comes from algebraic geometry, so let me recall some notions. For those who are used to algebraic geometry: when I speak about surfaces, I mean regular projective surfaces over an arbitrary field. For those who are not: I just mean projective surfaces which are smooth, without singularities, and you can take the field of complex numbers. A birational transformation between two surfaces is an isomorphism between two open dense subsets, for the Zariski topology. So we are allowed to remove zero loci of polynomials; in the case of surfaces, we are allowed to remove points and curves, and to contract curves. Why are we interested in birational transformations? Because in algebraic geometry, classifying varieties up to isomorphism gives too many classes; this notion is really too rigid. The good notion is to classify them up to birational transformation, so these maps are really important in algebraic geometry. One notion we will focus on a lot, and which will stay with us throughout the talk, is the notion of blow-up. We blow up a point p in a surface S': this means we have a morphism, a map which is well defined everywhere, from a surface S to the surface S'. It contracts the darker line in the picture, which is isomorphic to P1, to the point p, and outside this point p and this line it is an isomorphism. We say that the point p has been blown up, and the line we obtain is called the exceptional divisor. What happens is that if you take two curves downstairs which cross at p, they become separated upstairs: in fact, any point of the exceptional divisor corresponds to a direction of a curve through p downstairs. When we fix a surface S, we can look at its group of birational transformations, and the Cremona group is the case where the fixed surface is the projective plane. As the name suggests, it was Luigi Cremona who introduced this group, and it has been studied from many points of view: from an algebraic point of view of course, but the dynamics of its maps has also been studied, and it has been studied from a geometric point of view as well. In fact, since the years 2008-2010, actions of this group on geometric spaces, on hyperbolic spaces, have been constructed, and a lot of results have been obtained using these actions, for instance that it is not a simple group, that it satisfies the Tits alternative, and so on. The aim of this talk, and of the work with Christian, is to construct an action of the Cremona group on a CAT(0) cube complex. First of all, this gives a new geometric space for the Cremona group; we will see that the construction is really natural, that it unifies a lot of results which were already known, and also that it extends them to any field. And what we had in mind when we started this project, if it worked well, was also to construct
CAT(0) cube complexes for the Cremona groups of higher rank, that is, for the birational transformations of higher-dimensional projective spaces, not just of the projective plane. That was to be the second part of our project, but for now we focus only on the first part: constructing a CAT(0) cube complex for the Cremona group of rank two. For those seeing this group for the first time: a birational transformation of the projective plane can be written explicitly in coordinates. To [x : y : z] we associate three homogeneous polynomials of the same degree and without common factor. This is a rational map, and if you want it to be birational, you need it to have an inverse of the same form. We ask that the polynomials be homogeneous of the same degree so that the map is well defined on P2, and without common factor because a common factor could be cancelled; once this is done, the degree is well defined, and it is called the degree of f. Here we draw a dashed arrow, because there are points where f is not well defined: they are exactly the points where the polynomials f0, f1 and f2 vanish simultaneously, since such a point would be sent to [0 : 0 : 0], which is not allowed. Let me give some examples, recalling what I said on the previous slide. A first subgroup of the Cremona group is the group of automorphisms of P2, the maps which are well defined everywhere and whose inverse is also well defined everywhere. These are the maps of degree one, meaning that all the polynomials are linear, and because we ask for an inverse and we are on projective space, this subgroup is isomorphic to PGL3. Another important Cremona transformation is the standard quadratic involution sigma, and all my examples will use sigma, so keep its definition in mind: [x : y : z] is sent to [yz : xz : xy], so it has degree two. If you look at an affine chart, dividing by the last coordinate and setting z equal to one, it sends (x, y) to (1/x, 1/y), so it is an involution. If two of the three coordinates are zero, which corresponds to the three coordinate points here, then the image is [0 : 0 : 0], so there are three points where sigma is not well defined. And if you look, for instance, at the yellow line of equation x = 0, the image of a point on it has the last two coordinates equal to zero, so it is sent to the point [1 : 0 : 0]: the yellow line is contracted by sigma to the yellow point, the orange line to the orange point and the red line to the red point. Because sigma is an involution, the same holds for sigma inverse. What we see is that, apart from the three lines which are contracted and the three points which are not well defined, sigma is an isomorphism from the complement of these lines to the complement of the other ones. Now we come to an important theorem for surfaces, the Zariski theorem, which explains why blow-ups are really important. It says that if you have a birational transformation between surfaces, then we can find a surface W and two compositions of blow-ups, pi_1 and pi_2, such that the diagram commutes. In other words, you can decompose f as a composition of blow-ups followed by blow-downs, the inverses of blow-ups.
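Stated a little more formally, this is the usual form of the factorization theorem for surfaces being used here:

```latex
% Zariski factorization: every birational map of smooth projective surfaces
% factors through a common blow-up.
f : S \dashrightarrow S', \qquad
\exists\ W \ \text{smooth projective and compositions of blow-ups}\ \pi_1 : W \to S,\ \pi_2 : W \to S'
\quad \text{with} \quad f \circ \pi_1 \;=\; \pi_2 .
```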
This is called a resolution of f, and we can choose a minimal one by taking W minimal, which means that if you have another W' with two other compositions of blow-ups, pi_1' and pi_2', then we have such a diagram. Here the arrow from W' to W is a solid arrow: the map is well defined in this direction, so it can contract things, but nothing has to be blown up; in fact W' is obtained by blowing up points of W. It means that in W' we have done too many things: we have blown up points that we then have to contract again, so some of the work was not necessary. Let's see how this theorem works for our favorite map, sigma. This is the sigma we have seen before, where the three lines are contracted to the three points, and these are the three points where it is not well defined. We want to write sigma as a composition of blow-ups and blow-downs. First of all, we can blow up these points. We obtain a surface, and we have seen that each point is replaced by the exceptional divisor, a P1, and that lines which were crossing at the point are separated up here; this line downstairs corresponds to this line here between the yellow and the red. Now, if we compose sigma with this blow-up map, there are still two points which are not well defined, because blowing up only changed things above the point [0 : 1 : 0] and is an isomorphism outside, so we changed nothing elsewhere. So the composition is still not well defined at two points. We can continue and blow up these points, arriving at this surface. This point is again not well defined for the composition of sigma with the blow-up, so we blow it up once more, and then we arrive at a surface where the composition of sigma with the blow-up is well defined everywhere. We see that this map contracts the orange line to this point, the red line to this point, and this line to this point. So we have decomposed sigma into three blow-ups and, though I did not draw the intermediate steps, three inverses of blow-ups. Coming back to the theorem, it allows us to define what are called the base points of f: they are the points which are blown up in the minimal resolution of f on this side, by pi_1. So here the base points of sigma were the three coordinate points [1 : 0 : 0], [0 : 1 : 0] and [0 : 0 : 1]. Let me mention, with an example, that not all base points lie in the surface S, or, in the case above, in P2. For instance, if we look at this map from P2 to P2, the only point of P2 where it is not well defined is the point with z = 0 and x = 0, that is, the point [0 : 1 : 0]. If we blow up this point, we get a P1, and if we look at the composition, there is a new point on this exceptional line which is not defined for the composition. This point is also called a base point of the map, an infinitely near base point. And the notion of blow-up will be essential for the construction of the cube complex.
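To pin down the running example in coordinates (this matches the description of sigma given earlier):

```latex
\sigma : \mathbb{P}^2 \dashrightarrow \mathbb{P}^2, \qquad
[x : y : z] \longmapsto [\, yz : xz : xy \,], \qquad \deg \sigma = 2, \quad \sigma^2 = \mathrm{id},
```

with base points [1:0:0], [0:1:0] and [0:0:1]; the line {x = 0} is contracted to [1:0:0], {y = 0} to [0:1:0] and {z = 0} to [0:0:1], and in the affine chart z = 1 the map reads (x, y) goes to (1/x, 1/y). In particular sigma has three base points, matching the three blow-ups in the resolution described above.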
So now we can construct our cube complex. First of all, we want to define the vertices. They will be equivalence classes of marked surfaces. What I mean by this is that S is a rational surface, which means that it is birational to P2, and phi is a birational map from S to P2; it exists because S is rational. The equivalence relation is given by requiring that the composition of the markings is an isomorphism: what we want exactly is that phi'^{-1} phi, the map which respects the markings and goes from S to S', is an isomorphism. So everything we do is up to isomorphism of the surfaces we are looking at. We put an edge between two vertices if we can choose representatives such that phi'^{-1} phi is the blow-up of a point or the inverse of the blow-up of a point. These elementary maps will be our edges. Now let's do an example: take two points p and q in P2, and a point q' lying on the exceptional divisor of q. I repeat: when we blow up the point q, an exceptional divisor appears, a P1, and q' is a point on it which was not in P2 before. This is the blow-up of the point p, so there is an edge; this is the blow-up of the point q, so there is an edge; and once we have blown up q we can also blow up the point q', which gives another edge. But careful: from this surface we cannot blow up the point q' directly, because q' only appears after we have first blown up q. Now, once we are here, with the points p and q in P2: the map here is an isomorphism outside the point p and the exceptional divisor E_p, so we have the image of q somewhere, and we can decide to blow up that point, which gives a new vertex; but if we go the other way and blow up the image of p, it gives the same vertex. So this gives us the vertices. Of course, what we see here is a square, and since we want a cube complex, we want to fill in this square. Our definition of a square is: if we have two distinct points on a surface, then blowing up one point and then the other, in either order, gives a square, because the order does not matter. Here we have a third point, so we might think we get a cube, but because this third point does not belong to P2 we do not get a cube; it generates two squares instead. We have a square here, and once we are on this surface we can either blow up the point p or the point q', and that gives a new square. Now we come to the definition of cubes. An n-cube is given by 2 to the power n vertices admitting representatives of the following form: there is a surface S_R containing n distinct points, and all the other vertices are obtained by blowing up the subsets of these points in all possible ways. To say it correctly, the markings satisfy that phi_R^{-1} phi_J is the blow-up of a subset of the points p_1, ..., p_n, and every subset gives a vertex. So here, if we have (P2, identity) and three points p, q, r in P2, blowing up one of the points gives these vertices, blowing up two of them gives these vertices, and blowing up all three gives this vertex.
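A compact restatement of the construction just described, with the notation of the talk; everything is taken up to isomorphism of marked surfaces.

```latex
\text{Vertices: classes } [S, \varphi], \ S \ \text{a smooth projective rational surface}, \ \varphi : S \dashrightarrow \mathbb{P}^2 \ \text{birational},
\quad [S, \varphi] = [S', \varphi'] \iff \varphi'^{-1} \circ \varphi \ \text{is an isomorphism};

\text{Edges: } [S, \varphi] \ \text{--} \ [S', \varphi'] \iff \varphi'^{-1} \circ \varphi \ \text{is the blow-up of one point, or its inverse};

\text{Cubes: a vertex } [S_R, \varphi_R] \ \text{and } n \ \text{distinct points } p_1, \dots, p_n \in S_R \ \text{span an } n\text{-cube whose } 2^n
\ \text{vertices are the blow-ups of the subsets of } \{p_1, \dots, p_n\}.
```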
So, some remarks about this cube complex. First of all, it is not locally compact, because at (P2, identity) we can blow up any of the points of P2, of which there are infinitely many, so this vertex has infinitely many edges. It is not even finite dimensional, because any finite set of distinct points generates a cube, so you can have cubes of dimension as big as you want. What saves us a bit, in order to get results, is that this cube complex is oriented. For two vertices joined by an edge, we orient the edge as follows: if phi'^{-1} phi is the blow-up of a point, the edge is oriented from the first vertex to the second. So we have an orientation on the edges of the cube complex, and we say that the height of the target vertex is plus one compared to the height of the source. The next step is to prove that this cube complex is a CAT(0) cube complex, and I will just give the idea of the proof. Connectedness is given by the Zariski theorem we have seen before: given two vertices, composing one marking with the inverse of the other gives a birational transformation, and by the Zariski theorem you can decompose this map into blow-ups and blow-downs; this gives the edges of a path between the two vertices. For simple connectedness, take a loop; we can homotope it into the one-skeleton and look at the vertices it passes through. We may assume the loop does not backtrack, so that v_i is never equal to v_{i+2}. Among these vertices, choose one, v_{i_0}, of minimal height; then the one just before and the one just after, being linked to it by an edge, have height plus one compared to this vertex. This means that on a surface representing v_{i_0} there exist two distinct points, one giving a surface representing v_{i_0 - 1} and one giving a surface representing v_{i_0 + 1}. Once we have two distinct points on a surface we have a square, so we can replace v_{i_0} by the opposite vertex v_{i_0}' of that square. In this way we either increase the minimal height of the vertices of the loop or decrease the number of vertices realizing the minimal height, and to see that this process stops we can use the Zariski theorem again: there is a surface which dominates all the vertices of the loop.
When we do this operation of replacing v_{i_0} by v_{i_0}', that dominating surface also dominates the new vertex, so in fact we homotope the loop to the vertex represented by a surface dominating all these vertices. And the links are flag essentially by the definition of the cube complex; there is something to check, but it follows from the construction. Now, what is nice is that the Cremona group acts on this complex, and it acts simply through the markings: applying sigma to (P2, identity) gives the vertex (P2, sigma), and because sigma is an involution, this vertex is sent back to the other one. So this blue-green cube is sent to this red-white cube, this one is fixed, and this one corresponds to the vertex whose representative is S_sigma, the minimal resolution of sigma, with the marking being the composition of these three blow-ups. We have these three squares, here, here and here, because this cube corresponds to going up to S_sigma, and this cube corresponds to contracting directly down to S_sigma. If we cross the green wall and then the dark green wall, we are doing the first blow-up and then the second one, and what happens is that the line passing through these two points becomes a minus-one curve, a curve of self-intersection minus one; these are exactly the curves that can be contracted to a point with the surface remaining smooth. So once we are here, instead of first blowing up this point, we could have contracted the yellow curve to a point and then blown up the blue point, arriving here, and then it remains to contract the orange and the red; this is exactly what these squares encode. And what is nice with this cube complex is that there is a correspondence between notions associated to the Cremona group and notions associated to the complex. For instance, twice the number of base points of a birational transformation f is the combinatorial distance between the vertex (P2, identity) and the vertex (P2, f). Also, twice the dynamical number of base points, a notion introduced by Jérémy Blanc and Julie Déserti, is exactly the translation length of f acting on the CAT(0) cube complex with the combinatorial distance, so that quantity acquires a geometric meaning too. We see immediately that if the translation length is zero, the map is elliptic, and we know from the construction of the cube complex that elliptic elements are exactly the elements conjugate to an automorphism of a surface: you conjugate your transformation by the marking of that surface and it becomes an automorphism. Such elements are important because among birational transformations there are elements which are not automorphisms but which are regularizable, meaning that you can conjugate them so that they become automorphisms of some surface, so that in some sense their dynamics is still nice. Blanc and Déserti proved that the regularizable elements are exactly the ones whose dynamical number of base points is zero, and for us this is almost true by definition.
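In symbols, the correspondence just described reads as follows, where b(f) is the number of base points of f (including infinitely near ones) and the dynamical number of base points is, as I understand the Blanc-Déserti invariant, the limit of b(f^n)/n.

```latex
d\big([\mathbb{P}^2, \mathrm{id}],\ [\mathbb{P}^2, f]\big) \;=\; 2\, b(f),
\qquad
\ell(f) \;=\; \lim_{n \to \infty} \frac{d\big(v,\ f^n \cdot v\big)}{n} \;=\; 2\, \mu(f),
\qquad
\mu(f) \;=\; \lim_{n \to \infty} \frac{b(f^n)}{n},
```

where d is the combinatorial distance and l(f) the translation length of f on the cube complex; in particular the translation length vanishes exactly when f is elliptic, that is, regularizable.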
Once we have proved this, which is easy, we recover their result immediately, and we get it over any field, which is what we need, whereas their proof was written for specific fields, even if it could probably have been adapted. Another result we can obtain, and I will go over it quickly, is that every birational transformation is conjugate to an algebraically stable model. This is a kind of generalization of the previous statement: not every transformation can be conjugated to an automorphism of a surface, but it can always be conjugated to a birational transformation of a surface whose dynamics under iteration is nicer. This is a useful notion in dynamics and algebraic geometry, and it is a theorem of Jeffrey Diller and Charles Favre; their proof is somewhat involved, while for us it is almost immediate, because a stable model corresponds to a vertex lying on the axis of f. It is immediate that such a vertex exists: by a result of Haglund, the isometries are semisimple, so there are no parabolic elements; either f is loxodromic and we have an axis, or it is elliptic, and then the surface on which it is conjugate to an automorphism is your stable model, or at least one of them. Another remark is that they proved it over C, and their proof probably works over any algebraically closed field, but we need it over any field, and with the cube complex it is really straightforward. Another kind of result, and I will go quickly, is that we can show that some subgroups are regularizable. A subgroup is regularizable if it can be conjugated into the automorphism group of a surface; so now we are not looking at a single transformation but at a whole subgroup. Subgroups with property FW are regularizable almost by definition, because property FW means that every action on a CAT(0) cube complex has a fixed point, so such a subgroup is elliptic for our action, and stabilizing a vertex means being regularizable. Also, subgroups whose elements have uniformly bounded degree are regularizable, because we can bound the number of base points by a constant times the degree, and the number of base points gives the distance between (P2, identity) and its image under g; so the orbit is bounded, we get a fixed point, and because the complex is oriented we even get a fixed vertex. These are results which were already known, but we give a really easy proof which works over any field, and with one single tool we recover several results, so it is really a nice object associated to the Cremona group. Nevertheless, some questions remain open about the Cremona group of rank two, and one is the following: if you have a finitely generated subgroup of the Cremona group such that each of its elements is regularizable, does it follow that the whole subgroup is regularizable? Rephrased in terms of the cube complex: if we have a finitely generated subgroup of the Cremona group such that each of its elements is elliptic, does it follow that the whole subgroup is elliptic? So we have an algebraic question which we can rephrase completely in terms of the cube complex. For instance, when a group acts on a finite-dimensional CAT(0) cube complex, we know that this result is true, but it is not known in the infinite-dimensional case, which is our case. So we are now trying to see whether, with some conditions on the group, we can still obtain such results, which would be really nice.
If someone knows some kind of restriction on groups acting on infinite-dimensional CAT(0) cube complexes that yields this kind of result, we would be happy to hear about it. Before finishing, let me just say that the second part of the article is about building a cube complex for the Cremona groups of higher rank; the construction is not exactly the same, but the idea is not far off, we just had to solve the problems that appeared when we wanted to generalize it. So thank you for your attention.
|
The Cremona group is the group of birational transformations of the projective plane. Although this group comes from algebraic geometry, tools from geometric group theory have proved powerful in its study. In this talk, based on a joint work with Christian Urech, we will build a natural action of the Cremona group on a CAT(0) cube complex. We will then explain how we can obtain new and old group theoretical and dynamical results on the Cremona group.
|
10.5446/53522 (DOI)
|
So thank you very much to the organisers for organising this very nice virtual conference and for the opportunity to speak. I'm going to be talking about quasi-actions and almost normal subgroups. To motivate the talk, I'm going to describe how quasi-actions arise quite naturally in various problems in geometric group theory when you want to study groups up to quasi-isometry, and I'm going to talk about the class of Z-by-hyperbolic groups. Then I'm going to talk about the central notion of the talk, the notion of a discretisable space; these are spaces which are coarsely discrete in the sense that every cobounded quasi-action on the space essentially acts in a discrete way. And once you have such an action, you get an almost normal subgroup, which is quite interesting. Then I'm going to talk about dichotomies between continuity and discreteness: for instance, every finitely generated group is either coarsely continuous in some sense, which is the case if it is a cocompact lattice in a rank one Lie group, or, when it is not such a lattice, it is coarsely discrete, it is discretisable. And then I'm going to talk about some applications to quasi-isometric rigidity; in lots of situations you have normal or almost normal subgroups being preserved by quasi-isometries. A baby case that I'm going to be talking about is the following theorem of Kapovich, Kleiner and Leeb and of Mosher, Sageev and Whyte, or at least it uses the tools that they developed. It says the following: if we have a finitely generated group quasi-isometric to the product of Z with a non-abelian free group, then the group is virtually Z times a non-abelian free group. So this class of groups is QI rigid. In general, splitting as a direct product, or even having an interesting infinite normal subgroup, is not a QI invariant, and this is illustrated most strikingly by the Burger-Mozes groups, which are quasi-isometric to a product of two non-abelian free groups but are simple: they do not split as direct products and have no nontrivial normal subgroups. The broad outline of the proof of this theorem is the following. We have a mystery group G, and all we know about it is that it is quasi-isometric to Z times F2. But we can show that it admits a weak sort of action, something called a quasi-action, which I'll define in a few slides, on F2, on one of the factors. A theorem of Mosher, Sageev and Whyte then says that you can quasi-conjugate, that is, straighten, this quasi-action to a nice isometric action on a tree. Once you do this, you can look at the kernel of this action, show it is two-ended, and deduce the theorem. These are the two key ingredients. I'm going to be considering a more general class of groups, the class of Z-by-hyperbolic groups. These are groups with an infinite cyclic normal subgroup such that the quotient is a non-elementary hyperbolic group, that is, a hyperbolic group which is neither finite nor two-ended. And I show the following: every finitely generated group quasi-isometric to a Z-by-hyperbolic group is also Z-by-hyperbolic. This is a far-reaching generalization of the previous theorem. To complement it, we have a QI classification: two such groups are quasi-isometric if and only if their quotients are quasi-isometric. The QI classification was actually known before, but the QI rigidity is new.
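Written out, the class of groups and the statements just described are the following; this is only a restatement of what was said in the talk.

```latex
G \ \text{is } \mathbb{Z}\text{-by-hyperbolic} \iff \text{there is a short exact sequence}\ \ 1 \to \mathbb{Z} \to G \to Q \to 1
\ \ \text{with } Q \ \text{non-elementary hyperbolic}.
```

If a finitely generated group is quasi-isometric to such a G, then it is itself Z-by-hyperbolic; and two groups of this form, with quotients Q_1 and Q_2, are quasi-isometric if and only if Q_1 and Q_2 are quasi-isometric.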
The key idea, the first step of the proof, is just as before: if we have a group quasi-isometric to a group of the form Z-by-Q, then the result of Kapovich, Kleiner and Leeb that I mentioned earlier says that G quasi-acts on Q. This motivates the following problem: how do we understand quasi-actions of groups on more general hyperbolic metric spaces, other than trees? The point of this talk is not just to prove this theorem; it is really to develop some machinery and tools for understanding quasi-actions and the things you can do with them. So what is a quasi-action? If we have a group G and a metric space X, then a quasi-action of G on X, denoted by this symbol here, is a collection of quasi-isometries of X to itself with uniform constants, so every f_g is a (K, A)-quasi-isometry for some uniform K and A, such that the following hold. Firstly, applying f_h and then f_g is the same as applying f_{gh}, up to some uniform error; and f_e, where e is the identity element, is the same as the identity map up to some uniform error. To simplify notation, we write g dot x rather than f_g(x). If we have two quasi-actions of G on X and on Y, we say that they are quasi-conjugate if there is a quasi-isometry from X to Y which is equivariant with respect to these quasi-actions up to a uniform error, that is, if this inequality holds. A key example: if we have a quasi-isometry from X to Y and an isometric action of G on X, then the quasi-isometry quasi-conjugates the isometric action to a quasi-action on Y. Part of the theory of quasi-actions is to go in the other direction, to quasi-conjugate a quasi-action to a nice isometric action. To motivate my approach, I'm going to consider two hyperbolic groups which behave quite differently from the point of view of quasi-actions. Here pi_1(S) is the fundamental group of a closed hyperbolic surface, and F2 is the free group of rank two. I'm going to claim that, from the point of view of quasi-actions, pi_1(S) should be thought of as a continuous or smooth object, whereas the free group of rank two ought to be thought of as a discrete or combinatorial type of object. Of course, both groups are discrete; they are both finitely generated groups, so in some sense they both belong in the discrete world. But from the point of view of coarse geometry, pi_1(S) belongs on the continuous side because of the following theorem, due to Tukia, Gabai, and Casson-Jungreis, and later refined by Markovic. It says that every quasi-action on the fundamental group of a closed hyperbolic surface can be quasi-conjugated to a genuine isometric action on the hyperbolic plane. In contrast, Mosher, Sageev and Whyte, in the result I mentioned earlier, show that every cobounded quasi-action on F2, the free group of rank two, can be quasi-conjugated to an isometric action on a locally finite tree. The point is that H2 is definitely a continuous, smooth type of object, you can take limits and do analysis on it, whereas trees are discrete combinatorial objects, locally finite graphs, so they belong on the discrete side.
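To fix the constants for what follows, the quasi-action axioms and the notion of quasi-conjugacy described above can be written as follows; this is the standard formulation, with f_g and the constants K, A as introduced on the slide.

```latex
\text{A quasi-action of } G \text{ on } X:\ \text{maps } f_g : X \to X,\ g \in G,\ \text{each a } (K, A)\text{-quasi-isometry, with}
\quad
d\big(f_g(f_h(x)),\, f_{gh}(x)\big) \le A
\quad \text{and} \quad
d\big(f_e(x),\, x\big) \le A
\qquad \text{for all } g, h \in G,\ x \in X;

\text{quasi-conjugacy: a quasi-isometry } \Phi : X \to Y \ \text{with}\ \ \sup_{g \in G,\ x \in X} d\big(\Phi(g \cdot x),\ g \cdot \Phi(x)\big) < \infty .
```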
In contrast, the automorphism group of the tree, with the usual compact-open topology, is totally disconnected. Similarly, the isometry group of H2 acts transitively on H2, a connected space, whereas the automorphism group of the tree has discrete orbits. So we'd like to say that pi_1(S) is coarsely continuous in some sense, and F2 is coarsely discrete. I want to make this precise and come up with the right definition of coarsely discrete — something where F2 is coarsely discrete but pi_1(S) isn't — and I want a lot more examples of coarsely discrete and coarsely continuous groups. I also want this to be useful: I want to be able to prove some interesting QI rigidity results using this technology, which I do. So here is the important definition. I'm going to assume X is a proper geodesic metric space. We say it is discretisable if it satisfies either of the following equivalent conditions. Firstly: every cobounded quasi-action on X can be quasi-conjugated to an isometric action on a locally finite graph. If you've seen other theorems where you quasi-conjugate quasi-actions to isometric actions, you might feel a bit disappointed by this, because X might be a nice space with interesting local geometry — it might be a symmetric space, it might be a building, it might be a nice CAT(0) cube complex — and we ruin all of that local geometry by passing to Y, which is just some locally finite graph quasi-isometric to X. The key point is not the geometry of Y, but what you can deduce about G and how it acts on Y, knowing that you have an isometric action on a locally finite graph. This motivates the equivalent formulation: any cobounded quasi-action on X has something called a coarse stabilizer. So what's a coarse stabilizer? If we have a quasi-action of G on X, we say a subset of G is bounded if a quasi-orbit of it is bounded; and if one quasi-orbit is bounded, then every quasi-orbit is bounded. Then, given a subgroup H of G, we say it is a coarse stabilizer of the quasi-action if, firstly, it is bounded, and secondly, every bounded subset is contained in finitely many left H-cosets. The first condition, H being bounded, basically says that H can't be too big; the second condition says H can't be too small. It has to be just the right size to capture enough of the geometry of this quasi-action. The key example: if we have an isometric action on a locally finite graph, then the stabilizer of a vertex is a coarse stabilizer of this action. In the definition of discretisable spaces, this gives the direction one implies two, and it's not hard to show two implies one. But going forwards, having a coarse stabilizer is really the property that we want to use; it tells us about the structure of G, so this is really the definition we want. And this is closely related to the notion of an almost normal subgroup. We say that a subgroup of a group is almost normal, denoted by a squiggly triangle symbol, if every conjugate of H is commensurable to H. For example, every normal subgroup is almost normal. And here is an example of a group which contains an almost normal subgroup that is not normal: the Baumslag–Solitar group BS(1,2), given by the presentation with generators a and t and relation t a t^{-1} = a^2. The infinite cyclic subgroup generated by a is an almost normal subgroup that isn't normal, and isn't even virtually normal.
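For reference, the two notions just used can be written as follows; this is only the speaker's definitions in symbols.
\[
H \ \text{is almost normal in } G \iff [H : H \cap gHg^{-1}] < \infty \ \text{and} \ [gHg^{-1} : H \cap gHg^{-1}] < \infty \ \text{for all } g \in G,
\]
\[
BS(1,2) = \langle a, t \mid t a t^{-1} = a^{2} \rangle, \qquad \langle a \rangle \ \text{almost normal but not virtually normal in } BS(1,2).
\]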
And a more general example: if we have a group acting on a locally finite graph, then the stabilizer of a vertex is an almost normal subgroup of the group G. So these are quite ubiquitous, and we can already see the connection between almost normal subgroups and discretisable spaces. There is also a notion of a quotient space. If we have a normal subgroup, then we have a quotient group; if we have an almost normal subgroup, we have a quotient space — a space which is well defined up to quasi-isometry and essentially has all the geometry you'd expect the quotient group to have, except it may not actually be a group. We say that a group is finitely generated relative to a subgroup if there is some finite set, which we can assume is symmetric, such that that set together with the subgroup generates the whole group. The quotient space of an almost normal subgroup H of G is, as a set, the set of left cosets, and it is equipped with a relative word metric: the distance between two cosets gH and kH is the least n such that g^{-1}k can be written as a product of this form. It's not hard to see that this quotient space is a proper quasi-geodesic space — it is quasi-isometric to a geodesic metric space — it is discrete, and it is well defined up to quasi-isometry: it doesn't matter which finite relative generating set we pick. Going back to the example on the previous slide: for the Baumslag–Solitar group and the infinite cyclic almost normal subgroup H, the associated quotient space is quasi-isometric to the Bass–Serre tree. One of the nice things about discretisable spaces is that if we have a cobounded quasi-action on X with a coarse stabilizer H, then we have the following analogue of the Milnor–Schwarz lemma: the group G is finitely generated relative to the subgroup H, H is an almost normal subgroup of G, and the quasi-action of G on X is quasi-conjugate to the isometric action of G on the quotient space. So essentially, once we know we have a coarse stabilizer H, we can forget about the original quasi-action: all of the geometry of that quasi-action, up to quasi-conjugacy, can be seen from the isometric action of G on the quotient space. G acts just by the natural left multiplication on cosets, and we've replaced this not-so-nice quasi-action with a very nice, concrete, algebraic isometric action. An interesting corollary is that if we have two cobounded quasi-actions of G on X and on Y, with coarse stabilizers H and K, then the two quasi-actions are quasi-conjugate precisely when H is commensurable to K. So the geometric problem of deciding whether two quasi-actions are quasi-conjugate can be converted into an algebraic problem: deciding whether these subgroups are commensurable. Another observation: if the group is finitely generated, so that the group itself has a nice geometry, and we have an almost normal subgroup, then the group can be thought of as the total space of a coarse bundle over the quotient space, where the fibres correspond to left H-cosets. I'm not going to define what a coarse bundle is, but the intuitive idea is this picture: this is the Cayley graph, or Cayley complex, of the Baumslag–Solitar group, and if we collapse all the left H-cosets to points, we get the Bass–Serre tree of the Baumslag–Solitar group.
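For reference, the relative metric on the coset space described above can be written as follows; this is my rendering of the definition, with S the finite symmetric relative generating set.
\[
d\bigl(gH, kH\bigr) \;=\; \min\Bigl\{\, n \ \Bigm|\ g^{-1}k = h_0 s_1 h_1 s_2 h_2 \cdots s_n h_n,\ \ s_i \in S,\ h_i \in H \,\Bigr\},
\]
which makes the coset space quasi-isometric to the Cayley graph of G with respect to the (usually infinite) generating set S together with H.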
And likewise, if we take preimages of points of the Bass–Serre tree, we recover the original group. This coarse bundle terminology has been used in lots of important QI rigidity results — for instance by Farb and Mosher, by Whyte, by Mosher, Sageev and Whyte, and by Eskin, Fisher and Whyte — and it is an important tool. Just having this almost normal subgroup gives you this structure. So far, the only discretisable, coarsely discrete space that we've encountered is the free group. But actually there are loads of them; it's a very ubiquitous concept. We have the following trichotomy for hyperbolic spaces. In the continuous case, every cobounded quasi-action is quasi-conjugate to an isometric action on a negatively curved homogeneous space. These negatively curved homogeneous spaces have isometry group a Lie group, virtually connected in most cases; this is the coarsely continuous case. Then there is a mixed case, an action on a space which has both discrete and continuous behaviour: every cobounded quasi-action can be quasi-conjugated to an isometric action on a millefeuille space. These were defined by Caprace, Cornulier, Monod and Tessera in their study of locally compact amenable hyperbolic groups, and they are a sort of hybrid of a tree and a negatively curved homogeneous space: the discrete behaviour comes from the action on the tree, and the continuous behaviour comes from the action on the homogeneous space. And if we are not in one of these special cases, then the space X has to be discretisable: every cobounded quasi-action on such a space can be quasi-conjugated to an isometric action on a locally finite graph, or equivalently we have a coarse stabilizer. This situation arises very often. If we're only interested in hyperbolic spaces quasi-isometric to finitely generated groups, then we can be a bit more specific. In the continuous case, every cobounded quasi-action is quasi-conjugate to an isometric action on a rank one symmetric space — a lot more structured than just knowing it is a negatively curved homogeneous space. In the discrete case, when we're not one of these things, every such space is discretisable. In particular, using the QI rigidity of cocompact lattices in rank one Lie groups, we can deduce that every hyperbolic group is either virtually a cocompact lattice in a rank one Lie group, or it is discretisable. This is the key point. I'm not going to prove this theorem, but I'll indicate some of the key steps. Using the geometry of the quasi-action, we give G the structure of a topological group; this is based on work of Kevin Whyte. This topological group a priori won't be locally compact — it may not even be Hausdorff — but you can quotient out by something to make it Hausdorff and then complete it to a locally compact group; this is all essentially done by Whyte. Once we have a nice locally compact group, we apply structure theory due to Montgomery–Zippin and Gleason–Yamabe, from their solution to Hilbert's fifth problem, and we essentially deduce that it is a Lie group or that the space is discretisable. There is another case to consider, which is when this locally compact group G-hat is amenable.
For instance, when we complete the action of the Baumslag–Solitar group BS(1,2) on its Bass–Serre tree, we get a locally compact amenable group, and in that case the work of Caprace, Cornulier, Monod and Tessera that I mentioned on the previous slide classifies, or characterises, all of these groups. This phenomenon isn't just something that happens in coarse negative curvature; it holds much more generally. For instance, for a finitely presented group of virtual cohomological dimension two, one of the following holds: in the continuous case, it is virtually Z^2 or virtually the fundamental group of a closed hyperbolic surface; in the mixed case, it is a generalised Baumslag–Solitar group — these are groups which are coarsely warped products of a real line and a tree, and again, as for the millefeuille space, the continuous behaviour comes from the action on the real line and the discrete behaviour from the action on the tree, so these groups mix coarsely continuous and coarsely discrete behaviour; and everything else is discretisable. This uses very different methods: it uses the structure of second cohomology and ideas due to Kleiner and others, as well as bits of JSJ theory. I'm not going to talk about this case any more, and I'll go back to the theorem I stated at the beginning of the talk: if we have a finitely generated group quasi-isometric to a Z-by-hyperbolic group, then it has to be Z-by-hyperbolic. We now have the tools to prove this. Firstly, if a group is Z-by-hyperbolic, then the infinite cyclic normal subgroup is virtually central; that's very easy to see. Then a result of Gersten says that the group has to be quasi-isometric to a product Z times Q, where Q is the non-elementary hyperbolic quotient. This uses the fact that the comparison map from bounded cohomology to ordinary cohomology is surjective in degree two, so the cocycle defining the central extension is a bounded cocycle. Then the result of Kapovich, Kleiner and Leeb that I mentioned at the beginning says that if we have G', quasi-isometric to G, then G' has to quasi-act on the hyperbolic space Q. Now there are two cases. Kleiner and Leeb prove the case where Q is quasi-isometric to a rank one symmetric space, and there one uses the fact that you can quasi-conjugate the quasi-action of G' on Q to an isometric action on the symmetric space. But even then you're not done, because you have to show that, modulo the kernel of the action, this is a proper action. You can do this, but it involves growth estimates; it's not immediate from the classification of quasi-actions on rank one symmetric spaces. However, if the space Q isn't quasi-isometric to a rank one symmetric space, then it has to be discretisable, and then we're basically done, because we have a coarse stabilizer H. It's not hard to see that the coarse stabilizer has to be a two-ended group, and without loss of generality we can assume it's actually infinite cyclic. So we have an infinite cyclic almost normal subgroup, and there is a homomorphism from the whole group G' to the abstract commensurator of H, which is isomorphic to the multiplicative group of nonzero rationals. This is an analogue of the fact that if you have a normal subgroup, then you have a homomorphism to the automorphism group of the normal subgroup; here we only have an almost normal subgroup.
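Schematically, the commensuration homomorphism being used here is the following; the identifications of the commensurators are standard, but the notation is mine.
\[
\phi\colon G' \longrightarrow \mathrm{Comm}(H), \qquad \phi(g) = \Bigl[\, h \mapsto g h g^{-1},\ \ H \cap g^{-1}Hg \to H \cap gHg^{-1} \,\Bigr],
\]
\[
\mathrm{Comm}(\mathbb{Z}) \cong \mathbb{Q}^{\times}, \qquad \mathrm{Comm}(\mathbb{Z}^{n}) \cong \mathrm{GL}(n, \mathbb{Q}).
\]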
So we have a homomorphism from the ambient group to the abstract commensurator of the almost normal subgroup. But H is undistorted: G' is quasi-isometric to Z times Q, and H is at finite Hausdorff distance from the Z factor in this direct product. Because of this, the image of the map phi has to be contained in {1, -1}, and that means you can pass to some finite index subgroup of H which G' normalises. And now we're done. An interesting observation here is that the discretisable case, where we understand the model space less well than in the symmetric space case, is actually much easier. It's a slightly paradoxical fact that things are much easier in the generic case — any hyperbolic group that's not quasi-isometric to a rank one symmetric space — even though rank one symmetric spaces are essentially the prototypical hyperbolic spaces. So in some sense the prototypical hyperbolic groups aren't very representative of the whole family of finitely generated hyperbolic groups. We can generalise this, or partially generalise it, to other central extensions. If we assume that we have a residually finite group which is quasi-isometric to a central extension of Z^n by Q, where Q is a non-elementary hyperbolic group, then G has to be a central extension of Z^n by some Q', where Q' is quasi-isometric to Q. One thing that jumps out at you is that I assumed residual finiteness. Do I need it? What's going on here? This uses subgroup separability in some way, together with the observation that if we have an almost normal subgroup which is separable, then that is a sufficient condition for it to be commensurable to a normal subgroup — and that's what you want in order to prove this. If we don't assume residual finiteness, we can still deduce some interesting algebraic information. If we have a group G which is quasi-isometric to such a central extension, then there is an almost normal free abelian subgroup such that the quotient space is quasi-isometric to Q. Moreover, because G is quasi-isometric to a central extension, we see that the image of the natural map from G to the abstract commensurator of Z^n, which is isomorphic to GL(n,Q), is actually contained in the group of orthogonal n-by-n matrices. This sort of indicates why things are easier when n is equal to one: orthogonal one-by-one matrices are just 1 and -1, whereas for Z^2 and Z^3 and so on there is more room to move around without distorting distances. Recently, Leary and Minasyan gave examples of CAT(0) groups that aren't biautomatic. These groups are quasi-isometric to Z^2 times F2, but they have no normal Z^2 subgroup, so they are counterexamples to the theorem at the top of the slide if we remove the residual finiteness hypothesis. If you read their paper, you'll see that these groups are HNN extensions defined by some orthogonal matrix, so you can see this map to the group of orthogonal matrices in their paper. So, some future directions for this research, and applications. A question that we're thinking about a lot is: for which finitely generated groups G and normal, or almost normal, subgroups H is the following sort of meta-theorem true?
If we have some finitely generated group G' which is quasi-isometric to G, then there is some normal or almost normal subgroup H' which is quasi-isometric to H, such that the associated quotient groups or quotient spaces are quasi-isometric. In general there are lots of counterexamples to this, but there are also lots of interesting cases where it is true. Known cases include: a central free abelian subgroup whose quotient group is quasi-isometric to a symmetric space of non-compact type; several instances where both the normal subgroup and the quotient are free abelian — in the nilpotent case this is due to Gromov, and in the non-nilpotent polycyclic cases it is due to Eskin, Fisher and Whyte, and to Peng and Dymarz. If H is an almost normal subgroup of G and the quotient space is quasi-isometric to a tree, that was shown by Mosher, Sageev and Whyte, and it is essentially equivalent to G being a graph of coarse Poincaré duality groups. More generally, if we have an almost normal coarse Poincaré duality subgroup such that the quotient space is infinite-ended, and we assume a few very mild finiteness hypotheses, then we also have a positive answer to the meta-theorem. And if we have a central Z^n such that the quotient is hyperbolic, then it's also true — that's what I've been talking about in this talk. There are many more cases, and many more applications of some of this machinery, and the ideas and concepts I've talked about suggest that this question could have a positive answer in a lot more cases: we could have QI invariance of normal subgroups much more often than was previously thought. Thanks very much for making it to the end, and please get in touch if you have any questions or comments. Thank you very much.
|
If a group G acts isometrically on a metric space X and Y is any metric space that is quasi-isometric to X, then G quasi-acts on Y. A fundamental problem in geometric group theory is to straighten (or quasi-conjugate) a quasi-action to an isometric action on a nice space. We will introduce and investigate discretisable spaces, those for which every cobounded quasi-action can be quasi-conjugated to an isometric action on a locally finite graph. Work of Mosher-Sageev-Whyte shows that free groups have this property, but it holds much more generally. For instance, we show that every hyperbolic group is either commensurable to a cocompact lattice in a rank one Lie group, or it is discretisable. We give several applications and indicate possible future directions of this ongoing work, particularly in showing that normal and almost normal subgroups are often preserved by quasi-isometries. For instance, we show that any finitely generated group quasi-isometric to a Z-by-hyperbolic group is Z-by-hyperbolic. We also show that within the class of residually finite groups, the class of central extensions of finitely generated abelian groups by hyperbolic groups is closed under quasi-isometries.
|
10.5446/53523 (DOI)
|
So, hello everyone. First of all, let me begin by thanking the organizers for inviting me to speak in this very nice virtual conference. As the title says, today I would like to speak about this idea of spaces of cubulations. What we would like to do is the following: we have some cubulated group G, and ideally we would like to understand all the possible ways in which this group can be cubulated. Let me begin by fixing the terminology a little. We say that a group is cubulated if it admits a proper and cocompact action on some CAT(0) cube complex by cubical automorphisms — isometries that take vertices to vertices; for most CAT(0) cube complexes, all isometries have this property. I should really say cocompactly cubulated, because I think that has become the standard terminology, but I'm just going to say cubulated for simplicity, meaning the notion I've just described. Similarly, a cubulation is any such proper and cocompact action of our fixed group G on some CAT(0) cube complex. A quick reminder of why people care about CAT(0) cube complexes and why they've received so much attention in recent years: when a group does admit a cubulation, it is known to satisfy very nice algebraic properties. First of all, CAT(0) cube complexes are very special CAT(0) spaces, so any cubulated group is in particular a CAT(0) group, and this already gives a very nice list of properties that I'm not going to go into right now. In addition, cubulated groups satisfy the Tits alternative, which was originally shown by Sageev and Wise: every subgroup of a cubulated group is either virtually abelian or contains a non-abelian free subgroup. Cubulated groups are also known to have finite asymptotic dimension, shown by Wright, which for instance implies the Novikov conjecture. But the most interesting properties are for those groups that are called virtually special, a notion introduced by Haglund and Wise. A large class of cubulated groups satisfies this additional notion; for instance, every hyperbolic group that is cubulated is virtually special by the work of Agol. And these virtually special groups have really extraordinary properties: they embed in SL(n,Z); they are (virtually) residually finite rationally solvable, RFRS, which is a very important property when you want to study fibring questions for these groups; and all their quasi-convex subgroups are separable, where quasi-convex has the usual meaning for hyperbolic groups, but can also be defined so that this works for virtually special groups. Given all these great properties that cubulated groups have, there is a long list of very classical groups that are known to be cubulated, hence known to satisfy them. But today we're not going to be interested in either of these two lines of research: we're not going to cubulate any new groups, and we're not going to deduce new properties of cubulated groups — at least not immediately. I guess that is the end goal, but not at this stage.
So, as I said, what we would like to do is understand, in some sense, all the cubulations of a given cubulated group. Before I say more, I should be careful and make things a bit more precise: since I want to study all the possible cubulations of a group, I should say when two cubulations are the same and when they are different. I defined cubulations as actions on cube complexes, which are required to be proper and cocompact, and we say that two of these actions are the same if there exists an isomorphism between the two CAT(0) cube complexes that conjugates one action into the other — an equivariant isomorphism between the two cube complexes. The basic observation that gets this whole idea of the space of cubulations started is that when a group does happen to admit a cubulation, it tends to admit infinitely many of them — infinitely many pairwise distinct ones, where distinct is in the sense I've just described. This is interesting, and it is perhaps the reason why there is such a long list of groups that are known to be cubulated: if a group does admit a cubulation, it is relatively easy to construct one and thereby show that the group is cubulated. Not easy at all in practice, but easier than it could be, let's say, because there are so many cubulations: the construction of a cubulation is quite a flexible procedure. You don't have to be too careful, you don't have to arrange things too precisely; there are many things you can do that will result in a cubulation. Before I go on, let me spend a few more words on this, because I think it is best exemplified by how you normally cubulate hyperbolic groups. There is a very good procedure to cubulate hyperbolic groups — when you can, which is not always — coming from the work of Sageev and of Bergeron and Wise. It roughly goes as follows: you have your hyperbolic group G that you want to cubulate, and your goal is to show that for any pair of points in the Gromov boundary of the group, you can find a quasi-convex subgroup whose limit set separates these two points at infinity; these quasi-convex subgroups will then have to be codimension one. The result of Bergeron and Wise tells you that if you can separate any two points in the boundary by the limit set of some quasi-convex subgroup — even though you might have used infinitely many conjugacy classes of subgroups in doing so — then finitely many suffice, and these finitely many quasi-convex subgroups are precisely the hyperplane stabilizers of some cubulation. This exemplifies very well the flexibility you have when cubulating in this setting: if you're cubulating a group this way and you have your infinitely many subgroups, you know that finitely many will suffice, but you can always add more, and you are still going to get a cubulation by the work of Sageev — no matter how many quasi-convex codimension-one subgroups you throw in, as long as you use finitely many, the action is still going to be cocompact and proper. But this also shows you another important feature.
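Schematically, the criterion just described says the following; this is my paraphrase of the Bergeron–Wise theorem, not a verbatim statement.
\[
\forall\, \xi \neq \eta \in \partial G \ \ \exists\, H \le G \ \text{quasi-convex and codimension one with } \Lambda H \ \text{separating } \xi \ \text{from } \eta
\]
\[
\Longrightarrow \ \exists\, H_1, \dots, H_k \ \text{such that the action of } G \ \text{on the dual CAT(0) cube complex of } H_1,\dots,H_k \ \text{is proper and cocompact.}
\]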
So if you have a hyperbolic group that you've cubulated this way — and this is, for instance, the way you cubulate hyperbolic small cancellation groups, as one example — you don't really know any of the cubulations. You construct your infinite collection of subgroups that do all the separation you need, and then a compactness argument from the work of Bergeron and Wise tells you that finitely many subgroups suffice, but not which ones. So you know cubulations exist, but you don't really understand any of them in general. For instance, for hyperbolic small cancellation groups in general, it could be that all cubulations have really, really high dimension; as far as I know this is not really understood, and it is certainly known that some of these groups do not have any cubulations of dimension three. So this is a delicate thing, and we don't really understand the space of cubulations for essentially any group; it is an interesting thing to study. But let me give you two more concrete reasons for studying the space of cubulations of a group. At this stage I should really be saying, as I wrote here, the collection or the set of cubulations of a group, because really this is just a set: I haven't put any additional structure on it yet. The first reason is just a classic theme in maths: whenever you have a group, it is interesting to study geometric structures associated to the group, which here I mean in the very broad sense of topological spaces whose fundamental group is my group G — let's say there is no torsion — equipped additionally with a metric of some kind with some specific properties. Here you can make a natural parallel with the theory of representation varieties: if you are interested in studying discrete faithful representations of some discrete group into some semisimple Lie group, then this is exactly equivalent to a problem of this sort. You want to understand all the manifolds whose fundamental group is G — again, say, torsion-free — together with a complete Riemannian metric on these manifolds that is locally isometric to the symmetric space of the Lie group. And you can phrase this kind of problem in a way that is very similar to this idea of classifying all cubulations: you have a symmetric space and a discrete group, and you want to understand all the actions by isometries of your discrete group on the symmetric space, where you identify two actions exactly when there exists an isometry of the symmetric space that conjugates one into the other — which is pretty much how I defined two cubulations being the same. For us, in the context of cube complexes, we have to be a bit more general: while in the theory of representation varieties one normally fixes the Lie group, here we cannot fix the universal cover of our spaces. We really have to allow our CAT(0) cube complexes to vary to get something interesting going on, because automorphism groups of cube complexes are totally disconnected. Okay, this was all quite general and abstract, so let me give you a more concrete reason why we should study the set of cubulations of a group.
And the reason is that this should give us some information on the outer automorphism group of our cubulated group G, hopefully for quite general groups G. Ideally, it would be great to use this to study the outer automorphism group Out(G) of special groups, in the generality of a special group G in the sense of Haglund and Wise. So let me say why Out(G) acts on the set of cubulations of G. The point is that whenever you have a cubulation of G — in fact, any action of G on a metric space — and you have an automorphism of the group G, you can always twist your action by the automorphism: rather than letting your group elements act on the metric space directly, you first apply your group automorphism to all your group elements, and then you let the result act on the metric space. This is a priori a new action. If you twist by an inner automorphism, then, by my definition of two cubulations being the same, the cubulation does not change, because an inner automorphism is conjugation by some group element of G, and that gives exactly the equivariant isomorphism you need to show that the cubulation hasn't changed. But in general, if you twist by an outer automorphism, this does change the cubulation. So you get an action of Out(G) on this set of cubulations, and you might hope to use it to say something — although, again, at this stage this is just a set, and actions on sets don't really carry much information. The entire goal of this whole idea of the space of cubulations would be to find a natural topology, or some natural metrics, that you can put on this set and that are preserved by the action of Out(G), and maybe to find some simplicial or combinatorial structure on this set of cubulations. Ideally, we would be able to construct some simplicial complex whose vertices are cubulations with some nice properties, where we join two vertices when the corresponding cubulations are close in some sense, and ideally this object would carry some interesting information. This kind of strategy has worked exceedingly well for outer automorphisms of free groups — that's basically the whole idea underlying Culler and Vogtmann's Outer space — and there are some promising results in this direction also for untwisted outer automorphisms of right-angled Artin groups, due to Charney, Stambaugh and Vogtmann. Okay, so at this stage I hope I've provided some motivation, but this is all very general and very abstract, so let me give you some more concrete examples. First, I want to give more examples of this idea that when a group is cubulated, it is usually cubulated in infinitely many different ways, and that this variety of ways has some interesting features. But second, I need to mention some caveats: there can also be cubulations of a group that are different for stupid reasons, and this is something we are going to have to be really wary of and work around. I'm going to talk about these non-meaningful ways of changing a cubulation in a moment, but first let's talk about the good news, the nice examples. So what do I mean when I say that two cubulations are different for meaningful reasons? First of all, examples here should include anything that is obtained by twisting.
So if twisting a cubulation by an outer automorphism changes the cubulation, then I should consider these two cubulations to be different for meaningful reasons, exactly because one of my motivations is studying outer automorphisms: anything coming from the action of an outer automorphism, I should be interested in. Now let's look at a specific group. If we look at a non-abelian free group, then a natural collection of cubulations to consider are exactly the one-dimensional cubulations, which in this setting are quite an interesting class: these are just proper and cocompact actions on simplicial trees, and they more or less make up the Culler–Vogtmann Outer space. Any two simplices in Outer space — equivalently, any two vertices in the spine of Outer space — correspond to two one-dimensional cubulations of the free group, and if you take different simplices, the cubulations are different. I am going to consider them to be different for meaningful reasons, because the construction of Outer space uses the geometry of the free group: you are using that free groups are fundamental groups of graphs. So this is specific to this setting. A second example we can look at are closed surface groups. Whenever you have a multicurve on your closed surface, you can lift it to the universal cover and consider a dual object to the collection of lifts of these curves in the universal cover; by Sageev's construction, this yields a cocompact action on a CAT(0) cube complex. I'm looking at finite multicurves, and if you ask, more or less, that your multicurve is filling, then you also gain properness of the action. So any filling multicurve gives you a cubulation of your surface group, where filling means that it cuts your surface into a collection of disks. Now if you take different — non-isotopic — multicurves, you get different cubulations, where different is again in the sense above. And again I consider these cubulations to be different for meaningful reasons, because this whole thing is based on the geometry of the surface. Finally, you can extend this example from surfaces to closed hyperbolic 3-manifolds: instead of multicurves, you use finite collections of quasi-convex immersed surfaces, and the fact that you have a lot of them — enough to have many interesting cubulations — comes from the work of Kahn and Markovic, which shows that you really have a lot of quasi-Fuchsian surfaces in any closed hyperbolic 3-manifold. So again, any sufficiently large collection of surfaces gives you a cubulation. Let me point out that in these last two examples, for surfaces and hyperbolic 3-manifolds, if you throw in enough curves or enough surfaces, you can make the dimension of the CAT(0) cube complex arbitrarily high; so even for a fixed surface you can make cubulations with dimension tending to infinity, which is different from my first example, where everything was one-dimensional. Okay, so these are all nice examples, because they are based on the geometry of these very specific groups. Examples that are not meaningful — ways of changing a cubulation that are not meaningful because they are not based on the geometry of the group — are the following.
So let me show them to you in the case of Z acting on the real line — a very simple example, but they work in absolute generality. We cubulate the real line by putting a vertex at every integer point, joining consecutive vertices by edges in the obvious way, and letting Z translate one notch to the right. There are two procedures — I'm going to call them procedure A and procedure B — and, as I said, they work more generally and will come up later in the talk, so please try to remember them. Procedure A consists in attaching a loose edge to every vertex, just an edge sticking up and going nowhere. Procedure B consists in blowing up every edge to a square, so you get a chain of squares attached along diagonally opposite vertices. In both cases Z still translates, we still have CAT(0) cube complexes, and the actions are still proper and cocompact, but things are different from before. Then there is a third procedure, procedure C, which is just barycentrically subdividing — I'm drawing it for a tree, because for R it would change the metric but not my picture of it, so it's better this way. In this one-dimensional setting, we are just adding midpoints of edges as vertices. These three procedures, as I said, can be performed on absolutely any cubulation of any cubulated group, and they give you a new cubulation. So these procedures cannot carry any interesting information, because they do not depend on the specific group in any way. Let me say a few more words on why they work on any cubulation. Barycentric subdivision: this is quite clear — you can always barycentrically subdivide a cube complex, and you can view this as replacing every hyperplane of the cube complex with two parallel copies of itself. Attaching loose edges is also something you can do quite generally: you have some cubulation, you pick one orbit of vertices, you attach a loose edge to every vertex in this orbit, you do it equivariantly — you equivariantly extend the action — and you get a new cubulation. Procedure B is maybe the one that requires a little more thought, because we want to blow up every edge to a square while retaining the fact that we have a CAT(0) cube complex and that the action is proper and cocompact. The way you should really think about this is that it is a modified barycentric subdivision: rather than blowing up every hyperplane to two parallel copies of itself, we blow up every hyperplane to two transverse copies of itself. This can be formulated quite easily in terms of the pocset of halfspaces of the cube complex, and it works in absolute generality. These procedures tell you that any cubulated group has infinitely many cubulations, but for very stupid and very uninteresting reasons — not the cool ways of having infinitely many cubulations that I showed here on the left, but an annoying way that makes the space of cubulations of any group infinite and very large for no good reason. So we want to avoid these procedures, and the way to go is to restrict to CAT(0) cube complexes that are nice in some way: we are going to assume that they satisfy two assumptions, being essential and being hyperplane-essential, and this will stop us from performing at least procedures A and B.
Barycentric subdivisions are not going to be a problem — not immediately; I'll talk about them later. We say that a CAT(0) cube complex is essential — this definition is due to Caprace and Sageev — if none of its halfspaces is at finite Hausdorff distance from the hyperplane that defines it. Procedure A always results in something that is not essential: as you see in this picture, whenever I attach a loose edge, I'm creating a hyperplane, namely the midpoint of this loose edge, and the top tip of the loose edge is a halfspace determined by this hyperplane that stays at bounded distance from the hyperplane. So it kills essentiality. On the other hand, we say that a CAT(0) cube complex is hyperplane-essential if each hyperplane is itself an essential CAT(0) cube complex. Here it is important to notice that hyperplanes of CAT(0) cube complexes themselves have the structure of a CAT(0) cube complex: we can look at all the cubes of our CAT(0) cube complex X and intersect them with any given hyperplane, and this decomposes the hyperplane into lower-dimensional cubes, giving it a structure of cube complex, which actually turns out to be a CAT(0) cube complex. So we can ask that the hyperplanes, with these induced cubical structures, are essential in the sense of the definition above. And you see that procedure B always results in a cube complex that is not hyperplane-essential: it preserves essentiality of the whole cube complex, but it always kills essentiality of the hyperplanes. In this picture we are looking again at the chain of squares from before. The hyperplanes are the segments parallel to the sides of the squares, going through the middle. Such a hyperplane does not violate essentiality of the whole cube complex, because I can move infinitely far left and right — the halfspaces go infinitely far from the hyperplane. But if I look at essentiality of the hyperplane itself, the cubical structure on the hyperplane is just a segment, two vertices joined by an edge, and this does have one hyperplane both of whose sides are bounded. And this is bad. Okay, a couple of remarks. All the cubulations in examples one, two and three are essential and hyperplane-essential — so the cubulations of those specific groups constructed there all satisfy these properties; these are not crazy notions. Secondly, it is not really restrictive to only look at essential, hyperplane-essential cubulations, because Hagen and Touikan developed a procedure called panel collapse that allows you to start from any cubulation of any group, do some collapsing, and end up with a cubulation that is essential and hyperplane-essential. This procedure changes many properties of the original cubulation, but it also retains some of them. The important thing is that every cubulated group actually has essential and hyperplane-essential cubulations, so if we only look at these cubulations it is not restrictive: we are not getting a smaller class than the class of cubulated groups.
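In symbols, for a halfspace \(\mathfrak{h}\) bounded by a hyperplane \(\hat{\mathfrak{h}}\) (the notation is mine):
\[
X \ \text{essential} \iff \text{no halfspace } \mathfrak{h} \ \text{is contained in a bounded neighbourhood of } \hat{\mathfrak{h}};
\]
\[
X \ \text{hyperplane-essential} \iff \text{every hyperplane, with its induced cubical structure, is an essential CAT(0) cube complex.}
\]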
And so these are a good candidate for the space of cubulations of a group, because they avoid all the useless stuff caused by procedures A, B and C. Okay, so after this long overview, let me give you some results. The first one connects to what I was saying before: at this stage we only have a set of cubulations, and we want some additional structure, and for sure the first thing to do is to put a topology on this set of cubulations. There is one thing you can always do: the equivariant Gromov–Hausdorff topology. This is a very natural notion of closeness of actions — it works for actions on any metric spaces — but it is very hard to understand what it looks like if you don't have a model space for your space of cubulations. A different strategy is to embed the space of cubulations into some vector space, or at least some linear space, and consider the subspace topology, because then we understand much better what the properties of this topology are. This is achieved via length functions: the idea is to transform cubulations into functions on the group, because functions on the group are a lot easier to understand, in a sense. Every cubulation is a proper and cocompact action on a cube complex, and to each such action we can associate the function from the group to the non-negative reals that assigns to every group element its translation length inside the cube complex — the least displacement of a point of the cube complex by this specific group element. We call this the length function of the cubulation (the formula is displayed a bit further below). It is immediate that if two cubulations are the same, as defined above, then they have the same length function. You can wonder whether the converse holds: are cubulations with the same length function the same? Here procedures A and B already kill this hope in full generality: even if you allow only procedure A, or only procedure B, then starting from any cubulation of any group you can construct infinitely many cubulations that are pairwise not the same but pairwise have the same length function. So this is bad — but it is not worrying, because all these pairwise different cubulations with the same length function are constructed via procedures A and B, which we've already decided are not interesting to us. And indeed, if we restrict to essential and hyperplane-essential cubulations, things are much better behaved. Here is something that I proved with Jonas Beyrer last year. First of all, if the group G is Gromov hyperbolic and cubulated, then everything works perfectly well: two essential, hyperplane-essential cubulations of G are the same if and only if they have the same length function. Secondly, if you want to look at non-hyperbolic groups, then the same result holds — two cubulations are the same if and only if they share the same length function — as long as you consider cubulations satisfying some additional assumptions. First of all, they should be irreducible, that is, they should not split as a product, which was implicit in the hyperbolic case since it follows from hyperbolicity.
And moreover, they should have no free faces — equivalently, the CAT(0) metric should be geodesically complete. Having no free faces implies, in particular, essentiality and hyperplane-essentiality, but it is a stronger assumption: it is like asking not just that hyperplanes are essential cube complexes, but that any intersection of hyperplanes is an essential cube complex. In fact, under the stronger assumption that there are no free faces, part two holds also for things that are not cubulations: it works for more general actions on CAT(0) cube complexes, where you might not be proper and you might not be cocompact. It is enough to have some non-elementarity — there should be no finite orbit at infinity, in the visual boundary, and no fixed point inside your cube complex — and some minimality: there should be no proper convex subcomplex left invariant by the action. Okay, so now that we have these results on length functions, we can embed the set of cubulations of G into the topological vector space of real-valued functions on the group, and so we get a topology on the space of cubulations. In fact, if we only look at cubulations up to homothety, then we also get an embedding of the space of cubulations into projectivised length functions, an infinite-dimensional projective space. In joint work we also showed that the closure of the set of cubulations of G satisfying the assumptions of this theorem, inside this infinite-dimensional projective space, is compact, so you get a compactification of the set of cubulations. This is really an analogue of how you compactify Teichmüller space by adding the Thurston boundary, and also an analogue of the usual compactification of Outer space in the setting of free groups. Just as in those two cases — Teichmüller space and Outer space — where boundary points can be interpreted as group actions on real trees, where you might lose properness but still have relatively nice actions, here you can do the same: boundary points in this compactification can be viewed as degenerations of sequences of cubulations, and these are actions on median spaces, which are a sort of non-discrete analogue of cube complexes. Again, these are non-proper actions on median spaces, but they still satisfy some niceness conditions, and this is a very interesting thing to study, for instance in order to understand outer automorphisms of special groups in general. Let me conclude with one last result, this time joint with Mark Hagen, also from last year, which answers a natural question regarding this space of hyperplane-essential cubulations. Let me recall what I said before: by panel collapse, which was developed by Hagen and Touikan, every cubulated group admits at least one essential, hyperplane-essential cubulation. The natural question now is: does every cubulated group admit infinitely many essential and hyperplane-essential cubulations? Is there some other procedure, other than the A, B and C that I described above, that will always modify a cubulation retaining essentiality and hyperplane-essentiality, but that will not be meaningful, because for some reason it works for every cubulation? And the answer is no: there are no more stupid ways of deforming a cubulation.
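For reference, the length function used in these statements is (standard notation):
\[
\ell_X\colon G \to [0,\infty), \qquad \ell_X(g) \;=\; \inf_{x \in X} d_X\bigl(x,\, g \cdot x\bigr),
\]
and the uniqueness theorem above says that, for essential hyperplane-essential cubulations (with the extra assumptions in the non-hyperbolic case), \(\ell_X = \ell_Y\) forces the two cubulations to be equivariantly isomorphic.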
And for instance, if you look at Burger–Mozes groups, they have a unique cubulation that is essential, hyperplane-essential, and with no parallel hyperplanes. Here, by no parallel hyperplanes, I mean no pairs of hyperplanes that look like they come from a barycentric subdivision. Before, when we were talking about length functions, barycentric subdivisions were not a problem, because they modify the metric on the cube complex, so I see that in the length function; but here they modify the cubical structure, so they do give me something different, and I really need to rule them out. But if you are not allowed to use procedures A, B and C, then there exists a unique cubulation of every Burger–Mozes group. Let me point out that this result really just assembles lots of very deep results of other people: Chatterji–Fernós–Iozzi superrigidity goes in here, there are results of Caprace and De Medts on automorphism groups of trees, there is some of the work of Chalon, and there is also, of course, the work of Burger and Mozes themselves. Putting everything together with a few lemmas on cube complexes yields this result. What we actually really showed, together with Mark Hagen, is the second part: for cubulated non-elementary hyperbolic groups the picture is completely different — they always have infinitely many essential, hyperplane-essential cubulations with no parallel hyperplanes. So what we observed for free groups, surface groups and hyperbolic 3-manifold groups is actually a general property of all cubulated hyperbolic groups. Specialness plays a big role in this, so it would be interesting to know whether this holds for all virtually special groups, because our proof really also needs hyperbolicity. And let me just point out, to conclude, that this result is clear if the outer automorphism group of G is infinite; but the point is that many hyperbolic groups have finite outer automorphism group — for instance, all closed hyperbolic 3-manifold groups — and in general anything that is not assembled from free groups and surface groups will have finite outer automorphism group. So you need to do something different in that case. I will stop here. Thank you a lot for listening, and I hope everyone is doing well. See you soon.
|
The theory of group actions on CAT(0) cube complexes has exerted a strong influence on geometric group theory and low-dimensional topology in the last two decades. Indeed, knowing that a group G acts properly and cocompactly on a CAT(0) cube complex reveals a lot of its algebraic structure. However, in general, "cubulations’’ are non-canonical and the group G can act on cube complexes in many different ways. It is thus natural to try and formulate a good notion of "space of all cubulations of G'', which would prove useful in the study of Out(G) for quite general groups G. I will describe some results in this direction, based on joint works with J. Beyrer and M. Hagen.
|
10.5446/53472 (DOI)
|
Okay, thank you, Min Hai and Si, for the kind invitation. It is my great pleasure to give a talk in this conference, even though I cannot be there in person. Today I will talk about joint work with Ricardo Alonso, Yoshio Morimoto and Weiran Sun. The main theorem that I would like to present today shows that, in the perturbative framework, if we assume the initial perturbation is bounded and small in a weighted L-infinity space, then the solution sits in the same function space. This approach is new compared to the existing approaches to the existence theory in the perturbative framework. That is the main result. All of you are experts, so I don't think I need to introduce the equation at length. Before I go to the L-infinity solutions, I'm going to recall the work on L2 first. Both of these results are in fact for perturbations with an algebraic decay tail in the velocity variable: even though the background Maxwellian decays exponentially, the perturbation has a much slower decay. These are the three collaborators: Alonso at Texas A&M in Qatar, Morimoto from Kyoto University, and Weiran Sun from Simon Fraser University. This is the equation — we all know the Boltzmann equation — and the purpose of these results is the case when the collision cross-section B has an angular singularity. The cross-section is assumed to be of the following form: it has two parts, a kinetic part, |v - v_*| to the power gamma, where we assume gamma is positive, so we consider only the hard potential case, and an angular part whose singularity covers the full range — the power of theta is negative 2 minus 2s, and in the results we consider the case when s is between 0 and 1. This is motivated by the inverse power law potentials and so on. Now there is a lot of work — here I list only a very few — about searching for function spaces with very minimal regularity in the space variable x. In bounded domains, there is a series of works initiated by Yan Guo using the L2–L-infinity approach, with a lot of very good results. In the whole space, we started many years ago, with Alexandre, Morimoto, Seiji Ukai and Chao-Jiang Xu, trying to show that if the perturbation is in H^s in x, with s greater than 3/2 because we are in three dimensions, and L2 in v, then one can find a global solution; but at that time we failed and only obtained local existence. Recently there is a very interesting result by Renjun Duan, Shuangqian Liu, Shota Sakamoto and Robert Strain. They use a function space based on the Wiener algebra in x: they take the Fourier transform in x and sum over frequencies after taking the supremum in t. Using this norm they can show that if the initial data is bounded in it, then the solution is also bounded — well-posedness. All these results, including some others that I didn't mention, are based on an inequality of the following type: you need to find a norm such that the space is an algebra, because the Boltzmann equation is nonlinear — you have f times g — so in this norm you have this kind of product inequality. Then, using a priori bounds, one constructs some functionals and closes the estimates for the perturbation in this kind of norm. For L-infinity — if we take the space to be L-infinity in x — this kind of estimate is missing for the Boltzmann equation without angular cutoff. With cutoff it is true; this goes back to work of Seiji Ukai a long time ago.
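For reference, in the usual notation, the equation and the non-cutoff cross-section described above are:
\[
\partial_t F + v \cdot \nabla_x F = Q(F,F), \qquad Q(F,G)(v) = \int_{\mathbb{R}^3} \int_{\mathbb{S}^2} B(v - v_*, \sigma)\, \bigl(F'_* G' - F_* G\bigr)\, d\sigma\, dv_*,
\]
\[
B(v - v_*, \sigma) = |v - v_*|^{\gamma}\, b(\cos\theta), \qquad b(\cos\theta) \sim \theta^{-2-2s} \ \text{as } \theta \to 0, \qquad \gamma > 0, \quad s \in (0,1),
\]
where \(F' = F(v')\) and \(F'_* = F(v'_*)\) in the usual pre/post-collisional notation.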
Indeed, if you assume angular cutoff, then the linearized operator can be written in two parts, a multiplier operator and a compact operator. Then you can write the equation in mild form, take the L-infinity norm and so on; this can be done. But we all know that if the collision kernel has a singularity, then you cannot decompose the gain part and the loss part; this kind of decomposition does not exist. So this is the difficulty. To show that the Boltzmann equation without angular cutoff is well-posed in an L-infinity function space, this is the goal. Our approach is slightly different in terms of analysis; the approach is different from the previous works. What we did is the following. Instead of applying an algebraic structure to close a bootstrap argument, the idea is to apply the De Giorgi argument: first work on the L2 estimates on the level sets, and then use a localized strong averaging lemma. Combining these two estimates, we can construct a functional so that the De Giorgi argument works, and then it gives us the L-infinity estimate on the perturbation. So the L-infinity estimate on the solution is obtained by this approach; I will elaborate on this a little bit more later. Now, about the De Giorgi argument for PDEs and kinetic equations, there is in fact a lot of interesting work, and I will not mention all of it here. For the diffusion equation there is a paper by Caffarelli, and then for kinetic equations, for example the Landau equation or the Fokker-Planck equation, when the collision operator behaves more or less like a Laplacian, there is a lot of work by Golse, Imbert, Mouhot and Vasseur, by Kim, Guo and Hwang, and by others. In all these works they show the Hölder continuity and also the Harnack inequality. For the Boltzmann equation in the space homogeneous case there is also a work by Alonso from two years ago. There is some difference between the Landau equation, the Fokker-Planck equation, and the Boltzmann equation, because the Boltzmann operator is basically a singular integral operator: even though it has some kind of gain of regularity, this is achieved not directly as for the Laplacian, but through a strong averaging lemma. Before I present the result, I also would like to mention that there is a big difference depending on whether the perturbation sits under a Gaussian tail. The standard decomposition is the background Maxwellian plus square root of mu times the perturbation little f. If you simply write the decomposition as mu plus F, and F only has some kind of algebraic decay, then the estimates, the analysis, are very different. For the latter, the algebraic decay case with angular cutoff, there is a very important progress by Gualdani, Mischler and Mouhot, published two years ago and online for many years, about the spectral gap in some abstract Banach space. Without angular cutoff there are also two groups of people who obtained results: Hérau, Tonon and Tristani for mild singularity, and we also did the case of strong singularity, covering the whole range of s between 0 and 1. So here is a short presentation of the difference between the Gaussian decay and the algebraic decay.
So if we assume the solution can be written in this form, then the linearized operator is standard, self-adjoint, and you have a very good strong coercivity estimate, because this quadratic form, the inner product of f with L f, in fact gives you the gain of regularity H^s, where s is the parameter in the angle, and also a gain of moment; so it is strictly negative. Now if you only have polynomial decay, F is mu plus f, then the linearized operator is no longer self-adjoint, and the coercivity estimate is much weaker: all you can show is that this quantity has a good coercivity term which gives you the gain of regularity, but no gain of moment, because the gain of moment in fact comes from the differentiation of the Gaussian, and for algebraic decay you don't have it. The error term, in L2 with weight gamma over 2, comes from the kinetic part of the cross section, and in this analysis you will find that this integral plays an important role: you cannot simply use the weighted H^s norm; you have to use this as the good term, and this good term is not equivalent to the H^s norm, because the weights are slightly different, the upper bound and the lower bound are different, but with this you can indeed close the estimate. Now for L2 the theorem can be stated as follows. We assume strong regularity in the space variable x, which means the second order derivatives in x are bounded, with some weight; let's not worry about the weight at this moment. Then the theorem says: when the parameter s is between 0 and 1, and gamma is also between 0 and 1, if the decay parameter of the polynomial weight is big enough, and the perturbation is small in this H2 in x, L2 in v space and contains no microscopic component on average, then the solution to the Boltzmann equation exists and converges to the equilibrium exponentially in time. So this is the L2 theory, and the question is whether we can remove or replace the H2 in x by L-infinity in x; this is the question we want to answer. To prove that, in fact, there are three main components. The first component is to show that the moments can be propagated in time: because of the lack of strong coercivity estimates you always have this kind of error term, but the moments can still propagate in time.
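To record the comparison made at the beginning of this part, written only schematically (the precise weighted norms and the anisotropic good term are on the slides and are not recovered from the audio):
\[
\text{Gaussian tail:}\quad F=\mu+\sqrt{\mu}\,f,\qquad
-\langle \mathcal{L}f,\,f\rangle_{L^2_v}\ \gtrsim\ \|f\|^{2}_{H^{s}_{v}(\text{weighted})}\;-\;C\,\|f\|^{2}_{L^{2}_{v}(\text{weighted})}
\quad(\text{gain of regularity and of moment}),
\]
\[
\text{Algebraic tail:}\quad F=\mu+f,\qquad
-\langle Q(\mu+f,\,f),\,f\rangle_{L^2_v}\ \gtrsim\ \text{(anisotropic Dirichlet form of order }s\text{)}\;-\;C\,\|f\|^{2}_{L^{2}_{\gamma/2}}
\quad(\text{gain of regularity only, no extra moment}).
\]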
Now, because of the singularity, the semigroup generated by the linearized operator in fact gives a gain of regularity of order s: the L2 bound on the solution to the linearized Boltzmann equation is bounded by the H^{-s} norm of the initial data, with a coefficient that behaves like 1 over the square root of t. The spectral gap, shown by Gualdani, Mischler and Mouhot for the cutoff case, also holds for the non-cutoff case, and combining these three estimates we can prove the theorem I just mentioned. So the question now is how to relax this H2 in x to L-infinity, because H2 in x is too strong in some sense. This is the theorem I would like to present today. Under the same assumptions on gamma and s, if we assume the initial perturbation, F0 minus mu, is small in weighted L-infinity and L2, with some weight exponents, k_log and k say, which are just some numbers, I don't think I have time to talk about them, and a higher moment is only bounded, then the solution capital F exists, and in both the L-infinity norm and the L2 norm it decays exponentially to mu. This is the main theorem. So how to understand the proof? Let me look at this chart. The main idea is not to show directly that the L-infinity norm of the solution is bounded by the initial data plus its own square. Instead, we first assume, maybe locally on a very short time interval, that the supremum in both x and t of the L1 norm in v of the perturbation is small. This is an assumption, and we need to close it. Under this assumption we can perform the L2 estimate, and also the L2 in x, H^s in v estimate, because of the singularity in the angle, and similar estimates can be performed on the level sets of the solution. Now with these two L2 estimates one can apply the averaging lemma, which I will talk about later, to have some kind of gain of regularity in the x variable, so you have an H^{s'} in x, L2 in v estimate on both the solution and the level sets. Based on these L2 estimates and on the H^{s'} estimate, one can construct a functional on the level sets and then apply the De Giorgi argument to show that the solution is in fact bounded in the L-infinity norm, and this norm depends only on the initial data. Then you can close the loop, in some sense. So this is the idea. To achieve this there are three main steps. The first step is the L2 estimate on the level sets; here I just give you the statement. Suppose capital G is mu plus little g, where little g is given, and capital F is mu plus little f, where f is the solution we are looking for. For this capital G we assume it satisfies these two properties, in L1 and L6; these two properties are in fact crucial because with them one can have the uniform coercivity estimate. Now under this assumption, if the weight is sufficiently large, and f_{l,k}^+ is f times the weight, v to the power l, minus k, the positive part (I think I wrote it somewhere), then we have the L2 estimate on f_{l,k}^+, the level-set component, which is the part where the weighted f exceeds k.
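For later reference, the level-set function and the bootstrap loop just described can be summarized as follows (my notation; the weights are schematic):
\[
f^{(+)}_{l,k}:=\big(\langle v\rangle^{l}f-k\big)_{+},\qquad \langle v\rangle=(1+|v|^{2})^{1/2},
\]
\[
\sup_{t,x}\|f\|_{L^{1}_{v}}\ \text{small}
\ \Longrightarrow\
L^{2}_{x,v}\ \text{and}\ L^{2}_{x}H^{s}_{v}\ \text{bounds on }f\text{ and }f^{(+)}_{l,k}
\ \Longrightarrow\
H^{s'}_{x}L^{2}_{v}\ \text{bound (averaging lemma)}
\ \Longrightarrow\
\text{De Giorgi functional}
\ \Longrightarrow\
\|f\|_{L^{\infty}}\ \text{controlled by the initial data,}
\]
which is what closes the initial smallness assumption.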
To obtain this L2 estimate on the level set, we multiply both sides by this quantity, and then you have to estimate Q times this one. Under this assumption you can show that it has an upper bound, and the upper bound can be written like this. Here you can see that it involves the L1 bound on little g, and by the assumption, the loop in the chart I just mentioned, this quantity is assumed to be small; so this is small, and you have a very good term, and this is the error term. With this, and with the assumption, you can close the L2 estimate on the level sets. Now, to apply the averaging lemma we also need an L1 bound on the collision operator, and it is this one: for the same setting, capital G and capital F, with all these parameters, this kappa greater than 2, we want to show that this integral has a uniform bound of this form. In fact this is true; it can be proved. We put an extra negative derivative in v on this part because we need to kill the singularity in Q in order to have a uniform estimate. With these two estimates we can apply the famous averaging lemma. Here is just a short list: the velocity averaging lemma was introduced by Golse, Perthame and Sentis in the 80s, and there is a lot of work in different function spaces with improvements, by Golse, Lions, Perthame and Sentis and so on, and also by Bouchut and Desvillettes, and others; I cannot mention all of them here. What we need is a localized-in-time version of the estimate obtained by Bouchut in 2002. The estimate can be stated as follows. Consider the kinetic equation: the time derivative of f plus v dot gradient in x of f equals a source term. If we know that the solution has some regularity in the v variable, belongs to L^p, then you can always find this s'', a small number defined like this, so that the following estimate holds: you gain regularity in x and t, and it is bounded by the solution at the two endpoints, because it is localized in time, and also by the source term; this source term is basically the Q we are looking at. Of course there is also some extra term in L^p. Now, assuming all these estimates hold, we can construct a functional, and this functional contains the L2 norm of the level set, the H^s norm in v of the level set, which comes from the coercivity, and also the W^{s'',p} norm in x of the level set, which comes from the averaging lemma. Now if we look at how this functional evolves in time, you can show that it satisfies this inequality; this part is with the supremum in t. Without taking the supremum, you can show that when t2 is greater than t1, the functional is bounded by the initial data in the L^p norm plus this term. The important point is that the power beta_i is greater than 1, and the a_i are of course positive; there are several such terms, coming from Q, the commutators and so on, so there is a summation of them, and this term is homogeneous in k.
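A schematic form of the localized averaging estimate and of the level-set functional just described (indices, weights and lower-order terms are only indicative here, not taken from the paper):
\[
\partial_t h+v\cdot\nabla_x h=H
\ \Longrightarrow\
\|h\|_{L^{p}([t_1,t_2];\,W^{s'',p}_{x}L^{p}_{v})}
\ \lesssim\
\|h(t_1)\|_{L^{p}_{x,v}}+\|h(t_2)\|_{L^{p}_{x,v}}
+\|h\|_{L^{p}_{t,x}W^{\beta,p}_{v}}
+\big\|\langle D_v\rangle^{-\kappa}H\big\|_{L^{p}_{t,x,v}},
\]
\[
\mathcal{E}(k;T_1,T_2)\ \approx\
\sup_{t\in[T_1,T_2]}\|f^{(+)}_{l,k}\|^{2}_{L^{2}_{x,v}}
+\int_{T_1}^{T_2}\|f^{(+)}_{l,k}\|^{2}_{L^{2}_{x}H^{s}_{v}}\,dt
+\Big(\int_{T_1}^{T_2}\|f^{(+)}_{l,k}\|^{p}_{W^{s'',p}_{x}L^{p}_{v}}\,dt\Big)^{2/p}.
\]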
Now once we have this estimate, we can take the supremum in t between t1 and t2, and then we have the estimate on the functional. Let me introduce some notation: choose K0 to be sufficiently large, depending on the initial data, and let M_k be K0 times 1 minus 1 over 2 to the power k, for k from 0 to infinity, so when k goes to infinity it tends to K0. If we define the level set function F_k to be F minus M_k, with the weight, positive part, and E_k to be the functional I just defined, evaluated at M_k on the time interval from 0 to T, then with this estimate on the functional we can show that E_k is bounded by the initial-data term plus E_{k-1} to the power beta times a factor which grows like a power in k; but beta is greater than 1, and that is the key point. The first two terms are easy, because K0 can be chosen large enough so that the level set for the initial data is zero, since the initial data is assumed to be bounded. So we end up with this inequality: E_k is bounded by this summation. Once we have this it is almost done, because you can construct E-star to be this quantity and then show that E-star_k satisfies the inequality in the opposite direction. By a comparison argument you then see that E_k is in fact bounded by E-star_k. And what is E-star_k? By our construction it goes to zero as k goes to infinity, because it behaves like a constant over Q to the power k with Q greater than 1, which uses that beta_i is greater than 1; so when k goes to infinity, E-star_k tends to zero. So the functional goes to zero, the L2 norm of the level set goes to zero, and this implies the L-infinity bound on the solution. Now, the argument up to this point is very local in time. But once you have the local-in-time estimate, then by the spectral gap the result holds for arbitrary time: the spectral gap, combined with the local L-infinity estimate, gives the L-infinity bound on the solution globally in time, and also the exponential decay in both the L-infinity and the L2 norms. So this is the end of my presentation; thank you.
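Purely as an illustration of the iteration mechanism described above (a toy computation, not the actual estimates of the paper), the standard De Giorgi dichotomy can be seen in a few lines of Python: a superlinear recursion E_k <= C * Q**k * E_{k-1}**beta with beta > 1 forces E_k to zero when E_0 is small, and blows up otherwise; the levels M_k = K0 * (1 - 2**(-k)) increase to K0.

import numpy as np

def de_giorgi_iteration(E0, C=2.0, Q=4.0, beta=1.5, K0=10.0, n=8):
    """Toy De Giorgi iteration: E_k <= C * Q**k * E_{k-1}**beta, with levels M_k -> K0."""
    E = [E0]
    levels = [K0 * (1.0 - 2.0 ** (-k)) for k in range(1, n + 1)]
    for k in range(1, n + 1):
        E.append(C * Q ** k * E[-1] ** beta)   # superlinear recursion in E
    return np.array(E), np.array(levels)

E_small, _ = de_giorgi_iteration(1e-6)   # small initial energy: E_k tends to 0
E_large, _ = de_giorgi_iteration(1.0)    # large initial energy: E_k blows up
print(E_small[-1], E_large[-1])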
I have a small question. Okay, thank you. Hi, hi Tong Yang. Hi, I can hear you, you can hear me? Thank you for the nice talk. I wanted to ask you two small questions. The first one is: if I understood your talk correctly, you give the first L-infinity perturbative solutions in the non-cutoff case, so you improve the perturbative theory by just using L-infinity as the space for the smallness, right? Yes, yes. Do you think, if I am comparing with the Navier-Stokes theory, which, you know, you could imagine your theorem could be connected to by a fluid limit, in Navier-Stokes we can go below L-infinity; I am just wondering if you have thought about that, if you think you could go to some L^p, and if it is possible to go to some L^p, then there should maybe be a critical p, a smallest one, so that the perturbative problem can be solved in a well-posed way. That is the main question, and maybe I just say the second one so you can answer both at the same time: I was just wondering whether part of the estimates you use are similar or not to those in the recent papers of Cyril Imbert and Luis Silvestre on the non-cutoff Boltzmann equation; I have not really compared, I was just curious. Okay, thank you for the questions. For the first question, I think this is a very good point: in fact the L-infinity is not necessary, and I think the argument should also work for L^p, but for the Boltzmann equation, so far, I cannot find the critical L^p. Yes, at one point I tried to understand this L^p question, but I don't have a precise statement. Okay, but in principle your method can go a little bit below L-infinity? In principle, I think so. It seems to me as well, yeah, thank you. And then for the second question, I don't know which paper you refer to. Well, there are two or three papers, it is a series of papers; they are not really focusing on the same goal as you do, but they prove similar things. I participated in this program more on the Landau side and a bit on the moments for Boltzmann; the idea there is to prove a conditional regularity for any solution that has certain bounds, so it is not exactly the same target as yours. Yes, you are right, I know those papers; they are about the regularity rather than the existence. Absolutely, so they don't prove the same theorem as you, but I was just curious because the De Giorgi estimates seem similar to the ones you use. I see. But, for example, for the solution we obtain in this weighted norm, how to show that it is C-infinity, I think that is still not covered. The C-infinity is definitely covered, in another way, in their work; you should have a look. What they do not cover, for sure, is your theorem of perturbative solutions, the global existence of perturbative solutions in a low regularity space, because they do not focus on the perturbative theory; they do De Giorgi, Schauder and an infinite bootstrap to get C-infinity. Right, I know; somehow I read their paper and they mention that their result covers the solutions obtained, for example, in Duan's framework. Yes, but anyway we can discuss this later, because this is not the point here. Okay, thank you, thank you. There are a couple of questions for you, one in the chatroom; can you read it, can you see it, Tong?
Oh yes, here: you use the De Giorgi argument for the L-infinity bound, not for the regularity. I know that when you work on the Landau equation or Fokker-Planck you have the Hölder continuity and so on. Sorry, I missed your answer. Oh sorry, so yes, we only use the De Giorgi argument for the L-infinity bound, not for the regularity, the Hölder continuity and so on as for the Landau or Fokker-Planck case. But do you think it would be possible, in this context, to try to push the De Giorgi argument a bit further in order to get that? I think so, yes; I think it is possible to use the oscillation lemma and so on. Okay, okay, thank you.
|
In this talk, after reviewing the work on global well-posedness of the Boltzmann equation without angular cutoff with algebraic decay tails, we will present a recent work on the global weighted L∞-solutions to the Boltzmann equation without angular cutoff in the regime close to equilibrium. A De Giorgi type argument, well developed for diffusion equations, is crafted in this kinetic context with the help of the averaging lemma. More specifically, we use a strong averaging lemma to obtain suitable Lp estimates for level-set functions. These estimates are crucial for constructing an appropriate energy functional to carry out the De Giorgi argument. Then we extend local solutions to global by using the spectral gap of the linearized Boltzmann operator, with the convergence to the equilibrium state obtained as a byproduct. This result fills in the gap of well-posedness theory for the Boltzmann equation without angular cutoff in the L∞ framework. The talk is based on the joint works with Ricardo Alonso, Yoshinori Morimoto and Weiran Sun.
|
10.5446/53474 (DOI)
|
So, today I will talk about asymptotic preserving schemes for the Lévy-Fokker-Planck equation, and this is a joint work with Ujoshi, who is currently a PhD student at Minnesota. Here is the outline: first I will give a brief introduction to the Lévy-Fokker-Planck equation, then I will introduce our numerical methods, show you some numerical results, and draw the conclusion at the end. This is the Lévy-Fokker-Planck equation. As you can see, compared with the classical Fokker-Planck equation, the difference is that it replaces the diffusion with this fractional diffusion. There are several ways of defining the fractional diffusion, and here I list two of them. The first one is this integral formulation, or we call it the Riesz formulation; this P.V. is the principal value, and this c is a constant depending on the dimension and on the power s. Or you can define it through the Fourier transform: the Fourier symbol of this operator is the modulus of k to the power 2s. Clearly from the second definition you can see that when s equals 1 it reduces to the classical diffusion operator, and from the first one you can see that the difference from the classical diffusion is that this is a non-local operator while the classical one is local. The motivation for considering this equation is its application in plasma physics, but I believe the applications go far beyond physics: as long as your microscopic dynamics is governed by a Lévy flight instead of Brownian motion, this kind of operator will appear when you model that system. First let's look at several properties of this Lévy-Fokker-Planck operator. The first one is the conservation of mass, which is the same as in the classical case. But the equilibrium is different: instead of being a Gaussian type distribution, it actually has this power law decay, so it is a slowly decaying function. We can also consider the entropy dissipation of this operator; in fact we can consider a very general one. If you define the entropy functional like this, where phi is a smooth convex function, and plug in f over m so that it can be viewed as a relative entropy, then we have exponential decay of the relative entropy; this is given by Gentil and Imbert. If we want to extract the macroscopic dynamics out of this kinetic equation, we should consider a long time and small mean free path scaling, and it turns out that the correct scaling for this system is the following: instead of rescaling t to epsilon squared times t, we have to consider this fractional power. The reason for this fractional power is the following. If you consider the classical case, where we have the classical diffusion, then immediately we see that when epsilon goes to zero, f converges to the local equilibrium, and if you integrate the whole equation in v, you get the conservation of mass with this flux J. To determine this flux, you multiply the equation by v and integrate again, and this is what you get; sending epsilon to zero, you see that J over epsilon, to leading order, is this term. Plugging it back into the conservation equation, you end up with a diffusion equation with a diffusion coefficient, or diffusion matrix, of this form.
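For orientation, here is my reconstruction of the equations being described; constants are omitted and the arrangement of the epsilon powers is only indicative, so the exact exponents should be read off the slides rather than from here.
\[
\partial_t f=\nabla_v\!\cdot\!(v\,f)-(-\Delta_v)^{s}f,\qquad
(-\Delta)^{s}f(v)=c_{d,s}\,\mathrm{P.V.}\!\int_{\mathbb{R}^d}\frac{f(v)-f(w)}{|v-w|^{d+2s}}\,dw,\qquad
\widehat{(-\Delta)^{s}f}(k)=|k|^{2s}\hat f(k),
\]
with a unique normalized equilibrium m(v) that has a power-law tail rather than a Gaussian one. The anomalous rescaling and the fractional diffusion limit of Cesbron, Mellet and Trivisa then take, schematically, the form
\[
\varepsilon^{2s}\,\partial_t f^{\varepsilon}+\varepsilon\,v\cdot\nabla_x f^{\varepsilon}
=\nabla_v\!\cdot\!(v f^{\varepsilon})-(-\Delta_v)^{s}f^{\varepsilon},
\qquad
f^{\varepsilon}\rightharpoonup \rho(t,x)\,m(v),
\qquad
\partial_t\rho+\kappa\,(-\Delta_x)^{s}\rho=0 .
\]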
From this formula for the diffusion matrix you can see the issue: here this one is the equilibrium for the classical case, but if you plug in the equilibrium for the fractional Laplacian case, then this diffusion matrix becomes infinite. That is why you cannot consider the classical diffusive scaling. With this anomalous scaling we have the following result, given by Cesbron, Mellet and Trivisa. It says that as epsilon goes to zero, f converges to the local equilibrium, which is rho times m, in the following weak topology, and this rho satisfies the fractional diffusion equation. You can clearly see that the fractional Laplacian, the fractional diffusion in v in the original kinetic equation, transfers to the fractional Laplacian in rho in the limit. The way to see how we can get this limit is actually quite simple. Again, first we know that when we send epsilon to zero, f converges to the local equilibrium; here it is the same as before, except that the local equilibrium is this power law decay function. Then, to derive the equation for the density rho, they construct this special test function: instead of allowing x and v to be two independent variables, they require that x and v are related in the following way. Once you plug in this special form of test function, multiply it against the original equation and integrate in x, v and t, you immediately see that, because of the special choice of test function, these two terms cancel, and the fractional diffusion in v becomes the fractional diffusion in x. So you can simplify the equation in the following way, and then, sending epsilon to zero, it converges to the equation for rho; you see that this equation is nothing but the weak formulation of the fractional diffusion for the density rho. All right, so this is how we derive the fractional diffusion limit. Now let's talk about numerics. Our goal is to compute this Lévy-Fokker-Planck equation, and hopefully, when epsilon goes to zero, it will become a solver for the fractional diffusion equation. There are two main challenges. One is the non-locality that comes from computing the Lévy-Fokker-Planck operator itself: because of the interplay of its two components, we know that f converges to the local equilibrium, which is a slowly decaying function, so in principle we need to compute the fractional Laplacian of a slowly decaying function. Computing this non-local operator is an active area of research, and there have been several quite good results in the past few years. Usually the methods to compute the fractional Laplacian, or non-local operators in general, can be categorized into two classes. The first one is finite difference and finite volume methods: the idea is that you truncate your domain, and either the function is fast decaying so that you can ignore the tail, or, if the function is slowly decaying and the tail information is important, you add the tail information back, but that relies heavily on the analytical behavior of the tail, which is not feasible in our case, because unless f has reached the equilibrium, or is at the initial condition, we don't know how the tail behaves. People also consider adding an artificial boundary condition when doing the truncation, but that would generate instability.
So then spectral methods have been developed, and they are more applicable. The idea of a spectral method is to use non-local basis functions, in this case orthogonal polynomials, with the hope that if you apply the fractional Laplacian to the basis functions then it is easy, or at least not hard, to compute. For instance, Mao and Shen developed a Hermite-polynomial-based spectral method, and the idea there is that since the Hermite functions are invariant under the Fourier transform, if you want to compute the fractional Laplacian of a Hermite function you can just transfer to Fourier space, apply the Fourier symbol, and transfer back. But it turns out that if we want to use this method to compute a slowly decaying function we need a lot of modes. Later on people realized that Chebyshev polynomials perform better when you want to approximate a slowly decaying function. There are these two works based on Chebyshev-polynomial-based spectral methods, and they treat the fractional Laplacian applied to Chebyshev polynomials in different ways; you can also refer to these two review papers. Here we will adopt the approach of this work, but I want to mention that our choice is not unique; actually there is still room to improve. The second challenge is the stiffness introduced by this epsilon, and for that we aim to develop an asymptotic preserving scheme. Here I listed a few results for the linear Boltzmann equation, in which case we still have the fractional diffusion, but I want to emphasize that here the major difficulty is that we do not have a strong convergence, or a Hilbert-expansion-based convergence. We know that the Hilbert expansion gives us a lot of guidance when we design numerical methods, because it gives us an idea of the magnitude of each term and of how we should group the terms. But as you can see, the derivation of the fractional diffusion limit uses this tricky choice of test function, which gives us limited information on how to construct the numerical method. All right, so let me first show you how we compute this fractional Laplacian; as I mentioned, this is adopted from this work. It goes in the following steps. The first step is a change of variables: this v is on the whole line, and if we do the change of variables in the following way, then zeta is between minus 1 and 1, and this L_v you can consider as a scaling parameter. Basically, the larger the L_v, the more spread out the points are, so usually if you want to compute a slowly decaying function you want to choose L_v to be large; but of course if you choose L_v to be large then there will be fewer points in the bulk, so you have to strike a balance there. Then, once we are on the finite domain from minus 1 to 1, we can consider the Chebyshev polynomials, and if we do a further change of variables, letting the arccosine of zeta be q, then the Chebyshev polynomials just reduce to trigonometric functions. Now if we work with the new variable q, keeping in mind the relation between q and the original variable v, we have to rewrite the fractional Laplacian in this new variable q, which looks a little bit complicated, but the bottom line is that it is an analytical expression.
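Below is a minimal sketch, in Python, of the kind of pipeline described here and continued on the next slide: map the real line to a bounded interval, sample on a cosine grid, expand in cosine modes, and apply a precomputed table for the fractional Laplacian acting on each mode. The specific algebraic map v = L*zeta/sqrt(1 - zeta^2) and the placeholder table are my assumptions, not the formulas of the paper, which the speaker says are tedious but can be precomputed.

import numpy as np

# Mapped-Chebyshev (cosine) representation of a function on the real line.
N, L = 64, 4.0                          # number of modes, scaling parameter L_v
q = np.pi * (np.arange(N) + 0.5) / N    # grid in q, inside (0, pi)
zeta = np.cos(q)                        # zeta inside (-1, 1)
v = L * zeta / np.sqrt(1.0 - zeta**2)   # assumed algebraic map back to the real line

def cosine_coefficients(f_vals):
    """Coefficients a_k with f(q_j) = sum_k a_k cos(k q_j) (even extension in q)."""
    k = np.arange(N)[:, None]
    a = (2.0 / N) * (np.cos(k * q[None, :]) @ f_vals)
    a[0] *= 0.5
    return a

def apply_fractional_laplacian(f_vals, frac_lap_table):
    """Apply (-Delta_v)^s: expand in cos(k q), then use the precomputed action per mode."""
    # frac_lap_table[:, k] holds (-Delta_v)^s cos(k q(.)) evaluated on the grid (precomputed).
    return frac_lap_table @ cosine_coefficients(f_vals)

# Call pattern with a placeholder (identity) table and a slowly decaying test profile:
f_vals = 1.0 / (1.0 + v**2) ** 2
out = apply_fractional_laplacian(f_vals, np.eye(N))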
Now if we want to apply this fractional Laplacian to a function: remember that if we write the function in the new variable q, then q is in the domain from 0 to pi, so we can do an even extension and extend the function to the domain from 0 to 2 pi. In this way, if we expand this f tilde of q in the basis cos(kq), then because it is an even function the expansion turns out to be quite easy; it is just a discrete cosine transform. The next thing is that we have to apply the fractional Laplacian to this f of q, and that reduces to computing the fractional Laplacian of the basis functions, which in this case are just the Fourier cosine modes. But the fractional Laplacian here is no longer the original one, so you cannot simply say that this equals the modulus of k to the power 2s times the Fourier basis; instead we have to resort to the previous reformulation of the fractional Laplacian in the new variable. They have derived an analytical formula for the fractional Laplacian applied to the Fourier basis; I am not going to list it on these slides, it would be tedious, but it can be precomputed. All right, let me show you the performance of this method by applying it to two kinds of f: the left hand side is an exponentially decaying function and the right hand side is a power-law decaying function. For these two choices we have the analytical formula for the fractional Laplacian, and this is the error between the numerical result and the analytical formula. Basically you can tell that the slowly decaying function is much harder to compute, because with the same number of modes the accuracy for this one is much lower than for that one; if you want to reach the same accuracy you will need a lot more modes for the slowly decaying function, which is not surprising. Okay, if we now go back to the Lévy-Fokker-Planck operator, we have another term, but this term is easy, so we just rewrite it in terms of the new variable q. Now that we know how to compute the Lévy-Fokker-Planck operator, we have to deal with the stiffness, and our idea is still to use a micro-macro decomposition, but the key idea is that instead of saying that the macro part is rho times m, we allow the macro part to depend on v as well, though not in an arbitrary way: although we have this dependence on v, the dependence is very mild, and we require that x and v are related in the following way. This is inspired by the weak formulation. Once we have this special form of the decomposition and plug it into the original equation, this is what we get, and again, due to the special form of this eta, these two red terms cancel, the blue term changes into the fractional Laplacian in x, and this term goes to zero because m is the equilibrium; this I you can just consider as a residual, where you try to apply a sort of product rule for the fractional Laplacian.
After this simplification the equation becomes the following; it is completely equivalent to the original one, but then we do a splitting. We split it into two equations, one for eta, the macro part, and one for the micro part. The rationale behind this splitting is that we keep in mind that what we want in the end is the fractional diffusion, and you can see that the eta terms in the original equation are of the same order and almost give us what the diffusion equation should look like; the rest is put into the micro part. So the evolution is that eta evolves by itself, while rho evolves according to eta, and the initial condition is decomposed in the following way: eta is of course related to rho, but in the special form, and the rest is given to g. One slight thing we should be careful about is that the whole derivation relies on the fact that eta is of this special form; eta obeys the following evolution equation, so we need to make sure that the form of eta is preserved along the dynamics, which is not hard to show. All right, so we checked this splitting system a little bit: we have to make sure the splitting is not too wild, so we checked the energy stability of the splitting, and it turns out that the energy of the reconstructed f, that is, the total, original function f, and also the energies of the macro and micro parts individually, are uniformly bounded in time. Okay, the next thing is that we need to solve this split system, and if you look at the equation for g you will notice that it is as hard to solve as the original equation for f. So our idea is to do an operator splitting: basically we split the convection part from the collision part. In the first stage of the splitting we just focus on the collision part, and in the second stage we focus on the convection part, but we also use an auxiliary term, like an add-and-subtract: we subtract this gamma g in the first stage and add it back in the second stage.
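To record the structure of the two-stage splitting just described, here is only the subtract-and-add skeleton; the exact epsilon powers, the coupling with eta, and the implicit or explicit treatment of each term follow the slides and are not reproduced faithfully here.
\[
\text{Stage 1 (collision, auxiliary term subtracted):}\qquad
\partial_t g=\frac{1}{\varepsilon^{2s}}\Big(\mathcal{L}_{\mathrm{LFP}}\,g-\gamma\,g\Big),
\]
\[
\text{Stage 2 (convection, auxiliary term added back):}\qquad
\partial_t g+\frac{1}{\varepsilon^{2s-1}}\,v\cdot\nabla_x g=\frac{\gamma}{\varepsilon^{2s}}\,g+\text{(coupling with }\eta\text{)} .
\]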
For this splitting method, compared to the original one, the advantages are quite obvious. First of all, because of this auxiliary term it alleviates the ill-conditioning: we know that the operator has a non-trivial null space, so if you were to invert the system when epsilon is very small, the system to be inverted would be very stiff. This can be seen from the condition number, which grows dramatically when epsilon is very small, but with this additional term the condition number grows only mildly; that is one advantage. The second advantage is the reduction of the computational cost: if you consider the original equation and treat it implicitly, then you have to deal with x and v simultaneously, which means you have to invert a very large system; but once we do the operator splitting, in the first stage we just need to invert an equation only in v, so x is treated locally, and similarly in the second stage v is treated locally, so the computational cost is largely reduced. The third advantage concerns the asymptotic property, which I will explain on the next slides. Basically we have the following result: the numerical solution obtained by this reconstruction satisfies the following equation when epsilon goes to zero. To see why this is true, first, from the reconstructed solution we see that eta satisfies the fractional diffusion, so it can be replaced in the following way, and then with a pretty cheap argument we can swap the average with the fractional Laplacian; basically we can move the average inside, and the average, the integral of f, is rho. So you can see this is pretty much what we want, with two additional terms depending on g; if you can show that g, or the average of g, is a small term, then we are good. For that we resort to the equation for g: from the first stage we just need to control the magnitude of g, so as long as it is bounded we are fine, and then from the second stage it is clear that it will be a small term, of order epsilon to the power 2s. From this derivation you can also see the advantage of doing the operator splitting: it gives us a little freedom, because in the first stage we don't need g to be small immediately, it just needs to be bounded, and then the second stage does the work of driving g to a small magnitude; that is why in the end it converges to the equation for the fractional diffusion. Okay, so let me show you some numerical results. The first one is the spatially homogeneous case, where we check the long time behavior. When s equals 0.5 we have the analytical form of the equilibrium; on the left hand side we compare the numerical solution with this analytical formula, and here we only plot the bulk part, and you can see that the bulk parts match very well, and so do the tails. The last figure shows the exponential decay of the relative entropy. We also checked two other cases, s equals 0.6 and 0.8; in these two cases we do not have an analytical formula to compare with, so we only check whether we obtain the correct tail behavior, and as you can see the tail is proportional to this power. The next is the spatially inhomogeneous case, and you can see that we have uniform convergence for different choices of epsilon, and
these two figures are for different s. This is to check the energy stability, again for different choices of epsilon; for the last one you can see there is a sharp transition, which is due to the presence of an initial layer. We also compare the solution with the solution of the original kinetic equation: we use an IMEX scheme with a very fine mesh, and when epsilon equals 1 this is the comparison of our solution with the solution obtained by the IMEX scheme. The last one shows the comparison with the diffusion limit, and this one shows the asymptotic preserving property: basically, if you decrease epsilon, then the difference between your f and the local equilibrium should be of order epsilon. All right, so to conclude, we have developed this asymptotic preserving scheme. The key idea is a spectral method with a non-local basis to treat the Lévy-Fokker-Planck operator; to design the asymptotic preserving scheme we use the micro-macro decomposition, and to solve the split system we use the operator splitting. With that I am going to end my talk, and thanks for your attention. Thanks for your nice work and talk. So, you mentioned that Cesbron, Mellet and Trivisa proved this fractional diffusion limit several years ago, and in fact about two or three years ago Cesbron and Mellet generalized their work to the half space with a non-local boundary condition. My question is: is it possible to numerically capture this non-local boundary behavior with this AP scheme? That is my question. We haven't tried that yet, but yes, I believe, I don't know if this method can directly apply, but I believe we can develop a sort of AP scheme for that kind of equation with a boundary, yes. Yeah, okay, thank you, because this boundary condition is very interesting; it is kind of a non-local Neumann boundary condition. Yeah, yeah, so I know that when you have the fractional Laplacian and you have a bounded domain, then yes, the boundary condition is highly non-trivial. Okay, yeah, thanks. How sensitive is your method to the operator you choose? Which operator? Yeah, which operator? Maybe you have a message in the chat, you cannot say it? Ah, I see. Well, I have to say that the way we do this micro-macro decomposition is specifically designed for the passage from Lévy-Fokker-Planck to the fractional diffusion. So if it is not this operator, then the mechanism driving to the fractional diffusion might be different, and the design of this micro-macro decomposition would need to change; so in that case I would say that yes, it is specific to the Lévy-Fokker-Planck operator.
|
We develop a numerical method for the Levy-Fokker-Planck equation with the fractional diffusive scaling. There are two main challenges. One comes from a two-fold nonlocality, that is, the need to apply the fractional Laplacian operator to a power law decay distribution. The other comes from the long-time/small mean-free-path scaling, which calls for a uniformly stable solver. To resolve the first difficulty, we use a change of variable to convert the unbounded domain into a bounded one and then apply a Chebyshev polynomial based pseudo-spectral method. To resolve the second issue, we propose an asymptotic preserving scheme based on a novel micro-macro decomposition that uses the structure of the test function in proving the fractional diffusion limit analytically.
|
10.5446/53476 (DOI)
|
It's a pleasure to give this talk, because much of the work has been done in Shanghai, in collaboration with Min Tang there and other people, while visiting the institute in Shanghai. It is a great pleasure to see a program which has been running for several years and which is dedicated to understanding how bacteria organize themselves in a collective way. First of all I would like to say that the world of bacteria is very complicated, even though bacteria are very simple organisms compared to eukaryotic cells. They communicate mostly by sending chemicals into their surroundings, into their environment, and reacting to this environment; so in some sense the communication is very simple, in terms of pattern formation at least. But nevertheless they can generate the kind of patterns that you see on this screen, which means that with this method of communication they can already organize themselves in a collective way, in such a way, after billions of years of evolution of course, that they can produce this. Biologists interpret this pattern as a way for bacteria to invade their surroundings and to search for nutrient in an optimal way: if it were a spherical wave, you would need many, many cells to go far away; if you send very long digits like that, then with very few cells you can go very far and discover what is around. So this is an example of the organization that you see in bacteria, and mostly the question of why it is organized like that is open. There are mathematical models; what you see on the right of the screen is just a numerical simulation, so we know PDEs whose solutions do that, and the model you see here is due to Mimura. Nevertheless, we don't understand why these patterns occur. Physicists have been trying to give rules for the number of branches or for the length of the branches and so on, but this is very heuristic; there is no rigorous approach to that. So this is a first example, but what we understand better in terms of mathematics is a much simpler example of collective motion, and this is what will be leading this talk. This is an old and very famous experiment which goes back to the 60s, not as far as Dirac, but already many years ago. What they observed is that if you put cells in a tube, and by simplification you put them on the left, on this screen it is on the left, and then suddenly you open the tube, then these cells, these bacteria, will move in what we call a travelling band, which means they move collectively, all together, and you can see here, on this time scale of 2000 seconds, that it is really travelling with a constant speed. You can measure the speed, because you have the scales on the screen, and this is remarkable. What you see on the right of the screen is the cell density in the red spot: you have this red spot which shows the movement of the cells, and the shape is also very remarkable, because it is not symmetric. You would say, okay, you have many cells moving randomly, this is a Gaussian; it is not true, there is an asymmetry in the cell density. The phenomenon behind this, and basically all physicists agree on it for this kind of experiment, is chemotaxis, which means that cells emit into their surroundings a molecule, and they react by being attracted to this molecule. This makes the cells want to stay together, because they emit a product which attracts them. So this is the reason why they stay together, and on top of it there is nutrient in the tube.
They like to have food, so when they have consumed the food they go, in this experiment, to the right to find fresh nutrient. So the movement, in some sense, is due to attraction to the food, and the fact that they stay together is because of this chemotactic effect. This hypothesis has been checked by physicists. You might think about the interaction with the fluid: these cells move by swimming, they swim in the fluid, so you could think that there is an interaction between the cells and the fluid. This is true, but not for this kind of experiment; for bigger cells it is easier to see than for Escherichia coli. So this is not an interaction between the fluid and the cells, which is a completely different subject in bacterial movement, also fascinating. The phenomenon is very robust: you know that nowadays they can manipulate the cells genetically, and they can always observe this kind of phenomenon. So all this is easy to explain: self-attraction plus attraction to the nutrient, and you see what you observe. So, is it true that mathematically you can give this explanation? The first idea when you do chemotaxis is to write the famous Keller-Segel system. The Keller-Segel system is a very old model, from the 70s. You say that the cells, n of t and x, is the number density of cells, so we are speaking of experiments with millions and millions of cells, not a small community but a very big community of cells; so n of t, x is the cell density, they move by Brownian motion, they are swimming and moving randomly, and they have an oriented drift, which means that they like to go up the gradient both of the chemoattractant c and of the nutrient, which I call S. These are two chemical molecules which act on the cells; we will come back to that to explain how it works. The cells are attracted by these two fields. The c attracts cells together because, as you see in the second equation, the chemical concentration c is produced by the cells themselves; this is the reason for the attraction, for why they stay together. And the quantity S is consumed by the cells: the nutrient is consumed, so where they are they consume the nutrient and they prefer to go further to find fresh food. So this model is very reasonable to explain the experiment, except that it is not true; it is absolutely not true. It is a very interesting model in terms of mathematics, and there are thousands of papers on it; I will not come back to that, especially because the solutions have a finite time blow-up phenomenon which has been very much studied. But if you look at the solutions, they cannot sustain the travelling bands we are speaking of; it is not possible. There are many ways to be convinced that it cannot work. When you make a simulation of it, that is what you see on the small picture here: at the beginning you have bands, so at the beginning it is fine, you can find bands if you use the correct nonlinearities, but these bands are only transient. The Keller-Segel system wants to generate patterns which are concentrations, point-wise concentrations, and this is always what you find at the end, and there are many theoretical reasons to explain that. I would like to point out that there is a big difference between this parabolic equation and the hyperbolic or kinetic equations which we will present later.
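Written out schematically, and with the caveat that the precise coefficients and the exact form of the c and S equations on the slides may differ, the Keller-Segel type system described here is
\[
\partial_t n=\Delta n-\nabla\!\cdot\!\big(n\,\chi_c\nabla c+n\,\chi_S\nabla S\big),\qquad
\partial_t c-D_c\Delta c=\alpha\,n-\beta\,c,\qquad
\partial_t S-D_S\Delta S=-\delta\,n\,S,
\]
with the chemoattractant c produced by the cells and the nutrient S consumed by them.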
There are many simulations, by Filbet, by Vauchelet, by others, which show that the same equations, when written at the level of kinetic equations, do create bands. The patterns, the singularities, are very different at the parabolic level; so it is not just that when you let epsilon go to zero the models coincide, the patterns are very different. This is a reason why you want to go back to the kinetic level, by the way. What the kinetic theory will explain to us is why there is a much better model, which is called the flux-limited Keller-Segel equation, and which goes back to papers by Yasmin Dolak and Christian Schmeiser and by Erban and Othmer, maybe 20 years ago. The way they put it is that the flux is not a linear combination of grad c and grad S, it is a nonlinear combination where you diminish the effect, you make some kind of saturation: if grad c is really big, you make a saturation by dividing by grad c, or if grad S is very big, you make a saturation. So the flux, the drift, is saturated by this nonlinear formula, and in particular the velocity is bounded. Then you can simplify the previous equations and just write that c is produced by the cells and S is consumed by the cells. Now you can check that this model admits travelling bands; these are solutions which look like the patterns we have, and this can be fitted to experiments. This is what you obtain: if you fit this parabolic equation to the experiment, the experiment is the blue line, which is a little bit oscillatory, and the pink line is the numerical solution; you see that you can fit the experiment quite well with this model. So the question is: where does this flux-limited Keller-Segel come from? The kinetic formalism will tell us how it works, where it comes from; this is the purpose of this talk. The kinetic models for bacterial movement go back to the 80s. At that time biologists were able to record the motion of cells and to understand that they don't really do Brownian motion; as I said at the beginning, what they do is what we call run and tumble, which means that the bacteria, you see them on the left, have this kind of flagella which make them swim. They swim thanks to these flagella, which are themselves activated by molecular motors, small protein motors, very small, at the scale of a molecule. When all these flagella are coordinated, they make a long jump, and suddenly, for some reason, these motors lose their coordination, and then they turn and take a new direction. Depending on the slides, I will call the velocity of the cells either v or xi; in any case, the prime is the incoming velocity. So cells are turning from the velocity xi prime to a velocity xi, and they do this kind of run and tumble, which is exactly what you see in kinetic theory. I have movies to explain all this; the problem is that with BBB, if I show you movies, I lose the line, so I will show you the movie later, to show you what this run and tumble looks like in an experiment. For now I will continue, and if we have time we will come back to that. What these people, I think I mentioned them, Alt, Dunbar and Othmer, and Angela Stevens, Tom Hillen and others, did is simply to write that cells do runs; so, as I told you, the v becomes xi randomly in the talk, they run with a certain velocity xi, and they tumble, and the tumble is just a change of velocity.
The tumbling is a scattering operator, except that the scattering is modulated by the chemoattractant c and the nutrient S, the substrate. If we forget c and S, it is simply that a cell with velocity xi prime turns to a velocity xi with a certain kernel, capital K. My notation is that the calligraphic K is the operator and capital K is the kernel you see inside the operator. Capital K is not a symmetric kernel; it has no reason to be symmetric from this point of view. By the way, it is an interesting question to know what the form of this kernel K is. For example, you could say that cells know where they come from, because they have been running, so they have seen the chemoattractant c and the substrate S, they have seen the environment, so they know what the environment is in the direction xi prime where they have been. But there is no way to decide on a better direction when they have to choose the new velocity xi after a run; there is no way to know, they cannot see or feel the concentration in front of them. So in some sense it is very unlikely that K depends on xi; it can depend on xi prime but not on xi. Typically, what people chose first, as a first guess in the very early modelling, is to say that it is a function of the concentration of chemoattractant, or whatever the molecule is; here we simplify and take only one, which I call c. You take the average, and by a mean value rule it is simpler to say that c at x minus epsilon xi prime represents an average along the run, and you decide to jump depending on the concentration you have seen. If the concentration is high, you are happy, you like this medium, so you don't want to go away, you would rather turn than continue; if c is small, you want to go away, because there are other regions you prefer. So this is a first possibility to explain how the environment interacts with the cells. The existence theory, we did it at the very beginning of all this with Peter Markowich; we have a reasonable theory, but there are many questions which are still open. First of all, our assumptions are very specific to this type of kernel K, which depends on c of x minus epsilon xi prime, which means that we don't have an abstract structure which works. The other question is: if you assume that cells can measure the gradient of c rather than c itself, then there are papers by Angela Stevens on this, but they are very incomplete. So again, there are many open questions about the existence theory, simply the existence theory, for this model. The strength of this kind of equation, known from the beginning, is that if you rescale with a diffusion scaling, as was mentioned earlier by Martin, then what you obtain is a diffusion equation, and what you find is that the Keller-Segel system is satisfied in the parabolic limit; not only that, but you can also discover what the coefficient of diffusion is and what the sensitivity chi, the parameter in front of the drift, is in terms of the concentration. The important thing in this formula is that the diffusion sees directly the kernel, the way you react to the chemoattractant c, but the sensitivity sees the derivative, the variation of K with respect to c, if you want.
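The run-and-tumble kinetic model described above can be written, schematically (my notation, normalizations omitted), as
\[
\partial_t f+\xi\cdot\nabla_x f
=\int_{V}\Big(K[c,S](\xi'\!\to\xi)\,f(t,x,\xi')-K[c,S](\xi\to\xi')\,f(t,x,\xi)\Big)\,d\xi',
\]
with the early modelling choice that the kernel only sees the incoming velocity, for instance K = k(c(x - epsilon xi')), a value of c averaged along the run; in the diffusion limit the resulting Keller-Segel diffusion involves k itself while the sensitivity involves the derivative of k.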
From an abstract point of view it is simply a question of symmetry: a symmetric part and an antisymmetric part in the turning operator. The diffusion D sees the symmetric part; the sensitivity, the drift, sees the antisymmetric part. Pierre Degond and Frédéric Poupaud were the first to point out this kind of behavior. In any case, this is not what happens in practice. Experiments show that cells do not react to the concentration itself; they do something more subtle. They measure the concentration of the chemoattractant outside and, along their run, as long as the concentration increases they are happy and they continue; when the concentration c decreases along the path they do not like it, and very quickly — very stiffly — they decide on a new direction. So the turning kernel is not a function of c: it is a function of the variation of c along the path, which means that the kernel K entering the calligraphic K is a function of ∂_t c + ξ'·∇c, where ξ' is the velocity during the run. You get a very strange model compared with those coming from physics: there is this external field c, which may be emitted by the cells themselves — that is not very important — but what is important is that the reaction is really along the path ξ'. And again you see the same thing here: cells decide to turn based only on ξ', where they come from; they have no way to know the best direction to take, so the probability distribution of the new direction is uniform. I will not make a long discussion about that. Believe me, if you scale this model correctly, the limiting model is the flux-limited Keller-Segel system; this is something we did with Nicolas Vauchelet and Zhian Wang. The small parameter epsilon used in the rescaling is the stiffness of K: K is very stiff. By the way, the first flux-limited model I showed corresponds to a discontinuous K which jumps from one value to another depending on the sign of ∂_t c + ξ'·∇c. So this is the equation, and this is the reason why the flux-limited Keller-Segel system is reasonable: it comes from a reasonable model of cell motion. What I would like to add at this stage are these small pictures. You can record the direction of the cells, and you see, for example, that in the front of the wave the cells are mostly going to the right, because the green part — cells going to the right — is much larger than the number of cells going to the left. In the experiments one can now record exactly the velocity distribution of the cells, and also the durations of their runs; what is measured is that the runs are longer to the right than to the left. I did not show the corresponding picture for the run times, for lack of space, but it looks the same. This means that the assumption that cells change direction according to the variation of the nutrient is correct.
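A minimal sketch of the stiff, two-valued tumbling response described here, with my own parameters and a static linear chemoattractant profile c(x) = alpha*x (so the path derivative reduces to ξ'·∇c): cells tumble at the smaller rate while climbing the gradient and at the larger rate otherwise, and a net drift up the gradient emerges.

```python
import numpy as np

# Stiff two-valued tumbling rate: k_minus while D_t c = xi * c'(x) > 0, k_plus
# otherwise.  Illustrative parameters; not taken from the talk.
rng = np.random.default_rng(1)
s, k_plus, k_minus, alpha = 1.0, 4.0, 1.0, 1.0
n_cells, dt, n_steps = 20000, 1e-2, 2000

x = np.zeros(n_cells)
v = s * rng.choice([-1.0, 1.0], size=n_cells)
for _ in range(n_steps):
    x += v * dt
    rate = np.where(v * alpha > 0, k_minus, k_plus)   # stiff response to the path derivative
    tumble = rng.random(n_cells) < rate * dt
    v[tumble] = s * rng.choice([-1.0, 1.0], size=tumble.sum())

print("mean drift velocity:", x.mean() / (n_steps * dt))   # positive: cells climb the gradient
```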
The next question we want to ask is this: we are convinced that this equation is correct, but you will agree with me that Vlasov-Poisson and Vlasov-Maxwell have a very beautiful shape, where you understand that geometry enters the game, and here we have something ugly. Can we explain these tumbling rates, where they come from? For a mathematician it is not a very natural kernel, so this is the next step. To do that you have to understand how the cells move internally. Here you have a small drawing: the exterior of the cell is on top and the cytoplasm is inside. Through the membrane there are transmembrane proteins which are sensitive to the molecules outside. The chemoattractant acts mechanically on these transmembrane proteins, which send a signal inside the cell, and this signal changes the concentration of certain internal molecules, which in turn act directly on the molecular motors — I told you the flagella are driven by molecular motors. So to model this you need two things: the external concentration c, and an internal concentration, which I will call m, the methylation level of the receptors; it describes the internal state that acts on the motors. What time is it? Okay, I should go fast. Basically, you now write a model in terms of t, x, the velocity ξ, and this internal chemical content m, together with a chemical reaction, which I call capital R, which drives the internal state toward equilibrium with the external state. Looking at this model, you get a very standard scattering equation, except that the methylation level decides the turning kernel: the kernel depends on the scalar quantity m, through its departure from the equilibrium between internal and external states. If you rescale it correctly, with an epsilon appearing at two different levels — one in the reaction term, which means fast adaptation, and one in the kernel, which means a stiff response, and these responses are indeed very stiff — then, in the limit, you reach equilibrium in m: the limiting solution concentrates as a Dirac mass in the internal variable at the value dictated by the external state, and you get a kind of tensor product as before, but in a different variable. If you then look at the limiting function f(t, x, ξ) — sorry, there is a misprint on the slide — it satisfies the scattering equation exactly with the kernel capital K that we had found, namely K depending on the path-wise variation of c. This is the explanation of why that model is correct, once you go down to the molecular content. In one minute: all this assumes that the number of cells does not change, which is reasonable for these experiments lasting a few seconds. If you want to include cell division and cell death, you can do it, and again you get pattern formation; we did that with Vincent Calvez and Shugo Yasuda.
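A sketch of the internal-state mechanism just described, under my own simplifications: each cell carries an internal variable m that adapts quickly to the local concentration c(x), and the tumbling rate depends only on the sign of c(x) - m. For fast adaptation (small tau), c - m is approximately tau times the derivative of c along the path, so the path-derivative response of the previous model is recovered — which is the content of the fast-reaction limit sketched above.

```python
import numpy as np

# Internal-state ("methylation") toy model: dm/dt = (c(x) - m)/tau, stiff
# two-valued tumbling rate depending on sign(c(x) - m).  Parameters and the
# linear c are my own illustrative choices.
rng = np.random.default_rng(2)
s, k_plus, k_minus, alpha, tau = 1.0, 4.0, 1.0, 1.0, 0.05
n_cells, dt, n_steps = 20000, 1e-2, 2000

c = lambda x: alpha * x
x = np.zeros(n_cells)
v = s * rng.choice([-1.0, 1.0], size=n_cells)
m = c(x).copy()
for _ in range(n_steps):
    x += v * dt
    m += dt * (c(x) - m) / tau                       # fast adaptation R(c, m)
    rate = np.where(c(x) - m > 0, k_minus, k_plus)   # "happy" while c increases along the path
    tumble = rng.random(n_cells) < rate * dt
    v[tumble] = s * rng.choice([-1.0, 1.0], size=tumble.sum())

print("mean drift velocity:", x.mean() / (n_steps * dt))
```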
I would also like to say that there is a large literature — another branch of the literature — on different phenomena and different experiments, where the experiments show Lévy walks: the diffusion approximation is not the correct approximation, and you should go to Lévy walks instead. This is something we also tried to understand with Vincent and Min Tang. It is closely connected, by the way, to models that Martin Frank looked at and that Thierry Goudon also considered with renewal equations. It is an example where the methylation level acts on the turning kernel in a degenerate way, and then you are back to a classical phenomenon in kinetic theory: when the kernel is degenerate you can have long Lévy walks, or a fractional Laplacian in the limit. The same is true here. We are not completely happy with the scales and the model; I think there are many things to understand about the scales you put on this methylation variable to explain this phenomenon. It is very indirect: usually you do the scaling directly on the scattering kernel, whereas here you do it on the secondary variable m, so it is a little more complicated, and it is not finished. Okay, so I will stop here. This was more or less a picture of what we have been doing over many years on this question of bacterial motion: why the flux-limited Keller-Segel system answers the question of the traveling bands, the alchemy of the traveling bands. It is one of those examples — there are now many of them — where you can fit experimental data quantitatively with a mathematical model. There are not so many such examples, but because physicists now work very much on living matter, they have many experiments of this kind that can be modeled mathematically. And I wanted to point out that there remain many mathematical questions — existence, singularities, asymptotic theory, fractional derivatives — which are still not solved, or where there is a feeling that something is missing. So there are many open questions in these different directions. I conclude by thanking all my collaborators; all of this was done with the people I mentioned during the talk, or perhaps forgot to mention, and in particular the experiments were done by the group of Pascal Silberzan at the Institut Curie. I stop here, thank you very much for your attention. — Okay, thank you. Do you have any questions or comments? You can also send your questions by chat if you prefer. I have a question: is the velocity xi always bounded in this context? — Yes, the velocity xi is usually taken on a sphere. During the run the velocity is more or less fixed by the type of cell and the environment: you stay on the sphere and you just tumble. It is like photon scattering. This is the usual assumption; of course biology is complicated, so you can also find other examples. — Okay, thank you. — A question and a comment. Regarding the fractional Laplacian and fractional diffusion limits, do you have any idea what the alpha is, the Lévy walk exponent? — Oh yes, sure.
We can compute the alpha in terms of the parameters mu and s, but not only those — that is why I say I am not very happy. It also has to do with — I am simplifying — the drift, the coefficient, the power of m in the degeneracy; because m is a positive quantity, a chemical concentration, there are boundary conditions and coefficients, and all of this enters, so the formula is ugly. It is not what you would like; we were very much influenced by your paper on this, and it is far from being as simple. That is the reason I say there should be something more. — But maybe there are experimental results about this as well? — That is a good question. Yes, certainly. The shortcoming is that these Lévy walks are obtained in a swarming regime, which may be different from the regime where this model is really valid. So it is not completely clear that this is the explanation of the Lévy walks seen in swarming; it is a little different. — And then I have kind of a smart-ass comment, if I may. Benoît, you are a much better mathematician than I am, but I think you can only define the fractional Laplacian of minus the Laplacian, because minus the Laplacian is positive... — It is a question of notation; I will correct that. — It was a smart-ass comment anyway. — Are there other questions, maybe on the chat? — Yes: is there something qualitative that makes one think the movement is fractional? — That is observed experimentally. I mentioned one paper, but there are at least ten papers showing that the motion is fractional; it is very fashionable in this area. Even for chemotactic cells, by the way, they now report long runs. — What I do not understand: can sustained propagation be obtained from a fractional equation? — There are papers on that, by Schmeiser and collaborators, where you put in the drift. Here we did not put in the drift — that is one of the weaknesses of the result; one should also include the drift. And even with the drift the problem is still open, because the assumption in those works is that you have long runs in the sense that the velocity can change: you can have very large velocities for the cells, which is not what you want here. Here the velocity stays on the sphere; it is not the usual phenomenon of a fat tail in velocity. — How do you fit the model to data? — The tumbling kernel is known physically. If you make it very stiff you have two values, K plus and K minus, depending on the sign of this variation of c, and these two values are known because they can be measured experimentally on a single cell: one can tether a cell at a point and record the frequency at which the motors make it tumble. So K plus and K minus are exactly known. What you do not know are the concentrations of the internal quantities — you cannot measure them, you have no access to those parameters — so you have to fit one or two parameters to the solution. That is the way.
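Related to the Lévy-walk discussion above, here is a toy sketch (entirely my own construction) of why heavy-tailed run durations break the ordinary diffusion approximation: with Pareto-distributed runs of exponent mu between 1 and 2, the mean-squared displacement grows faster than linearly in time.

```python
import numpy as np

# 1D Levy walk: constant speed s, run durations Pareto(mu) with minimum 1,
# direction redrawn uniformly after each run.  Doubling the observation time
# multiplies the MSD by more than 2, signalling superdiffusion.
rng = np.random.default_rng(3)
mu, s, n_cells = 1.5, 1.0, 5000

def levy_walk_positions(T_obs, n):
    pos, t = np.zeros(n), np.zeros(n)
    active = np.ones(n, dtype=bool)
    while active.any():
        k = active.sum()
        dur = rng.random(k) ** (-1.0 / mu)            # Pareto run durations
        dur = np.minimum(dur, T_obs - t[active])       # truncate at final time
        pos[active] += s * rng.choice([-1.0, 1.0], size=k) * dur
        t[active] += dur
        active[active] = t[active] < T_obs - 1e-12
    return pos

for T_obs in (25.0, 50.0, 100.0, 200.0):
    print(T_obs, np.mean(levy_walk_positions(T_obs, n_cells) ** 2))
```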
|
At the individual scale, bacteria such as E. coli move by performing so-called run-and-tumble movements. This means that they alternate a jump (run phase) followed by a fast reorganization phase (tumble) in which they decide on a new direction for the run. For this reason, the population is described by a kinetic-Boltzmann equation of scattering type. Nonlinearity occurs when one takes into account chemotaxis, the release by the individual cells of a chemical in the environment and the response of the population. These models can explain experimental observations, fit precise measurements and sustain various scales. They also allow one to derive, in the diffusion limit, macroscopic models (at the population scale), such as the Flux-Limited Keller-Segel system; in opposition to the traditional Keller-Segel system, this model can sustain robust traveling bands as observed in Adler's famous experiment. Furthermore, the modulation of the tumbles can be understood using intracellular molecular pathways. Then, the kinetic-Boltzmann equation can be derived with a fast reaction scale. Long runs at the individual scale, and abnormal diffusion at the population scale, can also be derived mathematically.
|
10.5446/53478 (DOI)
|
Okay, thank you for the introduction, and thanks to the organizers for putting everything together — a wonderful opportunity for everyone to meet during this tough time. This is joint work with colleagues at Sichuan University in Chengdu, Yan Zhizhang and Chen Lintang, and some current PhD students. A brief outline: I will first motivate quantized vortices in superfluidity, then discuss the mathematical models. There are vortex states — soliton-like solutions in 2D — with a core size; when the core size goes to zero and there are many vortices, there is a reduced dynamical law, a system of ODEs governing the dynamics of the vortex centers, that is, a particle-type model for the vortex centers. Such reduced dynamical laws are available for the Ginzburg-Landau equation and its relatives, and I will present results of this kind for Schrödinger-type equations, both on the whole plane and on bounded domains. A quantized vortex is a particle-like, topological defect. It arises in a complex order parameter, a complex scalar field psi. At the vortex center the order parameter must vanish — in fluid language the vortex core is a vacuum. Near the vortex center x0, psi can be written locally as sqrt(rho) e^{i phi}, where rho is the density and phi is the phase. If gamma is any closed curve around x0, the integral of the phase gradient along gamma equals 2 pi n, where n is called the circulation index, or winding number. These quantized vortices are a key signature of superfluidity: the flow is dissipationless — when the fluid flows, there is no friction. Here is a cartoon of quantized vortices in superfluid helium at very low temperature, and here is a real experimental picture of quantized vortices in a type-II superconductor: each dot is a quantized vortex. Here are also experimental results for quantized vortices in Bose-Einstein condensates: many particles sit in a harmonic trap; without rotation the ground state is a Gaussian-type bump, but if the system is put in a rotating frame you see little holes, and each hole is a quantized vortex. In superconductors, quantized vortices are modeled by the Ginzburg-Landau equation; in superfluid helium one has two-fluid models coupling the normal component, with density and velocity rho_n, v_n, to the superfluid component rho_s, v_s, and the Gross-Pitaevskii equation is also used as a phenomenological model. Bose-Einstein condensation is modeled by the nonlinear Schrödinger, or Gross-Pitaevskii, equation, and in nonlinear optics one uses the nonlinear Schrödinger or nonlinear wave equations to model the propagation. So here are some typical mathematical models, in which psi is the complex order parameter, or wave function.
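Going back to the winding-number definition above, here is a small sketch of how the integer n can be read off numerically from a sampled complex order parameter: accumulate the wrapped phase increments along a closed loop around the suspected core. The function name and the synthetic example are my own.

```python
import numpy as np

# Recover the winding number n of psi = sqrt(rho) * exp(i*phi) from samples of
# psi on a closed loop around the vortex core.
def winding_number(psi_on_loop):
    phase = np.angle(psi_on_loop)
    dphi = np.diff(np.concatenate([phase, phase[:1]]))   # close the loop
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi          # wrap increments to (-pi, pi]
    return int(round(dphi.sum() / (2 * np.pi)))

# synthetic example: a charge-2 vortex at the origin
theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
loop = np.exp(2j * theta)            # psi sampled on a circle around the core
print(winding_number(loop))          # -> 2
```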
These are the model equations: for superconductivity the Ginzburg-Landau equation, a nonlinear heat flow, its complex version, and the Schrödinger and wave counterparts. Here t is time and psi is the wave function, the order parameter. I intentionally wrote all the equations with the same right-hand side: the Laplacian of psi plus (1/epsilon^2)(1 - |psi|^2) psi, posed either on the whole space in two dimensions or on a bounded domain Omega with various boundary conditions. So this is a general Ginzburg-Landau-type, complex Ginzburg-Landau, nonlinear Schrödinger, or nonlinear wave-type equation; psi is a complex-valued function, the order parameter, and epsilon is a dimensionless parameter, essentially the core size of the vortices. The nonlinear part is the key when epsilon is small: in space the typical scale is of order epsilon, and in time the typical wavelength is of order epsilon squared for the Ginzburg-Landau and Schrödinger equations, and of order epsilon for the wave-type equation. One can of course introduce the free-energy functional; since the four equations share the same right-hand side, the same functional serves them all. The Schrödinger equation is a dispersive system, so the energy and the mass are conserved, and such dispersive systems admit particle-like solutions: solitons in 1D and the quantized vortices in 2D that I show here. The Ginzburg-Landau equation, by contrast, is a dissipative, energy-diminishing system, and the single, centered vortex is then a steady state. If I consider the whole space R^2, I can take the ansatz psi(x) = phi_n(x) = f_n(r) e^{i n theta}, where r is the polar radius and n is the winding number. Plugging this ansatz into the equation, I end up with a two-point boundary value problem for f_n, with f_n(0) = 0 and f_n(r) -> 1 as r -> infinity. The top picture shows f_n(r) for different n: it vanishes at r = 0 and tends to 1. The middle picture shows |phi_n|: the density is zero at the origin and one outside. Asymptotically, for r between 0 and order epsilon, f_n(r) behaves like a constant times (r/epsilon)^n, while for r much bigger than epsilon it behaves like 1 - n^2 epsilon^2 / (2 r^2). So near the core f_n grows polynomially in r, and in the far field the leading order is 1, with a correction decaying quadratically, like 1/r^2. As for the phase: when n = 1, going once around the vortex the phase jumps by 2 pi; for n = 5, going once around, the phase makes five jumps of 2 pi. Of course, one is interested in whether these particle-like solutions are dynamically stable. If you set rho = |psi|^2 for the density, v = grad(phase) for the velocity and j = rho v for the current, there is some analysis for the Ginzburg-Landau type, but for the Schrödinger type there is always some kind of assumption. When I studied this numerically — I will not show the movie, I just give you the result —
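A sketch of solving the radial profile problem numerically, with epsilon rescaled to 1, the far-field asymptotics quoted above used as the outer boundary value, and a truncated interval; the equation form, initial guess and tolerances here are my own illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Radial vortex profile f_n(r):  f'' + f'/r - n^2 f/r^2 + (1 - f^2) f = 0,
# f(0) = 0, f(inf) = 1, truncated to [r0, R] with the quadratic far-field
# correction as the right boundary value.  Illustrative numerics only.
n = 2
r0, R = 1e-3, 30.0

def ode(r, y):
    f, fp = y
    return np.vstack([fp, -fp / r + n**2 * f / r**2 - (1 - f**2) * f])

def bc(ya, yb):
    return np.array([ya[0], yb[0] - (1 - n**2 / (2 * R**2))])

r = np.linspace(r0, R, 400)
y0 = np.vstack([np.tanh(r / n), 1 / np.cosh(r / n) ** 2 / n])   # rough initial guess
sol = solve_bvp(ode, bc, r, y0, max_nodes=20000)
print(sol.status, sol.y[0, ::100])   # f_n increases from ~0 to ~1
```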
So basically, for the Ginzburg-Landau-type equation, if I take a single vortex and perturb the initial data a little, then for winding number n = ±1 it is dynamically stable: starting with winding number plus or minus one and perturbing the data, a long time later you still have a single vortex of winding number ±1. If |n| is bigger than one, it is dynamically unstable: the pattern splits into n vortices of winding number ±1 when t becomes large. This is for the Ginzburg-Landau and wave-type equations. For the Schrödinger equation, or GPE, the situation is different: if you perturb the initial data, the vortex is dynamically stable for any n, because the circulation is conserved. However, if you instead perturb the external potential, for example, then n = ±1 is dynamically stable, while |n| > 1 is dynamically unstable. Of course, since the vortices with n = ±1 are the stable ones, one is interested in studying their interaction. So initially I take psi_0(x) equal to the product over j = 1, ..., N of phi_{m_j}(x - x_j^0), where x_j^0 is the location of the j-th vortex, m_j = ±1, and phi_m is the stationary single-vortex state; one can also adjust this initial data, for instance to match boundary conditions. So initially I have N vortices, and I suppose they are well separated: the distance between x_j^0 and x_k^0 is much bigger than epsilon. Then one supposes that the solution psi(x, t) behaves like the product over j of phi_{m_j}(x - x_j(t)), where x_j(t) is the center of the j-th vortex at time t, plus higher-order terms. Plugging this ansatz into the energy and doing some analysis, you find — in the regime where either the initial distances tend to infinity with epsilon fixed, or the initial distances are fixed and epsilon tends to zero — that x_j(t) follows a system of ODEs; this reduced dynamical law can then be proved rigorously. Let me write X = (x_1, ..., x_N), a set of points in R^{2N}, where x_j(t) is the j-th vortex center at time t. One introduces a functional W(X), written here; remember that the products m_j m_k can be +1 or -1. For the Ginzburg-Landau equation, the dissipative PDE, the centers follow a first-order nonlinear ODE system: the velocity v_j = dx_j/dt of the j-th vortex is given by this expression. For the Schrödinger equation — this comes from the work of many people, including Weinan E and Fang-Hua Lin and his group — one has J dx_j/dt equal to the same right-hand side, where J is the rotation matrix [[0, -1], [1, 0]]. And for the wave equation one obtains a second-order ODE system, with the second derivative of x_j on the left.
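Before turning to these reduced laws in more detail, here is a sketch of the product initial datum described a moment ago, built on a grid with the simple stand-in profile tanh(r/eps) in place of the true single-vortex profile f_{±1}; grid sizes, centers and the approximation are my own choices.

```python
import numpy as np

# psi_0(x) = prod_j phi_{m_j}(x - x_j^0) with phi_m ~ tanh(r/eps) * exp(i*m*theta).
eps = 0.05
L, N = 2.0, 256
x = np.linspace(-L, L, N)
X, Y = np.meshgrid(x, x, indexing="ij")

def single_vortex(X, Y, x0, y0, m):
    dx, dy = X - x0, Y - y0
    r = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx)
    return np.tanh(r / eps) * np.exp(1j * m * theta)

centers = [(-0.5, 0.0, +1), (0.5, 0.0, -1), (0.0, 0.8, +1)]   # (x0, y0, m_j), well separated
psi0 = np.ones_like(X, dtype=complex)
for x0, y0, m in centers:
    psi0 *= single_vortex(X, Y, x0, y0, m)
print(np.abs(psi0).min(), np.abs(psi0).max())   # ~0 at each core, ~1 far from the cores
```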
Returning to the reduced laws: the structure is almost the same in each case, but for the Schrödinger equation the left-hand side is J times the velocity, with the rotation matrix capital J, and for the wave equation one gets the second-order ODE system for x_j; this is similar to the point-vortex dynamics many of you have studied. But remember that here the m_j can be +1 or -1. Of course, people have studied how to pass from the PDE to the ODE system, and for the Ginzburg-Landau equation the ODE already reveals the qualitative behavior: for a pair of like vortices (N = 2, m_1 = m_2 = 1) the interaction is repulsive; for a vortex dipole (N = 2, one plus and one minus) the interaction is attractive; and pinning effects can also be read off from the ODE system. Recently we studied these ODE systems more intensively, to see what more they can tell us. For the Schrödinger equation, a pair of vortices behaves like point vortices in an ideal fluid, while the PDE adds higher-order effects such as radiation. For the ODE system coming from the Ginzburg-Landau reduced dynamical law, if you have a plus and a minus, in finite time the two vortex centers come together — a blow-up, or a merging. For the Schrödinger case one can show, at least for N = 2, 3 or 4, that no matter whether the vortices all have the same sign or carry different winding numbers, the ODE system is globally well posed: if the initial vortex locations are distinct, no two centers ever collide. So now we try to see what more we can extract from these ODE systems. First consider the reduced dynamical law from the Ginzburg-Landau equation: this is the ODE system for dx_j/dt, and again the products m_j m_l can be +1 or -1. First, the mass center x̄(t) — the average of all the locations — is a conserved quantity, and the energy W(X) is decreasing. There are also some elementary properties: if you shift the initial data by a point, the solution shifts by the same point; there is a scaling invariance; and if you rotate the initial data by a rotation matrix Q_theta, the solution is rotated accordingly. Moreover, for some configurations one can solve the ODE system analytically. For example, if N vortices are initially spread uniformly on a circle of radius a — say N = 3 or N = 4, all with the same winding number — they move outward along the lines connecting their initial locations to the origin. If in addition I put one vortex at the origin and the remaining N - 1 uniformly on the circle, the central vortex does not move and the other N - 1 move outward; this is easy to understand by symmetry.
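Here is a sketch of this Ginzburg-Landau reduced law, written in one common normalization (constants and sign conventions differ between references, so treat the factor 2 and the form of W as illustrative); the run checks the two properties just quoted, namely that the mass center stays constant and that W decreases.

```python
import numpy as np

# Illustrative normalization:
#   dx_j/dt = 2 * sum_{k != j} m_j m_k (x_j - x_k) / |x_j - x_k|^2 ,
# the gradient flow of W(X) = -2 * sum_{j<k} m_j m_k log|x_j - x_k|.
def gl_rhs(X, m):
    d = X[:, None, :] - X[None, :, :]            # d[j, k] = x_j - x_k
    r2 = (d ** 2).sum(-1)
    np.fill_diagonal(r2, np.inf)                  # no self-interaction
    coeff = m[:, None] * m[None, :] / r2
    return 2.0 * (coeff[:, :, None] * d).sum(axis=1)

def W(X, m):
    d = X[:, None, :] - X[None, :, :]
    r = np.sqrt((d ** 2).sum(-1))
    j, k = np.triu_indices(len(m), 1)
    return -2.0 * np.sum(m[j] * m[k] * np.log(r[j, k]))

m = np.ones(3)                                    # like-signed vortices (no collision)
X = np.array([[1.0, 0.2], [-0.8, 0.7], [0.1, -1.1]])
center0, W0 = X.mean(axis=0), W(X, m)
dt = 1e-4
for _ in range(20000):
    X = X + dt * gl_rhs(X, m)                     # forward Euler
print("center drift:", np.linalg.norm(X.mean(axis=0) - center0))   # ~ machine precision
print("W(0), W(T):", W0, W(X, m))                 # W is non-increasing along the flow
```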
If initially there are two opposite vortices — a vortex dipole, one plus and one minus — one can solve the ODE analytically, and the PDE shows the same qualitative behavior: at time t = a²/2 the two vortices have moved to the origin and merge. The next case is N ≥ 3 vortices with opposite signs: one vortex with winding number −1 at the origin and the remaining N − 1 vortices with winding number +1 on a circle. Again the central vortex stays put by symmetry, and N = 4 turns out to be a critical number: if N = 3, the positive vortices move toward the center, merge with the negative one, and after the merging only one positive vortex is left; if N = 4, the configuration is an equilibrium, but an unstable one; and if N ≥ 5, the outer vortices move outward. So below the critical number they merge, at the critical number there is an unstable equilibrium, and above it they never merge. We can then make use of some non-autonomous first integrals. Suppose the initial data consists of N vortices, N₊ of them with winding number +1 and N₋ with winding number −1, so N₊ + N₋ = N, and define M₀ = ((N₊ − N₋)² − N)/2. One defines three quantities H₁(X, t), H₂(X, t) and H₃(X, t) as written here, and shows that along solutions X(t) of the ODE they are three non-autonomous first integrals of the system. With these we can show that if the N vortices all have the same winding number, there is no finite-time collision, so the ODE system has a global solution; in addition, the vortices spread out — in fact one can prove the stronger statement that, for any bounded domain, when t is large enough there is at most one vortex left in that domain. We can also show the following: defining the distance between any two vortices, d_{jl}(t) = |x_j(t) − x_l(t)|, if N ≤ 4 and the vortices have the same winding number, then the minimal pairwise distance d_min (and the corresponding sum of squared distances D_min) is monotonically increasing in time; however, for N ≥ 5 this is no longer always true. For N = 3 with the same winding number we can say more about the long-time behavior: if the three vortices are initially collinear, then as t → ∞ two of them go to infinity and the one in between converges to the centroid of the three initial positions; if they are initially not collinear — so they form an arbitrary triangle — then for t large enough the three vortices approach an equilateral triangle.
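A quick numerical check of this last N = 3 statement, using the same illustratively normalized Ginzburg-Landau reduced law as in the previous sketch (so the absolute time scale is not meaningful, only the shape evolution): three like-signed vortices started from a deliberately skewed triangle become progressively closer to equilateral as the configuration expands.

```python
import numpy as np

def gl_rhs(X, m):
    d = X[:, None, :] - X[None, :, :]
    r2 = (d ** 2).sum(-1)
    np.fill_diagonal(r2, np.inf)
    coeff = m[:, None] * m[None, :] / r2
    return 2.0 * (coeff[:, :, None] * d).sum(axis=1)

def side_ratio(X):
    s = [np.linalg.norm(X[i] - X[j]) for i, j in ((0, 1), (1, 2), (2, 0))]
    return max(s) / min(s)          # equals 1 for an equilateral triangle

m = np.ones(3)
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.3, 2.0]])   # a skewed, non-collinear triangle
print("initial max/min side length:", side_ratio(X))
dt = 2e-3
for _ in range(100000):
    X = X + dt * gl_rhs(X, m)
print("final   max/min side length:", side_ratio(X))  # decreases toward 1
```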
As t becomes large the three vortices approach an equilateral triangle, while of course their distance to the origin grows to infinity. One can phrase this as a kind of orbital stability of a self-similar solution: for N = 3 there is a self-similar profile whose distance to the origin grows like t^{1/2} and whose shape at every time is an equilateral triangle, and for general initial data the solution approaches this self-similar solution up to a rotation. So this is the picture when all the vortices have the same winding number; for N = 3 we have a rather complete description of the long-time behavior. For opposite winding numbers — some +1 and some −1, with N₊ + N₋ = N — recall the quantity M₀ = ((N₊ − N₋)² − N)/2 defined above. If M₀ < 0, then as t approaches the maximal existence time at least two vortices collide, that is, they reach the same location. If M₀ = 0, the solution remains bounded — this is a rather special situation. If M₀ > 0, there is no finite-time collision and the solution exists globally; we have some further results in this case which I skip here for simplicity. We also give a necessary condition for collision: if the N vortices form a collision cluster, then M₀ < 0, H₁(0) = N·H₂(0) and H₃(0) = (N − 2)·H₂(0), so the three non-autonomous first integrals must satisfy these compatibility relations at the initial time. And there is a necessary condition for equilibria: if a configuration is an equilibrium, then N₊ = (N ± √N)/2, which in particular must be an integer less than N. For example, for N = 4 the combination N₊ = 3, N₋ = 1 satisfies this condition. We also identified collision patterns for N = 3: take m₁ = m₃ = 1 and m₂ = −1. If the initial triangle is isosceles — two sides of equal length — then the three vortices collide at x₀, the center of the triangle, and the collision time equals H₁(0)/12.
If instead the initial triangle has sides of different lengths, then the two vortices separated by the shortest distance merge with each other first. Now for the Schrödinger-type equation: this is the ODE system of the reduced dynamical law, and remember that J is the rotation matrix. We define the signed mass center x̄(t) = (1/N) Σ_j m_j x_j(t). If all the m_j have the same sign this coincides with the mass center considered before, but if there are both plus and minus signs it is different. For the Schrödinger-type equation this signed mass center is conserved, and the energy W is also conserved along the ODE system. Again we can show that if the N vortices initially sit uniformly on a circle of radius a, they rotate around each other — counterclockwise, with frequency (N − 1)/a². If N ≥ 3 with one vortex at the origin and the other N − 1 uniformly on a circle of radius a, the central vortex stays at the origin by symmetry, and the rest rotate counterclockwise with frequency (N − 2)/a². If initially there are two opposite vortices, they move in parallel; one can check explicitly that this solution satisfies the ODE system. And again, if initially I have N ≥ 3 vortices with opposite winding numbers — one at the origin with winding number −1 and the remaining N − 1 on the unit circle with winding number +1 — the central one stays at the origin by symmetry, and once more N = 4 is critical: N = 4 gives a critical equilibrium, for N = 3 the outer vortices rotate clockwise, and for N ≥ 5 they rotate counterclockwise. For this ODE system, our numerical experience for large N is that there are no collisions, but we cannot prove it in general; for N up to four we can. What we can prove is that if the vortices all have the same winding number and start at distinct locations, they stay away from each other globally in time; for opposite winding numbers we cannot do it at the moment. Also, if I start with four vortices forming two tight pairs, two clusters, then for a short time the two clusters move, to leading order, in a similar way. And, as before, for N = 2: if the two winding numbers are the same, the vortex pair rotates around each other; if it is a vortex dipole, the two move in parallel off to infinity. This is what the ODE tells us; and if you solve the PDE — the Schrödinger equation or the other types — the dynamics of the PDE is qualitatively similar to that of the ODE.
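A sketch of the Schrödinger (GP) reduced law with the same illustrative normalization as before, checking the rotation frequency quoted above for like-signed vortices on a circle; the sense of rotation depends on sign conventions, so only the magnitude is compared.

```python
import numpy as np

# J dx_j/dt = same RHS as the Ginzburg-Landau law, i.e. dx_j/dt = -J * RHS,
# with J = [[0, -1], [1, 0]].  For N like-signed vortices on a circle of radius
# a, the predicted angular frequency is (N - 1)/a^2 (illustrative normalization).
J = np.array([[0.0, -1.0], [1.0, 0.0]])

def rhs(X, m):
    d = X[:, None, :] - X[None, :, :]
    r2 = (d ** 2).sum(-1)
    np.fill_diagonal(r2, np.inf)
    coeff = m[:, None] * m[None, :] / r2
    return 2.0 * (coeff[:, :, None] * d).sum(axis=1)

def nls_rhs(X, m):
    return -(J @ rhs(X, m).T).T          # apply J^{-1} = -J row by row

N, a = 5, 1.0
ang = 2 * np.pi * np.arange(N) / N
X = a * np.stack([np.cos(ang), np.sin(ang)], axis=1)
m = np.ones(N)
dt, steps = 1e-4, 5000
for _ in range(steps):
    X = X + dt * nls_rhs(X, m)           # forward Euler, fine for a short check
angle_turned = np.arctan2(X[0, 1], X[0, 0]) - ang[0]
print("measured |omega| ~", abs(angle_turned) / (dt * steps),
      "   predicted (N-1)/a^2 =", (N - 1) / a**2)
```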
However, for the Schrödinger-type equation, once the distance between two vortices of opposite winding number becomes smaller than some constant times the core size, the dipole can merge and annihilate. So the ODE says the dipole keeps translating in parallel, but the PDE tells us more: when the distance between the vortices is of the order of the core size, the ODE gives qualitatively the wrong dynamics. People have also studied these problems on bounded domains. On a bounded domain, for the Schrödinger equation the scaling is the same, but for the Ginzburg-Landau equation, to see nontrivial vortex dynamics you have to rescale the equation by a factor λ_ε of order 1/|log ε|, something like that. You then prescribe initial data with, for example, Dirichlet or Neumann boundary conditions, and there are three length scales in the problem: the size of the domain Ω, the smallest distance between two vortex centers, and the core size ε of the vortices. For a fixed number N of vortices, in the limit ε → 0 there are a lot of results; and in the regime where ε → 0 and also N → ∞ there are wonderful results from about five years ago. So for fixed N, with ε → 0, much is known, but letting ε → 0 and N → ∞ simultaneously is very challenging to study. On a bounded domain the ODE system of the reduced dynamical law is the whole-space ODE system plus a correction term R coming from the boundary: R is obtained from a Laplace-type problem in the domain, so it is a nonlocal term depending on the vortex locations X and on the domain Ω. For the Ginzburg-Landau and Gross-Pitaevskii equations the form of R depends on the boundary condition: for the Ginzburg-Landau equation with Neumann boundary conditions this is the reduced dynamical law, and for the Gross-Pitaevskii equation this is the corresponding law. My time is almost gone, so let me just show some numerical results. Here is a like-signed vortex pair, computed for validation: the left panel is the Gross-Pitaevskii dynamics and the right panel the dissipative Ginzburg-Landau dynamics. You can see how they move: for the Gross-Pitaevskii equation the two vortices rotate around each other, while for the Ginzburg-Landau equation they move apart, repulsively. Next is a vortex dipole, a vortex–antivortex pair, one with winding number +1 and one with −1. Again the left side is the Gross-Pitaevskii-type equation and the right side the Ginzburg-Landau-type equation: for the Gross-Pitaevskii equation they move in parallel, while for the Ginzburg-Landau equation they move toward each other very quickly. I also show a vortex pair for the Gross-Pitaevskii equation on a bounded domain, without and with pinning — here the pinning comes from an inhomogeneity like h = x + y — and you can see that the pinning affects the dynamics.
Let me skip ahead and just show, for the Schrödinger-type equation, what happens with three vortices: one case in which the three vortices carry the same winding number, and one case with two vortices of winding number +1 and one of winding number −1 at the center, and you can see how they move. On a bounded domain, on the left-hand side the three vortices with the same winding number rotate around one another and, after some time, the boundary effect changes the dynamics; on the right-hand side they again rotate at first, but after a while they move differently — so the boundary really matters. Let me finish with one last example of the Schrödinger-type, GP dynamics, on a rectangular domain with periodic boundary conditions. Initially I have three vortex dipoles, plus–minus pairs; the bottom picture shows the density, where blue is zero, and the top one shows the phase, so you can read off the winding number of each vortex. As time evolves the three dipoles show a leapfrog effect: the pairs overtake one another in turn, just like leapfrog — let me show it again, you can see they really leapfrog. This has also been observed in three dimensions for vortex rings in the dispersive dynamics; here we can show it numerically in 2D. I think my time is finished, so I stop here. — Do you have any simulations for N quite large? — We did N around fifty or so, but the vortex cores have size epsilon, so if epsilon is small, N cannot be too large. There are several scales: epsilon, the number N, and the domain size. — Basically my question is how different this is compared with, say, a fluid-dynamics simulation. — The mesh size has to resolve the core size epsilon, so the number of grid points must be at least of the order of the domain size over epsilon to the power of the dimension — in 2D, (domain size / epsilon) squared — and the time step has to be taken of order epsilon squared or so. I think we had better leave it there.
|
Quantized vortices have been experimentally observed in type-II superconductors, superfluids, nonlinear optics, etc. In this talk, I will review different mathematical equations for modeling quantized vortices in superfluidity and superconductivity, including the nonlinear Schrodinger/Gross-Pitaevskii equation, the Ginzburg-Landau equation, the nonlinear wave equation, etc. Asymptotic approximations of the single quantized vortex state and the reduced dynamical laws for quantized vortex interaction are reviewed and solved approximately in several cases. Collective dynamics of quantized vortex interaction based on the reduced dynamical laws are presented. Extensions to bounded domains with different boundary conditions are discussed.
|
10.5446/53479 (DOI)
|
So thanks a lot for the invitation. I would very much have liked to be at CIRM, like everybody else, but this will have to do, and there is hope that we will be able to meet in person again in the future; I guess all of us have realized what we are missing. Let me talk about something new that my group has been working on for a year with different collaborators, who are listed here. I think it is potentially also a bigger idea for numerical methods, especially for kinetic equations. It is called dynamical low-rank approximation. I will explain it using radiation transport, because that is what I have been doing, but it is a general method that I believe is of wider interest — and I should say we are not the only ones who have realized that. So let me explain what it is, because it is nice mathematics. For the moment, imagine a very simple kinetic equation, and for the sake of simplicity assume it is just 1D plus 1D: one dimension in space and one in velocity, which in radiation transport is an angle variable. So you have a probability density depending on time, one spatial variable and one angle variable; x might be in R, and the angle in radiation transport is usually in the interval [-1, 1], but the details do not matter. Here I have written down the 1D linear transport equation: you have advection, this term; you have a scattering term here, where phi is just the angular average, the integral of f over velocity space; there is a total cross section here — loss of particles by absorption and scattering — and there is a source here. But again, the form of the equation does not matter for now. I have put everything on the right-hand side because in a second I am going to consider this as an ordinary differential equation in some function space. Now assume that you discretize, or semi-discretize, this equation somehow in space and in velocity, in these two one-dimensional variables. What I end up with is a quantity that depends on two indices: the index i is the spatial discretization, the index j is the angular or velocity discretization. So I end up with a matrix, and if I do not discretize in time, this is a matrix differential equation. In general the right-hand side can be anything for the next couple of slides, and it can be nonlinear if you wish. So Y is the discretized kinetic density, and this is the right-hand side you get. Obviously this is quite natural: you had these two indices in kinetic theory anyway. You can also imagine that these matrices become quite large: if you do a 3D plus 3D kinetic equation you have more indices than this, but you can still arrange everything into a matrix, with one direction for space and the other for velocity.
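A minimal sketch of this semi-discretization step, with discretization choices of my own (a periodic upwind finite-difference grid in x, Gauss-Legendre angles) rather than the speaker's: the unknown becomes a matrix Y[i, j] and the equation a matrix ODE dY/dt = F(Y).

```python
import numpy as np

# semi-discretize  f_t = -v f_x + sigma_s (phi - f) - sigma_a f + q
# so that Y[i, j] ~ f(t, x_i, v_j)   (illustrative choices throughout)
nx, nv = 64, 16
L = 1.0
dx = L / nx
x = np.linspace(0.0, L, nx, endpoint=False)
v, w = np.polynomial.legendre.leggauss(nv)       # angles and quadrature weights
sigma_s, sigma_a = 1.0, 0.1
q = np.zeros((nx, nv))                            # source term (zero here)

def rhs(Y):
    """Right-hand side of the matrix ODE dY/dt = F(Y)."""
    dY_back = (Y - np.roll(Y, 1, axis=0)) / dx    # backward difference
    dY_fwd = (np.roll(Y, -1, axis=0) - Y) / dx    # forward difference
    advection = -v * np.where(v > 0, dY_back, dY_fwd)   # upwind in the sign of v
    phi = 0.5 * (Y @ w)                           # angular average, shape (nx,)
    scattering = sigma_s * (phi[:, None] - Y)
    return advection + scattering - sigma_a * Y + q

# explicit Euler just to show the matrix-ODE viewpoint
Y = np.exp(-100 * (x[:, None] - 0.5) ** 2) * np.ones((1, nv))
dt = 0.4 * dx
for _ in range(100):
    Y = Y + dt * rhs(Y)
print(Y.shape, Y.max())
```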
All right, so it is called dynamical low-rank. What are low-rank matrices? That is your linear algebra one or two lecture: rank is most easily defined through a singular value decomposition, and that is what I have written down here. I am speaking of a rank-r approximation, so this Y gets a little index r, and it looks like a singular value decomposition: a matrix U times a matrix S times a matrix V. An SVD usually looks like this: you have your object — which should have been called Y here — of size m by n; then a matrix that looks tall, which is m by r; this S is r by r; and this last one is r by n, where the original matrix was m by n and r is the rank. In a singular value decomposition the middle matrix would be diagonal, but for now we allow all of these matrices to be full. If I can write this, then that defines a rank-r matrix, or a rank-r approximation: take the full singular value decomposition and simply cancel some of the singular values, and you have a rank-r approximation of the matrix — something very well known from the standard linear algebra course. I should mention a related concept here, which is sparsity. Rank and low rank are related to sparsity, but sparsity is a little different: in sparsity you would ask for a matrix, for example this S matrix, to have a lot of zeros, which is a different concept from asking for low rank. Maybe I will discuss this at the end, or maybe there will be questions about it, because ultimately the goal is to save computation and to save memory. All right, that is the definition of a rank-r matrix, and the idea is that we want to approximate the matrix from our discretization by a lower-rank matrix, by a rank-r approximation.
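A two-line numerical illustration of this rank-r truncation (function name and test matrix are my own):

```python
import numpy as np

def truncate_rank(Y, r):
    """Best rank-r approximation of Y in the Frobenius norm via a truncated SVD."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U[:, :r], np.diag(s[:r]), Vt[:r, :]    # U (m x r), S (r x r), V^T (r x n)

rng = np.random.default_rng(0)
Y = rng.standard_normal((200, 40)) @ rng.standard_normal((40, 100))   # rank <= 40
U, S, Vt = truncate_rank(Y, 10)
print(np.linalg.norm(Y - U @ S @ Vt) / np.linalg.norm(Y))   # relative truncation error
```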
And so that is what we do: I view this as an ansatz — I basically assume that my solution is of low rank, meaning it has this SVD-like decomposition — and I insert it into the equation, into my abstract ordinary differential equation for the matrix. Obviously this truncated, low-rank approximation will not satisfy the equation any more: if I insert it, I end up with a residual, something nonzero. And now comes the basic trick of the dynamical low-rank method. I do something similar to a Galerkin method — you can see it as a nonlinear Galerkin projection. In a standard Galerkin method you would demand that the residual is orthogonal to the ansatz space. Here we do something different: we take the right-hand side and project it onto the tangent space of rank-r matrices. Why do we do that? This matrix does not satisfy the equation, so we need an equation that it does satisfy, and we need to make sure that we do not run out of the space of low-rank matrices, which is no longer a linear space but a nonlinear manifold. In this way, if I start with a low-rank matrix and project the right-hand side onto the tangent space, then the time derivative stays in the tangent space and I remain, for all times, in the space of rank-r matrices. That is why I need to project the equation. Again, this is a nonlinear object: at any point Y_r of this manifold of low-rank matrices there is a tangent space, and that is what I use. Now the question is: what is the tangent space, can I characterize it, and can I express this projection explicitly? Because ultimately I want to design a numerical scheme. The answer is yes, and I want to give you a short argument for how the tangent space looks — not a proof, but the essence of the proof. If I have an SVD-like decomposition of a matrix, Y equals U S V transpose, then you can apply a simple product rule: if delta Y is a variation — think of it as a derivative — then I have the variation of the first factor, of the second one, and of the third one; this would be an element of my tangent space. Then I can say something about these variations, because I want U and V to be orthogonal: U transpose U is the identity, and if I differentiate this, I get the equation that my variation delta U times the original matrix U has to vanish in a symmetrized way — this equation here. The same holds for V, because it is also an orthogonal matrix, and for S I just need that delta S is again an r-by-r matrix. Then I make this a little simpler: I satisfy these equations by demanding that each of the two terms in the sum is already zero. In a certain sense that is a gauge you can impose: if I demand U transpose delta U equals zero, then the full equation is automatically satisfied. If I assume this, then — and now we go toward expressing the projection — I can retrieve all the factors from this equation: I get equations for delta S, delta U and delta V in terms of delta Y, which is what matters for the projection. Let me explain the first one, because it is simple. I take the delta Y up here and hit it from the left with U transpose: this term cancels, since U transpose delta U is zero, and here it turns U into an identity. Now I hit it from the right with V: then delta V transpose times V is zero, so this term also cancels, and I am left with this term; the left factor is the identity, the right factor is the identity, so I get delta S. That explains the formula down here, and the other ones can be explained in a similar way: hit it from the right with V, one term cancels, and the other can be canceled using the same relations. So I can extract the variations of all three factors of the decomposition from the overall variation delta Y. And delta Y is the thing I want to project, so in a certain sense I have now expressed what my projection onto the tangent space is. Coming back to the abstract matrix differential equation, I make the low-rank ansatz, meaning I express Y, truncated at rank r, through this SVD-type factorization; I insert it and project the right-hand side onto the tangent space. What I end up with, very much in the spirit of what I have just shown, is that instead of variations delta S, delta U, delta V, I now get ordinary differential equations for S, U and V — three differential equations for the three factors. This is why it is called dynamical low-rank: I do not obtain the low rank by computing singular value decompositions all the time. The nice thing is that I get ordinary differential equations for all three factors, and in particular for the bases: the optimal bases in which I express the solution, collected in U and V — the orthogonal bases of the image space of my solution — satisfy an evolution equation. So I get them automatically; the method selects the basis functions by itself.
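The projection can be written down in closed form; the formula below is the standard one from the dynamical low-rank literature and is consistent with the gauge conditions just described (the helper name and the sanity check are my own).

```python
import numpy as np

def project_tangent(U, V, Z):
    """Orthogonal projection of Z onto the tangent space of the rank-r manifold
    at Y = U S V^T:  P(Z) = Z V V^T - U U^T Z V V^T + U U^T Z."""
    ZV = Z @ V
    return ZV @ V.T - U @ (U.T @ ZV) @ V.T + U @ (U.T @ Z)

# sanity check: P is idempotent (it is an orthogonal projector)
rng = np.random.default_rng(1)
m, n, r = 30, 20, 4
U, _ = np.linalg.qr(rng.standard_normal((m, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
Z = rng.standard_normal((m, n))
PZ = project_tangent(U, V, Z)
print(np.allclose(project_tangent(U, V, PZ), PZ))
```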
All right, there are some ugly details to this, which I will just brush over, but in principle it's a very nice method. Now, coming a little bit to the motivation: if I propagate these three factors of the decomposition instead of the full solution, what is my advantage? The advantage is that the effort is only linear in the large discretization numbers. Think of a 3D spatial discretization with, say, a million degrees of freedom, and a velocity discretization which might be in the hundreds. Typically, the full unknown is of the order of the number of spatial cells times the number of velocity degrees of freedom. If I look at the factors of the low-rank decomposition instead, one is an m by r matrix, one is an n by r matrix, and one is an r by r matrix, where m and n are the spatial and velocity sizes. So if the rank is low enough, the effort is only linear in each of the two: linear in the number of spatial degrees of freedom plus linear in the number of velocity degrees of freedom. That's the big advantage. And even more important than computational effort these days is the memory footprint: the memory also reduces by a lot, and is likewise only linear in each of the unknowns. By the way, you can do the same thing within space: if you are 2D in space, you can do another low-rank decomposition there, so you again end up with something linear instead of something quadratic. That's the big advantage of this. All right, so now the ugly details. It looks nice, but it turns out that this integrator, as written, is unstable. There is, however, a variant that is stable; this has been investigated in the ODE community for a while, and I'm just going to write down this variant of the integrator so that you see what the algorithm looks like. So here is again our 1D plus 1D kinetic equation. Now I make an abstract low-rank representation of f, and it is abstract on function spaces, mind you: there is no discretization here. At first I argued with a discretization, but dynamical low rank is actually something that works on the continuous level in the first place. So I express my f with some basis functions in space, called X; they depend on t, so they change over time. My expansion coefficients are collected in a small r by r matrix S, and then I have basis functions for the velocity space. So this is an abstract low-rank decomposition of f on function spaces, and obviously I have orthogonal projections onto these spaces, just like for a standard truncated Fourier series. And then, again brushing over the details, I can compute the projection onto the tangent space, and it can be expressed in terms of these projections. The method, and I made the font a little bit smaller so that it fits on one slide: the thing that makes it stable, I'll try to explain, is that I do not propagate the factors U, S and V separately. Instead I propagate U times S, then S in a second step, and then S times V transpose. Don't ask me why this is stable, that would take another talk, but this slight modification is what ultimately makes the method stable. So in the first step I have this decomposition; this K is actually U times S, and I can derive an ODE for it, which looks like this, and don't worry about the details. Then I do need a QR decomposition or an SVD, because I need to retrieve the two factors again.
So I take this K and extract the X factor and the S, which I can do using an SVD. Then I propagate S, for which I get just an ordinary differential equation for an r by r matrix. I form what I call L, which is S times the velocity basis, the V in the previous notation, I propagate this L, and then I do an SVD again to extract the separate factors. So it's a slight modification of what I've just shown, but that's the algorithm that we actually put into practice, written down for a kinetic equation, with all the notation: angle integrals and space integrals appear in this discretization as well. All right, so that's how it looks. Now, to put this into practice, I numerically represent these abstract basis functions: I have my abstract dynamical low-rank basis functions, something in space and something in velocity, and now I discretize them somehow. For example, in space I take finite volumes, or whatever you want to do; we know DG. And in velocity space I also do something: I might use a Galerkin method, I might use collocation, something like this. What I then need is an efficient way to compute these averages. That's the tricky part if you don't want to destroy the overall effort, but using quadrature rules you can actually do it. Then there is another point, which I might explain maybe a year from now at a conference where we see each other: the hyperbolic discretization becomes a little bit tricky here, because I have this term, a spatial integral of an X basis function multiplied by the derivative of an X basis function. If you imagine expressing this in terms of finite volumes, I have something discontinuous and I take a spatial derivative of it, so I need to play with fluxes and think about stabilization. We have something in the works, which is quite interesting, because you have to do something that you wouldn't expect to have to do, but I will be happy to report on this in a year from now. You need to be careful about this; that's the message. All right, so that works, and I'm going to show you a couple of numerical examples in a second. However, I also want to say that this type of method stays interesting, I think, for a lot of people in the community, because, while I hope I can convince you that it has potential, it has a lot of drawbacks that need to be fixed. There is no positivity of the probability density function. I want that, of course, but I'm working with orthogonal bases, and an orthogonal basis function has to take negative values somewhere, otherwise it couldn't be orthogonal to the next basis function. So these basis functions are naturally oscillatory, think of sine functions in a Fourier series, and I have no guarantee that my kinetic solution will be positive; I need to fix that. Conservation is also not built into this scheme, for the same reason: I'm using orthogonal bases, and I have not built in conservation, and I have not built in entropy. Again, this is pure approximation theory based on low rank, a totally different philosophy of deriving these schemes, so I don't have these properties and I need to work to get them.
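For concreteness, here is a hedged sketch of one step of a projector-splitting (K-S-L) low-rank integrator of the kind described above, written for a generic matrix ODE dY/dt = F(Y) and following the Lubich-Oseledets splitting with plain explicit Euler substeps. The speaker's actual scheme for the kinetic equation involves the angle and space integrals mentioned above and may differ in details (time integrator, ordering, the factorization used to re-extract the factors); note the minus sign in the S substep, which this splitting requires.

```python
import numpy as np

def ksl_step(F, U, S, Vt, dt):
    """One projector-splitting (K-S-L) step for dY/dt = F(Y), rank kept fixed.
    Explicit Euler is used inside each substep, purely for illustration."""
    # K-step: propagate K = U S with V frozen, then re-orthonormalize
    K = U @ S + dt * F(U @ S @ Vt) @ Vt.T
    U1, S_hat = np.linalg.qr(K)
    # S-step: note the minus sign required by the splitting
    S_tilde = S_hat - dt * U1.T @ F(U1 @ S_hat @ Vt) @ Vt.T
    # L-step: propagate L = S V^T with U frozen, then re-orthonormalize
    L = S_tilde @ Vt + dt * U1.T @ F(U1 @ S_tilde @ Vt)
    V1, S1t = np.linalg.qr(L.T)
    return U1, S1t.T, V1.T

# Toy right-hand side: Y' = A Y + Y B with A, B negative definite, so the
# exact solution decays like exp(-2t); all sizes and names are illustrative.
rng = np.random.default_rng(0)
m, n, r = 60, 40, 6
A, B = -np.eye(m), -np.eye(n)
F = lambda Y: A @ Y + Y @ B

U, _ = np.linalg.qr(rng.standard_normal((m, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
S, Vt = np.diag(np.linspace(1.0, 0.1, r)), V.T
norm0 = np.linalg.norm(S)                 # ||Y|| = ||S||_F for orthonormal U, V

for _ in range(200):                      # integrate up to t = 2
    U, S, Vt = ksl_step(F, U, S, Vt, 1e-2)
print("norm ratio after t=2:", np.linalg.norm(S) / norm0, "vs exp(-4) =", np.exp(-4))
```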
A final thing, among the many things we like in this community, is asymptotic preserving; it's Shi Jin's workshop here, so I need to say something about this. And that's also something that's not built in from the start. We have many strategies to derive asymptotic preserving schemes, and I think it's interesting to combine them with this; there are, I think, one or two works out there already on how to do that. So can I ask a question? Yeah. Well, I understand you don't have an entropy, but do you have L2 stability, those kinds of things? Yes, yes. There is a kind of L2 stability here, because that is, let's say, easy to achieve in this framework: I'm working with orthogonality and scalar products. But actually, I would say that is also something new. The thing I just mentioned about the stabilization: it is actually an L2 stability analysis that gives us the stabilization we need, and CFL conditions and things like that. Thanks. All right. Well, I've talked a lot about low rank, but I've not given you the motivation except for the low memory. So there is the question: why should we expect the solution of a kinetic equation to have low rank in the first place? You can think about this for a second; there is a very simple and convincing answer. Before I show it, and you're probably peeking at it already, here is something that I think is interesting for the analysts in this room, because the rank of a solution is really hard to analyze. It is not an object that you get from looking at a Sobolev space. The usual tools we use to characterize solutions of partial differential equations are more related to sparsity: if the solution is in a high Sobolev space, it means the coefficients of a Fourier expansion go to zero fast, so I can set many of them to zero. But the notion of rank is harder to grasp with the analytical tools that are out there. In that sense it is also interesting: how can I see, on an analytical level, whether the solution of an equation has low rank? But there is a striking argument in kinetic theory for why we should expect the solution to have low rank, namely the existence of asymptotic limits. If you look at the equation I've just written down, I put in the usual small parameter epsilon: long time observation, small source, strong scattering. Then, in the case of linear transport, as epsilon tends to zero we know that the solution f tends towards something isotropic; it doesn't depend on velocity anymore. And if the solution loses the kinetic variable, it is rank one in this notion: I can write it with just a spatial basis function, I don't even need the coefficient anymore, and the constant function in velocity space. So it is rank one. The obvious hope is that if you are close to the asymptotic limit, your solution is of low rank; it might not be rank one anymore, but it should be of low rank. All of this is essentially an intuitive argument; proofs would be interesting to see, actually, of how to characterize the rank analytically.
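The rank-one intuition near the diffusive limit is easy to check numerically on a toy example (my own, not from the talk): a nearly isotropic grid function f(x,v) = rho(x) + eps*g(x,v) has very few singular values above any reasonable tolerance.

```python
import numpy as np

nx, nv, eps = 256, 64, 1e-3
x = np.linspace(0.0, 1.0, nx)[:, None]
v = np.linspace(-1.0, 1.0, nv)[None, :]

rho = np.exp(-50.0 * (x - 0.5) ** 2)                          # isotropic part: rank one
g = np.sin(3 * np.pi * x) * v + np.cos(np.pi * x) * v**2      # anisotropic remainder
F = rho + eps * g                                             # nearly isotropic density on the grid

s = np.linalg.svd(F, compute_uv=False)
print("first five singular values:", np.round(s[:5], 6))
print("numerical rank (tol 1e-8 * s[0]):", int(np.sum(s > 1e-8 * s[0])))
```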
All right, so before I show you two numerical results, a little bit of a literature overview. This method is actually quite old: it was invented by Dirac for quantum dynamics in the 1930s, so 90 years ago. I realized when I made these slides that I can't keep saying 90, because we are getting close to 2030, so it's almost 100 years old. It was essentially rediscovered by mathematicians about 70 years later: Lubich, who is an ODE person, stripped it of the physics language and turned it into a numerical method. That was done about 10 to 15 years ago, and they developed this method for ordinary differential equations. Then, even more recently, a couple of years ago, Lubich and one of his postdocs realized that kinetic equations are a good application field for this, so they first published something on Vlasov-Poisson. And then other people jumped in, including myself and others in the community, starting with linear transport; there is an AP paper here which I should also have listed, and people are already starting to work on the issues I've just mentioned, conservation and all of these other things. All right, so just two numerical results to convince you that this method works, and that it actually also works far away from the asymptotic limit. This is a test case that we like in linear transport, called the checkerboard. It mimics a nuclear reactor, if you wish: there is a source of particles in the center, and then there are obstacles arranged like a checkerboard. You see this in the solution; where it is blue, there are obstacles, so the particles propagate through a field of obstacles that looks like a checkerboard. It's a complicated test case, in fact a torture test case that exposes the shortcomings of methods, and that's why we like it. At the bottom you see a resolved solution, which however has artifacts called ray effects; you see these rays, which are actually numerical artifacts, but it gives you an idea of how the solution should look. That's the time evolution here: the solution propagates outward and shows all of these structures. That was computed with a lot of degrees of freedom: a 250 by 250 spatial grid and 820 angular degrees of freedom, quite a lot, and you can do the math yourself and multiply these numbers. Then I show you here a rank 10 solution, and here a rank 5 solution, and you can also compute the reduction in memory and computational effort. You see that rank 5 doesn't seem to be enough, but rank 10 already captures the solution quite well, and it is a significant reduction, because I only have 250 squared times 10 plus 10 times 820, instead of 250 squared times 820. So it's a significant reduction, and it looks quite good. Another torture test case that we like is the so-called line source test case. It is basically computing a Green's function: the initial condition at time zero is a delta peak, which propagates outward into a medium. Again a torture test case. Here you see something highly resolved, a spectral Galerkin method in angle, the so-called PN solutions, and they usually show wave patterns; it looks more like dropping a stone into water than the actual analytical solution. What we can do now is crank up the angular degrees of freedom: this is P39, which is something like 1600 degrees of freedom in angle. But at the same time we can lower the rank; this is a rank 210 solution, whereas below it is a discretization that actually uses the full 1600 degrees of freedom.
And in a certain sense, dynamical low rank picks the right basis functions, so that the solution is much closer to the reference solution than this one, while actually using fewer degrees of freedom. All right, that brings me to the conclusion. I hope I could convince you that dynamical low-rank approximation is well suited for kinetic equations. I think there is a lot of work for everybody, and the method even seems to work a little bit further away from the asymptotic limit. It gets more interesting in 3D plus 3D, because then you have hierarchical tensors and different hierarchical tensor formats, and there is a lot that can be done there too. The main selling point is a huge reduction in memory footprint and also in computational cost. So that's the end of my talk; thanks a lot for your attention, and I'm happy to take questions. So I have a question, if I may. There is a class of methods now which people call tensor methods, and it seems very connected to what you are doing. Are there subtleties that I am missing, or is this in the class of tensor methods? Yeah, I don't know exactly what you mean by tensor methods, so let me guess: do you mean sparsity, like a sparse representation? Yes, but it's more than that. It is really writing the solution, as you do, as a sum of simple products; usually they put two factors, you put three, and then they sum up. Yes. So if you do 3D plus 3D, that's what you would do: you would go to a tensor representation of your solution. But one thing I want to stress again: there is a philosophical difference between dynamical low rank and the concept of sparsity. Sparsity means I do a numerical discretization in some way and get a huge tensor; for a 3D plus 3D kinetic density there are six indices, it's an order-six tensor. In this tensor I can do a hierarchical tensor decomposition, and there are many, many versions of this, and then I can demand sparsity, meaning I want most of my expansion coefficients to be zero, and there are many methods to achieve this, versions of singular value decompositions and so on. Dynamical low rank is a little bit different, because first of all it works on the analytical level: I have a low-rank decomposition of the solution itself, and I actually have evolution equations for my basis functions. So it's a slightly different thing. I hope that answers your question. Yes, absolutely, thank you. Another question? Yeah, I have a question. Is there an error estimate in terms of the rank? How do you know at which rank to stop; do you just try 100, 200? It is like a rank truncation, and usually there is some framework for error analysis. Is that possible for this approximation? Okay, let me answer that question this way. Lubich is a really very good ODE person; he wrote these books together with Hairer and Wanner, the bibles of numerics for ODEs. For ordinary differential equations there exists one paper with an error analysis, and he told me it is the hardest proof he has ever done for ODEs. So I think for partial differential equations it is out of reach for now, but I think it's a very, very interesting field of research. That's my answer, and I think I'm too stupid to do that error analysis if he tells me that it's hard.
But it's interesting. Yeah, I can send you that paper if you wish; I have some of his papers in this direction. May I ask a question, Martin? Sure. What do your low-rank equations become when you have boundary conditions, for example a boundary layer? Oh yeah, thanks for that question; I should have put this on my slides. That's difficult, obviously, because what's really hard is to combine this projection with boundary conditions. We actually have no good idea how to do that. Well, we do have an idea: you can do it basically formally, but the question is, for example in space, let me go back to the equations: this is an equation with a spatial discretization, and I would have to come up with boundary conditions for my spatial basis functions, and that's a hard thing, actually. We can basically take the boundary conditions and project them, but that's not a good idea. So that's something that's hard. It works for vacuum boundary conditions, and it obviously works for periodic boundary conditions, that's not a problem, but boundary layers go on the list of things that still need to be done. Okay.
|
The dynamical low-rank approximation is a low-rank factorization updating technique. It leads to differential equations for the factors in a decomposition of the solution, which need to be solved numerically. The dynamical low-rank method seems particularly suitable for solving kinetic equations, because in many relevant cases the effective dynamics takes place on a lower-dimensional manifold and thus the solution has low rank. In this way, the 5-dimensional (3 space, 2 angle) radiation transport problem is reduced, both in computational cost as well as in memory footprint. We show several numerical examples.
|
10.5446/53482 (DOI)
|
Thank you very much, and thank you for the introduction. I would like to thank the organizers for setting up this meeting; even if it is virtual, it is a real pleasure to see lots of friends, at least on video. Okay, so I'm going to try to give a rather superficial review of several simple models. So let me go to the list of models I want to consider. There will be essentially two parts in this lecture. One will be devoted to diffusion equations with some mean-field terms, except for the first model, by the way. And the simple idea is that a clever linearization, I mean a linearization adapted to the model, gives you a nice setting for the linearized problem and eventually gives you some information about the large time asymptotics. I'm going to stick to the easy side of the problem, which is the formal expansion; of course, in this business there is always something delicate, which is to do the Taylor expansion of the nonlinear quantities and justify that the remainder terms are small, and this can be quite tricky in some cases. That will be the main limitation. Then, in the second part, I will show a simple application to a nonlinear kinetic equation, and this will be an occasion to advertise a little bit for L2 hypocoercivity. So let me start with nonlinear diffusion. I will begin with the fast diffusion equation, which some of you have heard about for a long time; some of you here were among the first to work on the topic. Then I will move to the Keller-Segel model, which is a nice mean-field model, in the subcritical case, as I will explain. And a third model, if I have time, is a flocking model which was studied by Xingyu Li, one of my former PhD students. So let's start with the fast diffusion equation; there is only one equation, by the way. It starts with some work that I did with Adrien Blanchet, Matteo Bonforte, Gabriele Grillo and Juan Luis Vazquez. More recently we have been working on it again, in view of stability results, and I will touch upon this in the lecture; it is a very nice application of entropy methods, which in my opinion really opened a new area, and something I am actually still doing with Matteo Bonforte, Bruno Nazaret and Nikita Simonov, who is a postdoc here in Paris. Okay, so some basics about the fast diffusion equation. It is an equation on the whole Euclidean space, in dimension d, with a nonlinear exponent m, chosen less than one and close enough to one; I won't go very much into the details. And of course, when m goes to one, we just recover the heat equation. Now, there are plenty of properties, and I state them at a rather vague level. If m is bigger than (d-2)/d, then mass is conserved; this is an old result of Herrero and Pierre: if you compute the time derivative of the integral of u, it is equal to zero, the mass is conserved. Another interesting quantity is the integral of u to the power m, which introduces a further restriction on the exponent m, and its time derivative, up to a sign, is the integral of a square. Now, there is a very convenient way of rephrasing this by introducing u equal to f to the power 2p, where p is the exponent 1 over (2m-1). Remember, m is going to be close to one, so 2m-1 is close to one and p is close to one, both close to one, and the other limitation will be p less than d/(d-2), which corresponds to the Sobolev exponent.
2p is then at most the critical Sobolev exponent, at least in dimension bigger than two. Anyway, if you rephrase these quantities in terms of f, the mass is the L^{2p} norm of f raised to the power 2p, the entropy is the L^{p+1} norm raised to the power p+1, and the time derivative of the entropy is the integral of the gradient of f squared. These three quantities are related by Gagliardo-Nirenberg inequalities, which are written here. And it is a very special case in which you know exactly what the optimal functions are: they are given by the so-called Barenblatt profiles. The Barenblatt self-similar solutions are these B(t,x), which are a time-dependent scaling applied to some function g raised to the power 2p, where g is an Aubin-Talenti-type profile of this form. So it is very similar to the Sobolev inequality, except that we are in the subcritical range. Okay. So you see, by looking at the Gagliardo-Nirenberg inequality you get an idea of the growth of the integral of u to the power m, which governs the large time asymptotics, and if you use Renyi entropy powers, and Jose is one of the experts, with the typical scaling, then you can recover a lot of information about the asymptotic behaviour. There is another way of looking at this, which is to consider the equation in self-similar variables. What you do is rescale: essentially you introduce variables which look like the self-similar variables of the Barenblatt solutions, and by doing this you get again a fast-diffusion-type equation, except that you add a drift term, which corresponds to a harmonic potential. The fact that the coefficients are one just reflects the choice of scales. There are two quantities which are very nice. The first is the generalized relative entropy, or free energy in the language of thermodynamics, which is this quantity: essentially v to the power m minus the same quantity for the self-similar profile, the Barenblatt profile B, up to a linear correction term, so that the minimum of this convex functional in v is achieved when v is equal to B, at least when you normalize things properly; and the Barenblatt profile is then the stationary solution in the self-similar variables. So the Barenblatt profile is now simply B(x) = (1 + |x|^2)^{-1/(1-m)}; it is the analogue of the Aubin-Talenti functions, and it has zero relative entropy, which is the quantity we want to monitor. Now, what is quite interesting is that the time derivative of this relative entropy gives you minus the Fisher information, which is defined here, and moreover you have a relation between the Fisher information and the relative entropy, with an optimal constant, which is four; and this entropy-entropy production inequality is actually exactly the same as the Gagliardo-Nirenberg inequality of the previous slide. By the way, let me open a parenthesis: the idea I am exploring with Bonforte, Nazaret and Simonov is to go one step further, use regularity properties, and prove a stability result, in the sense that the difference of these two terms controls a distance to the manifold of optimal functions, with an explicit constant. But this is not today's topic, so let me close the parenthesis.
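As a rough numerical illustration of the relative entropy with respect to a Barenblatt-type profile, here is a 1D sketch (my own; the dimension- and m-dependent constants and normalizations used in the talk are not reproduced). The Bregman-type functional below vanishes at v = B and is positive for perturbations of B, consistent with the convexity argument above.

```python
import numpy as np

m = 0.75                                     # fast diffusion exponent, 0 < m < 1
x = np.linspace(-30.0, 30.0, 20001)
dx = x[1] - x[0]

B = (1.0 + x**2) ** (-1.0 / (1.0 - m))       # Barenblatt-type profile (1D, constants omitted)
phi = lambda s: s**m / (m - 1.0)             # convex for 0 < m < 1
dphi = lambda s: m * s**(m - 1.0) / (m - 1.0)

def relative_entropy(v):
    """Bregman-type free energy of v relative to B (crude Riemann sum)."""
    return np.sum(phi(v) - phi(B) - dphi(B) * (v - B)) * dx

v = B * (1.0 + 0.2 * np.exp(-x**2))          # a positive perturbation of B
print(relative_entropy(B))                   # ~ 0
print(relative_entropy(v))                   # > 0
```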
Okay, so now let me come a little bit closer to my topic, which is about linearization. You see, on the previous slide we used the self-similar variables to replace the large time asymptotics by the convergence towards the Barenblatt profile: you expect that the solution v of the rescaled problem converges to the Barenblatt profile B. Of course, it is very natural to make a Taylor expansion. So let's consider this v_epsilon, which is at the top here, and Taylor expand around the Barenblatt profile; there are various expansions one can do, and this is the one we will insert. If you do this and Taylor expand the relative Fisher information, what you get is simply an integral of a gradient squared, with a weight given by the Barenblatt profile. For the relative entropy, which is here, the Taylor expansion gives an L2 norm, but with a different weight. Okay. And of course it is a Taylor expansion around the Barenblatt profile: for the Barenblatt profile both terms are equal to zero, the first order terms vanish because it is a critical point, and we have to go to the second order Taylor expansion; this is why you have this epsilon squared. Now, no surprise: you have an inequality which relates the linearized relative entropy to the linearized Fisher information, and it is a kind of Hardy-Poincare inequality, where we recognize the weights, and they are not the same: you have B for the Dirichlet part, and B to the power 2 minus m, which behaves like an inverse power of 1 plus |x| squared, for the L2 norm. There is an explicit constant that you can compute; it depends on m and on the dimension, but in the regime we consider it is four, and the four should ring a bell: it is the four of the entropy-entropy production inequality. There is a condition for this: we need a zero average, simply because the weight is integrable; you could plug in constants, and the inequality cannot be true for constants. Now, in terms of large time asymptotics, if you linearize the equation and insert the weights in the right places, what you get is d_t f plus L f equal to zero, with the linearized operator written here. And you notice that if you work in L2 with the weight B to the power 2 minus m, then L is self-adjoint, and tested against f it gives precisely the Hardy-Poincare quadratic form. So all of this makes a lot of sense, and it gives you an asymptotic rate, which turns out to be exactly the same as in the nonlinear regime. Actually, this is something quite deep; it is related to the structure that you use in the Bakry-Emery method, and it goes back to some of the first works on the topic. But as I told you, there is much more to be done: exactly as in the stability result, all the difficulty lies in the justification of the Taylor expansion, using regularity properties, and this is definitely something I am hiding under the carpet. Okay. So we have this example, in which we have a nice linearization of the nonlinear problem: we have the linearized evolution operator, we have a nice functional space in which the linearized operator is self-adjoint and satisfies a Hardy-Poincare inequality, and we have spectral properties that we can relate to the nonlinear quantities. So let me now go to the subcritical Keller-Segel model and show you that you can do something very similar.
So basically what I'm going to present is something I obtained with Juan Campos, a former PhD student, a few years ago; I have been working on this more recently as well, but I won't have time to touch upon that. So, the Keller-Segel model: you have heard of it several times, including today during the meeting. Here it is a model in the whole plane, so no boundary conditions, and we are in the subcritical range, in the sense that I take a mass which is less than 8 pi. Global existence under a non-sharp condition on the mass was known from earlier works; our input on this model was to show that, using the logarithmic HLS inequality of Carlen and Loss, you can actually close the gap and get existence up to the critical mass 8 pi. Okay, and the nice property of this model is that you have a free energy, this F of u, which is shown here. It has the usual form that you expect: the first term is u log u, which comes from the diffusion, the Laplacian here, so the logarithm makes a lot of sense; and it is coupled with the Poisson equation, which makes the model nonlinear, because the chemoattractant c is linear in u and it multiplies u, so the interaction term is quadratic. This is why we have this quadratic term here, which shows up with the wrong sign in some sense, because it is the attractive case, so it is negative. However, you have two properties. The first one is that the logarithmic HLS inequality tells you that the free energy is bounded from below, and the good property that you can immediately check from the equation is that the time derivative of the free energy is minus a kind of Fisher information, which of course involves the self-consistent drift term, which is good. By the way, global rates of convergence to the self-similar solution are not known, but at least we can get the asymptotic rate. So here is how it goes. First, let's do the rescaling. This is very natural in the sense that, instead of looking at a solution which spreads out, you want to look at something which converges to a stationary solution. So you rescale, and you rescale with the self-similar scaling of the heat equation; this makes a lot of sense because of the Laplacian, of course, but also because in dimension two the drift, given by the Poisson equation, that is, as a convolution with the Green kernel, has exactly the same scaling properties. So we go from u on the previous slide to n by this time-dependent scaling, and we are in the subcritical mass range, which means that there is not enough mass to aggregate the solution. In other words, the diffusion wins over the drift, and the solution spreads out with a very specific profile, which you can characterize after the change of variables. This is something we have studied in detail with Adrien Blanchet and others, and the point is that the rescaled solution n converges, say in relative entropy, towards some limiting stationary solution, and the same for c, the concentration of the chemoattractant, whose gradient gives the drift and which is determined by the Poisson equation of the model.
So what you expect is some intermediate asymptotics in the original variables, in the sense that the L1 norm of the difference between the solution and the self-similar solution converges to zero. Good. Stationary solutions are given by a Poisson-Boltzmann type equation, which is one of those equations with no explicit solutions in general; here you can see just a plot. There is an explicit solution in the limiting case M equal to 8 pi, which is the bubble that shows up in many variational problems, something like one over (1 plus |x| squared) squared up to constants, if I remember correctly. For M less than 8 pi you can solve the equation numerically, which is easy because you know the solution is radial, and when M approaches 8 pi, at least when you scale the solution properly, you see that it approaches the 8 pi bubble. Now, the idea is to linearize around the stationary solution, or rather to take the difference with the stationary solution and introduce a relative quantity, f equal to n over n_infinity minus 1, and the same for the chemoattractant, which represents the perturbation. So you can rewrite the nonlinear Keller-Segel model as an evolution equation with a linear term plus a quadratic term, which corresponds to the nonlinear part of the drift. And you see, the chemoattractant perturbation is linear in f through the Poisson equation, so here you get a quadratic term, as can be expected. The linear operator, well, I write it a little bit in the same form as I did for the fast diffusion equation, by inserting weights built from the stationary solution. For the linearization you can compute the eigenvalues, and here is what you see. Actually there is one more, at level zero: there is an eigenfunction which corresponds to the mass degree of freedom; since the mass is fixed along the evolution, we will always work orthogonally to this one-dimensional direction. Then there is a first eigenvalue, and as you can see it is exactly at level one; it is associated with the translation invariance, which is why it is exactly one. The range here depends on the mass; 8 pi is about 25, which is why the plot stops there. There is another eigenvalue which is associated with some scaling properties of the equation, not exactly the scaling invariance, it is a little more subtle. And then you have other eigenvalues which correspond to higher modes when you make an expansion. So at least you can identify the lowest positive modes, which are 1 and 2, and this is something you can exploit. The first observation is that you can write the free energy in terms of the relative entropy, which is what is written here, and this is nice because it is written as a sum. Now you can Taylor expand around the minimizer, and this gives you a quadratic form. It is a nonnegative quadratic form, and it is actually positive definite on the orthogonal of the kernel; here again we use the zero average condition for the perturbation. And this is great, because it allows you to define a scalar product.
Now for the interaction term: this term can be rephrased as an integral involving a gradient, which can be done in dimension two because the relative mass is zero, and in this form it is dominated by the term coming from the entropy part. So we have a quadratic estimate which explains why Q1 is a nonnegative quadratic form. Using this quadratic form to define a scalar product, you now have a nice functional setting, and the next observation is that the linearized evolution operator associated with the Keller-Segel equation is actually self-adjoint for this scalar product, and that L tested against f is precisely the Taylor expansion of the Fisher information, as it should be. Let me insist on the fact that this scalar product is not completely standard: there is a weighted L2 part, where the weight is the stationary solution, but there is also a term which comes from the nonlinear interaction through the Poisson coupling, and it is a nonlocal correction to this weighted L2 scalar product. Okay, so in this setting you can check that L has purely discrete spectrum; it still has the eigenvalue one, and, I leave the details to you, this gives you that the decay is like e to the minus t, or e to the minus 2t if you fix the center of mass appropriately. That is exactly the kind of result you want; but again it is true only asymptotically, and of course you have to justify the expansion and control the nonlinearities. Good. One more example that can be done, and this is the work of Xingyu Li, I just want to show a few slides, is a Cucker-Smale type model for flocking. It is a diffusion in velocity with a drift term corresponding to alignment. Think of a flock of birds: they fly, in one or two dimensions for instance, they are free to choose their orientation, and there are preferred velocities; it is hard to fly at zero velocity and it is hard to fly very fast, so the preferred velocities are plus and minus one in 1d, and in higher dimensions think of a sphere of preferred velocities, with rotational symmetry. The alignment interaction tilts this potential: when u_f, which is simply the average velocity of the population, is nonzero, we get an effective potential which is tilted. This is a very, very basic version of these flocking models. Okay, and then there is an important parameter, which is the diffusion coefficient, and the question is what happens. If D is very large, there is just chaos: there is too much randomness, the birds cannot align, and the only stationary solution has zero average velocity, which means that they are not coordinated. This corresponds to the regime where D is bigger than some threshold, which depends on the dimension. Of course u_f equal to zero, the symmetric case, is always a solution, but the point is that when you decrease the diffusion coefficient below the threshold, there is a branching, and you have a phase diagram which is nontrivial: you get stationary solutions with nonzero velocity. Think of 1d: you always have the stationary solution with zero velocity, but you also have a solution with nonzero velocity and, by
symmetry, of course, you have the one with the opposite velocity; in higher dimensions, think of a whole family of solutions obtained by rotation, with nonzero speed. Okay, question: can you say something about the rates? The answer is yes, and it is exactly the same strategy as what I showed you for the Keller-Segel model. There is a relative entropy, there is a Fisher information, you have a decay estimate; you have to be careful because the relative entropy is computed with respect to a stationary solution which itself depends on u_f, the average velocity of the population. One of the results is, well, essentially it goes through the same strategy: you have the expansion of the free energy, which gives you a nice norm and hence a scalar product; you have the expansion of the Fisher information, which gives rise to a Dirichlet-type integral with an appropriate weight for the perturbation; the quadratic form is positive on the relevant subspace, and you have a coercivity estimate which gives you exponential convergence. Well, actually you have to be careful: as you have seen, the phase diagram of stationary solutions is complicated. If you are in the disordered region, you converge to the solution with zero average velocity; if you are in the ordered phase and the free energy is below that of the disordered stationary solution, then you are forced to select one of the ordered solutions, you monitor the solution, and then you can use the expansion to get the exponential rate. Let me skip this, because I am going to take too much time, and jump to my final point, which is about the extension to kinetic equations. So, I am going to rely basically on a method that I started to develop with Clement Mouhot and Christian Schmeiser, which has had further developments, including the ones mentioned by Clement Mouhot on Monday morning, and I will rely essentially on a paper with Lanoir Addala, Xingyu Li and Lazhar Tayeb. So, hypocoercivity methods: basically you have two methods. One is the H1 method, developed by several authors, which amounts to working with a twisted H1-type functional with mixed terms. Let me skip this and go directly to the L2 hypocoercivity method, which is based on the idea that you want to identify the rate coming from the macroscopic diffusion limit. So take the diffusive scaling of the kinetic equation, where T is the transport operator and L is the linear collision operator, like a scattering operator or a Fokker-Planck operator, and the idea of the diffusion limit is that, at leading order, the distribution is determined by the spatial density, and the spatial density solves a diffusion equation, which is shown here. In this type of approach, the key tool will be the underlying functional inequality: typically, a Poincare inequality at the macroscopic level, which can be written in this form; that will be the main tool. Now, it is standard that if you compute the time derivative of the L2 norm for this abstract evolution equation, and if L is a degenerate operator in the sense that it has a huge kernel, for instance any local Maxwellian, whatever the spatial density, is in the kernel of the Fokker-Planck operator, then you cannot expect to get exponential decay this way, because the dissipation only controls the projection onto the orthogonal of the kernel.
So the idea, as in all hypocoercivity methods, is to change the norm a little bit and to introduce a twist. We change the L2 norm by twisting it; this is like the matrices C and P that Anton was showing for the modal approach. In our case we build an operator: you recognize (T Pi)* (T Pi), with T the transport operator and Pi the projection onto the kernel. (T Pi)*(T Pi) is like a diffusion: if you have no external potential, it is minus the Laplacian in the diffusion limit. So A is (1 + (T Pi)*(T Pi)) inverse composed with (T Pi)*, and what matters, when you compute, is A T Pi, which is typically minus the Laplacian composed with the inverse of (1 minus the Laplacian). So if you have a Poincare inequality, then the term A T Pi controls the macroscopic part with a spectral gap, and the macroscopic part is exactly what you do not control when you just compute the derivative of the L2 norm. By combining the two, you get a decay estimate; this is just to give you an idea of how the decay is obtained. Application to the Vlasov-Poisson-Fokker-Planck system: let's take a very simple case, where the external potential is |x| to the power alpha with alpha bigger than one, so that we have a Poincare inequality, and you have a coupled Poisson equation which gives you the mean-field potential; as for Keller-Segel, you have this product term on the right-hand side, which is what makes things difficult. Now, when you linearize, you Taylor expand, collect the linear terms on the left-hand side, and what is left is just the product term, which is here. So h is the perturbation, written in relative form, f_star times h being the perturbation of f_star, the stationary solution, and you just linearize, dropping the quadratic term; it is brutal. Now, be careful: there is a term here, coming from the mean-field coupling, which is not proportional to h itself; there is no h here. This potential term comes from h through the Poisson equation, so it is linear in h, but it is not h. You have to be careful; this is the linearized system, that is what I am saying. Okay, so this is the one we can use. The result is that, using our twist, and the method is really like a machine, you get a decay estimate, and it is a typical hypocoercivity result in the sense that there is a price to pay: there is a constant C, with C bigger than one, in front of the exponential decay. What is the lambda? Well, the lambda can essentially be related to a spectral quantity of the limiting problem, so you know roughly what the best rate should be. But of course the price you pay is the constant C, and, to be honest, when we introduce the twist we lose a little bit, so the lambda is not exactly the best you can get either. Anyway, if you do an expansion in epsilon, and let me skip the details, you of course recover at first order the diffusion equation in this regime, and if you go to the results, what you get is a rate of convergence which is consistent with that of the limiting diffusion equation. So far so good; this is at the level of the linearized equation. Actually, it also holds in the diffusive scaling, with the epsilon, and the big point of such approaches is that the lambda and the C that you get do not degenerate as epsilon goes to zero, if you put the epsilon in front of the time derivative and the 1 over epsilon in front of the collision operator. So the estimates are uniform in the diffusion limit, and this is what makes the difference.
Now, what can be done for the nonlinear case? Not so much, actually, because we lack the estimates that would be needed for large times. Some things have been done: people are able to handle initial data close enough to the asymptotic stationary solution, in a perturbative setting, but that is a restrictive framework. In dimension one you can do more, because in one dimension the Poisson coupling is much easier to control, but in higher dimensions this remains essentially open. So let me stop here, and thank you very much for your attention. Well, in the meantime, let me ask a question to Jean. In the last part, do you have the same difficulties for potentials with slow growth at infinity, for the Fokker-Planck equation with |x| to the power alpha, I mean in the limit alpha equals one? So, for Vlasov-Poisson, depending on the growth of the potential. Yeah, okay. So I think in that case it is not done yet, but if you have a potential such that there is a stationary solution, while the growth is not enough to have a Poincare inequality, then you can use weak Poincare inequalities, or some Hardy-Poincare type inequalities with weights, which typically involve moments. This is completely under control without the Poisson coupling, and I don't think that the Poisson coupling is going to change much, but of course you have to make all the checks. You know, in the abstract hypocoercivity method you have essentially four assumptions; one of them is the Poincare inequality, or the weak Poincare, or the Hardy-Poincare, whatever inequality you want to use, and you have to prove that the perturbation by the Poisson term does not change the nature of the inequality. Of course, it is going to change the constants. I think it is rather clear that it would work. Okay. So there are two types of directions in which you can extend: typically, you change the local equilibrium, or you change the confining external potential, and each of them gives rise to its own inequality. And basically, the rate that you get is the minimum of the two rates: either the rate of convergence towards the local equilibrium for the space-homogeneous model, or the rate of the diffusion limit. Let me just leave it at that final prediction.
|
This lecture is devoted to the characterization of convergence rates in some simple equations with mean field nonlinear couplings, like the Keller-Segel and Nernst-Planck systems, Cucker-Smale type models, and the Vlasov-Poisson-Fokker-Planck equation. The key point is the use of Lyapunov functionals adapted to the nonlinear version of the model to produce a functional framework adapted to the asymptotic regime and the corresponding spectral analysis.
|
10.5446/53484 (DOI)
|
I should start by mentioning that the work I'm going to report on today comes in two pieces with different co-authors. The first piece is with Amit Einav, Beatrice Signorello and Tobias Wohrer, and the second part is with Jean Dolbeault and Christian Schmeiser. Here is a short outline of my talk. As a long and very simple introduction, I will start with hypocoercive ODEs; the goal is to show how to obtain sharp decay estimates by writing down, or constructing, appropriate Lyapunov functionals. Then, as the main essence of the talk, in points two and three I will discuss the Goldstein-Taylor model, which is a two-velocity BGK model if you like, in two situations: first on the torus and then on the real line. But you should think of this very simple model just as one possible application, used to illustrate how to apply the methodology that I'm describing here. So let me start with the simple ODE story. I am looking at this linear ODE with a non-symmetric matrix C, and I call this matrix coercive if it satisfies this inequality for some positive kappa. Let me jump right to one example, with this matrix here: the symmetric part has the entries one and zero, so obviously this matrix is not coercive. Still, the eigenvalues are one half plus or minus an imaginary part, which tells us that all solutions of this simple ODE decay with the exponential rate one half. In such a situation, with a non-coercive matrix, you will not be able to extract this exponential decay by a trivial energy method, by which I mean multiplying the ODE from the left with the transpose of x. You can see this in the plot: I plot the norm of the solution x as a function of time, and you see that it goes down in these wiggles; at points like here or here you have horizontal plateaus, and it is exactly because of these plateaus that the energy method does not work. Still, if you introduce this modified norm, with a matrix P that is adapted to the problem, and in this example the matrix P has the entries two, one, one, two, then you see a very nice convex decay, a perfect exponential decay, like this red curve. So one natural question is, of course: given such an ODE, how do you find or construct this matrix P, which gives you this modified norm as a Lyapunov functional? Just a little bit more terminology: I call a matrix C hypocoercive, in analogy to PDEs, if it has a spectral gap, so all eigenvalues should have a strictly positive real part, the spectral gap being mu. Typically, for matrices, you would of course call this positive stable. In such a situation, if you assume that all eigenvalues are non-defective, which means there are no Jordan blocks, then you have this exponential decay of the solution, and the spectral gap constant mu will always be larger than or equal to the coercivity constant kappa that we saw on the previous page. A simple condition that gives us this hypocoercivity on the ODE or matrix level is the following: if I decompose my matrix C into a skew-symmetric part and a positive semi-definite Hermitian part, then I request that no subspace of the kernel of the Hermitian part should be invariant under the skew-symmetric part. This condition may remind you of the condition for hypoellipticity, for example in Fokker-Planck equations.
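The plateau phenomenon is easy to reproduce numerically. In the sketch below, the matrix C is an assumption consistent with the description (symmetric part with entries one and zero, eigenvalues 1/2 plus or minus i*sqrt(3)/2), while P = [[2,1],[1,2]] is the matrix quoted in the talk; the instantaneous decay rate of the Euclidean norm drops to essentially zero at the plateaus, whereas the rate of the P-norm stays at 1/2.

```python
import numpy as np
from scipy.linalg import expm

C = np.array([[0.0, -1.0], [1.0, 1.0]])     # assumed example matrix (not coercive)
P = np.array([[2.0, 1.0], [1.0, 2.0]])      # the P quoted in the talk
x0 = np.array([1.0, 0.0])

ts = np.linspace(0.0, 10.0, 4001)
xs = np.array([expm(-C * t) @ x0 for t in ts])          # solves x' = -C x

euclid = np.linalg.norm(xs, axis=1)
pnorm = np.sqrt(np.einsum('ti,ij,tj->t', xs, P, xs))    # modified norm ||x||_P

rate_e = -np.gradient(np.log(euclid), ts)
rate_p = -np.gradient(np.log(pnorm), ts)
print("min decay rate, Euclidean norm:", rate_e.min())  # ~ 0 at the plateaus
print("min decay rate, P-norm:       ", rate_p.min())   # ~ 0.5, the sharp rate
```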
So now let me briefly illustrate how to obtain this matrix P that gives us the Lyapunov functional, the modified norm, for the problem. In this context we have a very simple lemma: assume that the given matrix C has a spectral gap, which I call mu, and assume that all eigenvalues whose real part is exactly equal to mu are non-defective; on this spectral-gap line there should be no Jordan blocks. Then the statement is: there exists a symmetric positive definite matrix P such that you have this matrix inequality. The purpose of this matrix inequality, to anticipate a little, is that this mu will in the end give you the decay rate, in this case the sharp decay rate, because it is exactly the spectral gap. If you do have non-trivial Jordan blocks on this vertical spectral-gap line in the complex plane, you lose an epsilon of your decay rate. Let me now briefly show you how to construct such a matrix P: take all eigenvectors of the transpose of the given matrix C, tensor them in this way, and sum over all these eigenvectors. Then this matrix P is one that satisfies the Lyapunov matrix inequality. I should say from the very beginning that this matrix P is not unique, but no matter which of the possible P's you choose, the decay rate mu that you obtain in the estimate is independent of that choice. And what I've shown here for real matrices works exactly the same way for complex matrices. So, a very simple computation to show how this matrix inequality is used. Here is the Lyapunov functional, the modified norm with the matrix P in the middle. If you differentiate this norm squared, then from the ODE you get one matrix C to the right and its transpose to the left, and in the middle you see exactly the matrix combination from the previous page. For this combination of matrices we have the Lyapunov inequality, so we can estimate it by 2 mu times the matrix P itself; we are back to the modified norm, and we obtain the exponential decay estimate. Let me mention that strategies like this have already been used for BGK and Fokker-Planck equations. Just a final remark on the ODE level; I am coming back to the very same example from my first page. Here I show you a picture in the phase plane, in a two-dimensional setting, and this blue spiral is one trajectory of the problem. Of course it spirals into the origin, because the problem is asymptotically stable, but I want to point out that whenever this blue spiral crosses the x2 axis, for example here, it is tangent to this black circle, which is a level curve of the Euclidean norm. This means that exactly at this point the Euclidean norm of the solution does not strictly decay; this is where you have these plateaus, and the same thing down here. So the way out was to introduce the distorted norm with this matrix P; the corresponding level curves look like this red ellipse, and all the spirals always intersect these new level curves at a non-trivial angle. So in this weighted norm you always have strict decay.
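Here is a hedged sketch of that recipe: build P from the eigenvectors of C^T and check the matrix inequality numerically. The weights in the sum are left implicit above; equal weights happen to work for this example, but in general suitable positive weights have to be chosen.

```python
import numpy as np

def lyapunov_P(C):
    """Build P = sum_j w_j w_j^* from eigenvectors w_j of C^T
    (equal weights; in general suitable positive weights may be needed)."""
    lam, W = np.linalg.eig(C.T)
    mu = lam.real.min()                      # spectral gap (non-defective case)
    P = sum(np.outer(w, w.conj()) for w in W.T)
    if np.allclose(P.imag, 0):
        P = P.real
    return P, mu

C = np.array([[0.0, -1.0], [1.0, 1.0]])      # same assumed example as before
P, mu = lyapunov_P(C)
M = C.T @ P + P @ C - 2.0 * mu * P           # should be positive semi-definite
print("mu =", mu)
print("P (any positive multiple works):\n", P)
print("min eigenvalue of C^T P + P C - 2 mu P:", np.linalg.eigvalsh(M).min())
```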
So we have two species of say particles, one those with the plus that move with velocity one to the right, the other one with velocity minus one to the left. And we have a relaxation rate that may or may not be space dependent. So this Goldstein-Teylor model can be seen as a very simple toy model for a BGK equation. As I mentioned before, it's not the model itself that is of our interest here, but it's rather used to apply those tools. Original applications of this Goldstein-Teylor model go back to simple models for turbulent fluid motion or can be equivalently written as the telegraph equation. So in order to analyze that, I first introduce a mass density u, which is just the sum of these two families, f plus and f minus, and the flux density, which is the difference of these two variables. So in these new two variables u and v, the model can be rewritten like this. And the goal first on the level for the case of the torus is that for both of these functions u and v, I want to establish sharp exponential decay rates to the steady state. So the u infinity is the constant x independent, that is twice the f infinity, which is defined as the average of the two initial conditions. So just take the initial condition for the plus and the minus family and take the spatial average. This is the constant that the mass density will converge to, and the flux density will converge to zero. First I will discuss that for constant sigma, which is the easy case, the very well-known case, and then I want to use these ideas for x dependent relaxation rates. So in the case on the torus with a constant relaxation rate sigma, I Fourier transform the equation or the system for u and v from the previous page. So I have these two by two ODE for each spatial mode with the index k, with the integer index k. And this matrix that appears here is what I call the matrix Ck. So first, if we sort out the zero mode, so here these elements are then zero, so we see that the u and the v for the zero mode are decoupled, you have immediately their solution. So let's go to the other modes. Here let us first look at the eigenvalues of this matrix Ck. So when k, when the index k is small, you have two real eigenvalues like this solid dot and this dot here, and then when the modal index k becomes large, you have complex conjugate eigenvalue pairs like these two yellow dots. So they are here on this vertical line. So let me distinguish between three cases now, depending on the modal number. For the low modes, as I said, we have two real eigenvalues. And what is more important for the exponential decay is this small one. So what I call the lambda minus, that's this solid dot, because this will determine the final global exponential decay rate. And for this ODE here in this first line, we can construct this scaling matrix, what I called before the matrix P depends on the index k, of course. And if you computed in this example, this is what you get. I now, and this matrix P is a function of the modal number k. I now want to interpret this matrix family with the index k in the physical space. And this can be seen as this differential operator or the matrix with these differential operators here in the off diagonal terms. Then we have a case two, which may or may not be present. So when the modal number k is equal to the decay rate sigma over two. So this would be an eigenvalue, a double eigenvalue here at the intersection of this vertical line here at around 2.5. This case, the ODE is defective. 
So the corresponding decay would be an exponential factor plus multiplied with this linear term here. I will skip the defective case just assuming that the sigma over two is not an integer, just to avoid technicalities. Then we have the third case. In the third case, this is for the large modes. We have complex conjugate eigenvalue pairs. The real part is always sigma over two. And the corresponding scaling matrices look like this. So again, I want to consider them as a family of matrices with this modal index k. Now when I consider this matrix family in physical space, they become a family of pseudo-differential operators with this inverse derivative operator here in the off diagonal term. So with this preparation, we can now from a global point of view for the PDE distinguish three situations. In situation A, when sigma, the relaxation rate is smaller than two. In this case, all the modes are in the last case. So let me briefly flash back. So in this situation A, all eigenvalues, these complex conjugate pairs, they are on this vertical line. So they all have the same real part sigma over two. This means if I want to construct a Lyapunov functional for this example based on the modal information, I call this E for my Fourier transform variables u hat and v hat, I just add up all the modal contributions of these modes or of these norms with the modified norm p depending on k. So since this matrix, last matrix pk, briefly flashback, can be interpreted in physical space as the pseudo-differential operator, I can write down this Lyapunov functional as this pseudo-differential or here the two norms of the two components. And then for the mixed term, I get this entire derivative on the variable u. I put a tilde here because I subtract for technical reasons the average of this variable u of the mass density, which is preserved in time. But the situation B is when sigma is equal to two. This is defective and here for the purpose of this talk, I will skip that. Then we have the relevant situation C where the relaxation rate is larger than two. In this case, we have a mixed situation. The low modes are in the first case from the previous slide. So they correspond to real eigenvalues and they will only determine the global decay rate. And the high modes are in case three. There are these complex conjugate eigenvalue pairs. So if we again think back to the same strategy that we have done here, we would get two different simple matrices, simple for the pseudo-differential operator for the low and the high modes. So if you want to translate that back to physical space, we would have a pseudo-differential operator with a non-smooth symbol, which would be very unpractical to do computations on. So there is a remedy for this situation where we have these mixed modes. So first of all, let us observe that is only the lowest mode when k is equal to plus minus one, which actually determines the global decay rate. This is the spectral gap of the PDE that is inherited from the modal discussion. So that tells us that in fact, although we have computed the decay rates for the higher modes, they are faster than the global decay rate. So there is some margin where we can give up the sharpness in order to adjust the decay rates. So this means for the higher modes, it is fine to use suboptimal modal norms because their decay rate is in some sense too high anyhow. It doesn't matter if we lose a little bit there. 
So for modes with index two and higher, we will construct now or invent a different norm, not the one that we have derived systematically before in order to be able to combine this setting of treating the lower and the higher modes separately. So here in this small lemma for this situation sigma larger than two, I just cook up a new matrix PK. It looks similar to what I have shown you before, but in fact, it is different. It is different for all k. And again, this matrix family can be interpreted as a pseudo differential operator in this form. And the essential feature is on the matrix level, it satisfies this Lyapunov matrix inequality for all modes k. The important aspect is here, this family of Lyapunov matrix inequalities is uniform with respect to this mu, which is the sharp global decay rate. And the advantage over what we have derived before is here I have one shape of a matrix PK that can be easily translated in just one pseudo differential operator that will take care of all the modes. So in other words, this family PK that I've just found here is good enough for the optimal decay in the sense that this matrix that I just cooked up is for the lowest mode, the same one as the one that I derived for the lowest mode. This is important because this is the only mode that counts. For the other modes, it's different, but it gives you the same decay rate. And as I just mentioned, this matrix family gives rise to this simple pseudo differential operator. So with this information, I can now use this matrix PK as a Fourier symbol and the corresponding Lyapunov functional is of this form. So first there are two norms of the two components of the solution. And then for the mixed term, we have this anti-derivative. And the result that one can find with this Lyapunov functional is for small relaxation rate sigma, you have exponential decay of this Lyapunov functional with the sharp rate sigma. For the case where you have a large sigma, large relaxation rate, you can use the same Lyapunov functional. Just observe that the index up here was sigma, and now it is 4 over sigma. So you just have to adapt the mixed term, the strength of the mixed term. And again, you recover the sharp decay rate in this case. Now if you wish to go back to an L2 estimate, you observe that this Lyapunov functional up here is equivalent to L2 norms. And at this point, let me mention that these sharp rates and essentially also these Lyapunov functionals could also be recovered from this paper from Dolbo-Murray and Schmeisser from 2015 if one optimizes a tuning parameter there on a modal level, a tuning parameter that is included in their functional. So what I want to do the last step is from these decay estimates on the Lyapunov functional, I want to go back to L2. So here I show you the L2 decay estimates with the sharp rate. So here's the final result, which is in this joint work with enough signorello and Vera. Here this mu of sigma is the sharp exponential decay rate and here explicit multiplicative constants for these optimal decay rates. Just a phenomenological interpretation of what is happening here. In this plot at the bottom, I show you this decay rate here. This reads mu of sigma is a function of the relaxation rate. So what happens in this Goldstein-Taylor model is first if you increase the relaxation rate, the decay will increase linearly up to this point here, which is the point where you have defectiveness in the ODE. 
And then the exponential decay rate will reduce again and phenomenologically the reason for that is you have larger and larger relaxation. So naively one could think the convergence to equilibrium will also increase. But you switch so quickly between these two families with the two velocities that you don't let enough time that you also have transport in the x direction. And therefore the equilibration in x happens slower and slower. So now let me briefly show you the interesting extension when the relaxation constant is x dependent. So the unpleasant situation is of course a modal decomposition doesn't work anymore when you have space dependent coefficients, but still the Lyapunov functional that I've shown you before with this anti-derivative can still be used. I will not show you the estimates because they are rather technical, but here is the result. So with the same Lyapunov functional, you can get an explicit decay rate. The index has to be chosen here in relation to the lower and the upper bound for your relaxation rate and you also find an explicit exponential decay rate. So when doing so, there are pros and cons. So first of all, this strategy that I just sketched here can also be applied to BGK models with more than two velocities. And in this paper that I mentioned before, we have done it for three velocities, for example. There is a negative aspect. This constant alpha star for x dependent relaxation rates is not optimal. So a brief review to the literature for two velocities, the sharp decay rate for the case where sigma is x dependent is included or is the essential message of this paper by Bernard and Saivarani. But since this paper is based on the equivalence to the telegraph as equation, there's no chance to extend their strategy beyond two velocities. So now let me briefly towards the end of my talk give you a hint how to extend the whole story from the torus to the real line. And here for the moment, I will only talk about a constant relaxation rate one. So here's again the Goldstein-Taylor equation. Now on the real line, I introduce u as the mass density and v as the flux density. And I Fourier transform the system. It looks like in the torus case. The only difference is the modal number xi is now a real index. So compared to the torus case, there are now three problems that we should consider. First of all, let me look at this plot here. So I plot, let's first look at the orange curve. The orange curve is the decay rate, I call it mu, is a function of the modal index xi. So first of all, we see that the decay rate, the modal decay rate, will vanish when xi becomes closer to zero. So there is no uniform lower bound for the modal decay rates, which tells us immediately it is completely hopeless to dream of an exponential L2 decay in this problem on the real line. The only thing what we will get therefore is an algebraic decay. Second problem that we have to consider. You see that this orange curve has two distinctly different branches. The first curve which looks like a parabola and then this constant. So this is a little bit like the situation C on the torus, where we also had this mixed case between the low and the high modes. And there we also had to think of how to cook up a matrix P, the scaling matrix on the modal level in order to be able to treat both cases simultaneously. 
So the problem that we have already seen on the torus there would be if we would use the sharp decay function, this orange function, we would get a pseudo differential operator with a non smooth symbol. Third problem that is present here. In the torus case, I avoided for the purpose of the torque the defective situation. So that's when xi is equal to sigma of the two over two. By just saying sigma over two is not an integer, I don't consider these problems. And then I could just skip that. On the real line, this defective situation, this point cannot be skipped. It will always be there. And if you look at the level of estimates, the modal estimates, several constants will blow up because in that case, you simply don't have an exponential decay. So therefore, one possible remedy that I'm going to show you today to solve all these three problems is to somehow give up a little bit on the best possible decay estimate here in this intermediate region here and approximate this orange curve by this dash blue curve. So you see here close to zero, this is a really good approximation. And then for large side converges to this constant. So how can you do this approximation? Well first, you again have to approximate the scaling matrices. And this is cooked up. There's no systematic way to derive that. There you have to play a bit. And in fact, this matrix family now as a function of the modal variable xi turns out to be a really good approximation of the sharp, the optimal scaling matrices in both limiting regimes for small modes and for the large modes. Then from this cooked up scaling matrix, you can compute the corresponding spectral gap for the modal ODE's. And this is the function that you get. So this function, of course, doesn't tell us much. If you look back, this is exactly this blue curve. And we see that this is a reasonable approximation, at least in the limiting regimes for the perfect decay function. And it avoids the defective case, the technical one, because we have given up a little bit on picking out or following the sharp decay rate. So having this approximation, one can again write down a globally appanoff functional just by integrating up all the modal norms with the scaling function p tilde of xi that you see on the first line. And since this is a rational function of the modal variable xi, this can be translated into physical space with a slightly more complicated pseudo differential operator that appears here. But at least it has a smooth symbol. So there is hope to be able to do computations with such a Lyapunov functional. Actually in the proof that I'm going to show you on my next and in fact last slide, we used the L2 decay on the modal level. So here the approximated decay function and here the multiplicative constant, which comes about, so this is the numerical condition number of this matrix p, and this comes about by going back and forth between the p-norm and the Euclidean norm. So here is the final result that we obtained with Jean-Christin Schmeisser and Tobias Wörer. So in L2, our solution with the components u for the mass density and v for the flux density. So this decays as a function of time and on the right hand side we have this nasty term. So before going a bit and explaining a bit what we see here on the right hand side, let me briefly mention what has to be done here. You have to split the treatment in the estimates between the high and the low modes. Say high modes larger than some constant r. 
For the high modes, you still have exponential decay in L2 because those modes have a decay rate that is uniformly bounded. This is what we see here in this last term. So this is bounded by the initial, by the L2 norm of the initial data and here a uniform exponential decay rate. For the low modes, however, where you don't have a uniform lower bound, we invoke here the L infinity norm of the Fourier transform solution in order to be able to estimate that. And then one is able to extract at least algebraic decay. And this is what we see here in this first term. So this corresponds to the L infinity norm here and here's the algebraic decay rate. In the end, you also see an infimum over this truncation parameter r so you can optimize with respect to r. So let me conclude. In my talk, I've started with model ODE's and I've shown you how we can derive sharp exponential decay rates. And then I have applied that to a simple PDE model. First I've shown you how the scaling matrices can give you a hint how a pseudo differential Lyapunov function can be constructed with the goal to be able to treat all the PDEs with non-constant coefficients. And then for the hyper-coercivity on the whole real line, I explained the splitting between the low and the high modes leading in the end to an algebraic decay rate. So the story with the pseudo differential Lyapunov functional is included in this paper here and the story with the hyper-coercivity on the whole real line is included among many other aspects in this long second paper here. I think time is long up so thanks for being patient and thanks for your attention. Thank you very much. Are there any questions? May I ask one short question? Let's stay with the finite 2D system. It seems that you have a very rigid structure and a natural question is how much can you perturb it by something nonlinear? So do you have an idea of what would be the abstractions if you put a nonlinear perturbation to this linear ODE system which of course has the right behavior at the origin, the right behavior at infinity and is of course sufficiently small? So yes and no. Not on the level of the nonlinear ODE. This is in one of the programs of the things to do next but my yes part concerns nonlinear BGK equations that we started with Eric Harlin and Franzach Leitner. So the way we dealt with nonlinearities there was to linearize the PDE around the steady state and then to treat the linearized problem with the strategy that I've discussed here. And then in the final step taking into account the nonlinear perturbations and show at least locally around the steady state that they are smaller than what can be controlled by your exponential decay rate. So this was the first examples where we were able to deal with nonlinearities in a local way. But what you mentioned to include the nonlinearity in the ODE is one thing that we want to discuss or analyze next. Let me put it over to you again. They were very long.
|
We are concerned with deriving sharp exponential decayestimates (i.e. with maximum rate and minimum multiplicative constant )for linear, hypocoercive evolution equations. Using a modal decomposition ofthe model allows to assemble a Lyapunov functional using Lyapunov matrixinequalities for each Fourier mode.We shall illustrate the approach on the 1D Goldstein-Taylor model, a2-velocity transport-relaxation equation. On the torus the lowest Fouriermodes determine the spectral gap of the whole equation inL2. By contrast,on the whole real line the Goldstein-Taylor model does not have a spectralgap, since the decay rate of the Fourier modes approaches zero in the smallmode limit. Hence, the decay is reduced to algebraic. In the final part of the talk we consider the Goldstein-Taylor model withnon-constant relaxation rate, which is hence not amenable to a modal decom-position. In this case we construct a Lyapunov functional of pseudodifferen-tial nature, one that is motivated by the modal analysis in the constant case.The robustness of this approach is illustrated on a multi-velocity Goldstein-Taylor model, yielding explicit rates of convergence to the equilibrium.This is joint work with J. Dolbeault, A. Einav, C. Schmeiser, B. Signorello, and T. Wöhrer. -----------------------------------------------------------------------------References [1] A. Arnold, A. Einav, B. Signorello, T. W ̈ohrer: Large-time convergenceof the non-homogeneous Goldstein-Taylor equation, J. Stat. Phys. 182(2021) 41.[2] A. Arnold, J. Dolbeault, C. Schmeiser, T. W ̈ohrer: Sharpening of decayrates in Fourier based hypocoercivity methods, To appear in INdAMproceedings (2021).
|
10.5446/53486 (DOI)
|
Thank you very much. So thanks to the organizers, thanks to Xi and Mihai for organizing this beautiful conference. I'm very sorry that I cannot be in Marseille, but anyway, so it's already very nice to have a conference like this when you're locked down in Paris, even though locked down in a week since. All right, so what I want to report on today is a recent result with Maya Pierre-Gualdany, Cyrille Lambert and Alexi Vasseur on partial regularity for the Landau equation with collo-interaction. And so this is going to appear in the analysis of Ecole Normale Supérieure. All right, so here is the space homogeneous Landau equation with unknown f, the distribution function, that's the function of time and velocity. So the space variable is removed. So f, t and v is non-negative function and the Landau equation, which is probably familiar to most people working in kinetic equations here. So recalled for convenience is d by dt of f equals to the divergence in v of the integral of the R3 of A of v minus w grad v minus grad w of f of tv, f of tw, dw. So you integrate for w running through R3 and this equation is posed on the whole of R3. So here A of z is proportional to the Hessian of the norm of z, the Euclidean norm of z. So in other words, if you compute what it is, so if you put the coefficient to be 1 over 8 pi, so it's 1 over 8 pi the norm of z times pi of z, where pi of z is the orthogonal projection on the orthogonal of the line span by z. So you can equivalently recast this equation in non-conservative form in which case it reads d by dt of f of t and v equals to some diffusion matrix which you obtain by taking the convolution in v of the matrix Aij of z with f, so you take the convolution in v and this multiplies dvi dvj of f of tv and there is a term which is precisely f of t and v squared. The fact that here you have a purely local term is characteristic of the Coulomb interaction. So it's because when you take the second derivative of this, you get the direct measure at z equals 0. So here you have a purely local term which is f squared and of course this is a dangerous term. So if you have here a constant diffusion matrix, this is a semi-linear heat equation, we know this blow up in finite time. But here if you know of course this will promote blow up, but if blow up builds up, I mean if f increases, of course this will make the diffusion matrix increase as well and you can hope that the diffusion matrix will offset the effect of the f squared terms here. So in other words you can hope for a balance between the f squared term here and the diffusion term there. But nevertheless there is an open question as to whether there is global existence of classical solutions or finite time blow up for the Cauchy problem for the Landau equation set on R3. So however in the late 1990s Cédric Villanille came up with a good notion of weak global solutions defined therefore for all time. He called them H solutions because the Boltzmann H theorem plays a big role in the existence of the solution itself. So an H solution in the terminology introduced by Cédric is a continuous function with values and distributions. So it's non-negative function, already it's a measure. And it's also L1 in time with values in the weighted space L1 minus 1 over 3. So already L1 minus 1 over 3 refers to this family of weighted space. What I call LP minus k, well LPk sorry, LPk, so function g is an LPk if g modulus to the p is integrable with the weight 1 plus v squared to the k over 2. 
And you have this formula for the norm. So here it means that you are integrable with the weight proportional to 1 over 1 plus norm of v. So it's a function in this functional space which satisfies conservation of mass, momentum and energy with respect to the initial data and which satisfies the H inequality, the fact that the H function at time t is less than or equal to the H function initially. And the equation is satisfied in the following sense. The weak formulation of the Landau equation is the integral of f in of v times phi of 0 and v dv. That's the integral from 0 to t of the integral in v of f times dt phi is equal to this term that involves the entropy dissipation integrand that you find in the dissipation formula for the H function here. So here you have the integral from 0 to t, the integral in v and w of the difference of capital phi at v and at w, where capital phi is the grad of little phi, the test function. In the product with the projection on v minus w of the integral capital F grad v minus grad w of capital F, where capital F is the square root of f of v f of w divided by v minus w and there is a phi that is floating around. So this is the weak formulation of the, this is the weak formulation which Villani proposed to define global solutions to the Landau equation. Good, nevertheless for the purpose of proving partial regularity, we need more information on the solutions but fortunately with the same procedure by which Villani constructed global H solution, the same procedure, the same approximating scheme will lead, will result, will produce what we have called suitable solution by analogy with the paper by Kafferli Konean-Wer for partial regularity on Navier-Sos. So what is a suitable solution of the Landau equation? So it's a solution which satisfies the following truncated entropy inequality. If I look at the relative entropy at the level kappa, which I define as h plus of g and kappa to be the integral of kappa times little h plus of g over kappa dv, where h plus is the function generating the h function. So it's z log z minus z minus one truncated, so truncated for z above one, right? So if you want this is z log z minus z minus one times the indicator that z is larger than one. Okay, all right, so now h plus of f times t2 with respect to kappa plus some dissipation term which involves some constant c prime e integrated between t1 and t2, the integral from t1 to t2 of the LQ norm of the grad v of f to the one over q where f is larger than kappa. So you take the L2 norm in time of this thing to the square dt should be less than or equal to the truncated entropy at time t1. And there is an additional term which is proportional to kappa times the integral from t1 to t2 and the integral of all v of f minus kappa plus. Okay, so if you take the approximation scheme proposed by Villani to construct h solutions and you pass through the limit and you pay a little more attention to the properties of the sequence, then you arrive at this inequality which is satisfied for all time in on the half line except possibly a negligible set of times, okay, which is satisfied for certain values of q which I'm going to explain in a minute and with a certain dissipation constant which I'm going to comment on in a minute also. 
All right, so okay, now partial regularity in time has to do with the fact that you want to say that the set of times where the solution possibly becomes singular, there's no longer a classical solution, whatever you want to call it, is going to be small in some sense. So here we call a regular time of f, a suitable solution on some interval of the half line, it's a time tau in that interval, such that f is in L infinity on the time interval tau minus epsilon tau for some epsilon and for all these. So here we localize only in time but this is global in velocity. So this is the set of, this is singular times, so the set of singular times, in other words, the times which are not regular times in the interval i with a note by s of fi, okay. So now if you take suitable solution to another equation on some time interval cross r3 for all positive t, I'm sorry, I mean on the half line cross r3 with some initial data f in, so the initial data has a decay to any order in V, right, so it has fast decay integrating in V, it has bounded initial entropy, right. And in that case, you can prove that the house of dimension of the set of singular times for any suitable solution which has this f initial as initial data is at most one half, okay. All right, so let me say a few words, let me explain how the proof is organized. Okay, first, first we need, we need the global existence of such suitable solutions. So as I said, the suitable solutions that are constructed by using the same approximation scheme as was used by Cedric Villani to construct H solutions. And if you do that, here's what you get, if you take any f in which is in L1, non-negative on r3, and which is integral ball against the weight of normal V to the k, which has finite entropy initially, then such an f initial will launch a suitable solution with that initial data. With a Q, so if you remember in the definition of suitable solution, there is a, there is a Lebesgue exponent in the dissipation here. So with some Q that is related to the decay in V of the initial data, so specifically the Q which you observe in the dissipation term is related to the decay by this formula here. So Q is equal to 2k over k plus 3. And the constant C prime E, that depends, that's the dissipation constant in the suitable solution inequality, depends on capital T, depends on Q, and depends functionally on f initial. Okay. And okay, there is always some negligible set of times which you don't care about. Okay. Good. So this is the existence theory for such solutions, which as I said are constructed as the, by the same approximation scheme as was used by Villani for H solutions. Now if you look at the, if you look at the entropy production term that comes from the Boltzmann H theorem, this is a non-local term, but there is a rather recent and very important observation, very important theorem by Deville-Etts, which allowed replacing this non-local dissipation integrand for the Boltzmann H theorem applied to the Lander equation by purely local terms. So here's the, here's the Deville-Etts theorem. So if you take a function f, which is integrable and has finite energy, right, so L12 means you can integrate with respect to 1 plus d squared dv, right, and which has a finite entropy. So f log f is in L1. Then if you look on the right-hand side here, this quantity is exactly the entropy dissipation integrand which appears in the Lander equation when you do the Boltzmann H theorem. Well the Deville-Etts result is that up to some constant, which you can compute explicitly. 
This is bounded below by the L2 norm of the gradient of square root of f with a weight in v, right. So this is a Fisher information if you want with the weight of the order of v to the minus 3, okay. And again, so the constant cd that appears here depends explicitly on the mass, momentum, energy, and entropy of f. And therefore if you apply it, you know, this is an inequality which is true for all f, okay, you can apply it of course at each time on a solution to the Lander equation. The corollary of this theorem in the paper by Deville-Etts, which appeared I think five years ago, is that you have propagation of moments for a Lander. So if you assume that f initial is in L1k, so it has finite k finite moments with k larger than 2, for 2 which is obvious, this is the conservation of energy. Then and if in addition f in as a finite entropy, then if f is an h solution with this f in as initial data, then f is going to be L infinity over finite time with values in L1k. So in other words, you propagate k moments in finite time, okay. So maybe if you let t go to infinity, this is not going to be bounded in that space, but for finite time you propagate k moments, okay. Good. So with this, let me look at the truncated h theorem which is used in the definition of these suitable solutions to Lander. So you want to compute d by dt of the truncated entropy truncated at the level kappa. Of course, if you don't truncate, this is a standard h theorem. If you truncate at the level kappa, there is one piece that is dissipative, which I denote by d1, which involves essentially the same dissipation integrant as would appear for the standard h theorem in the case of the Lander equation, except that here instead of having grad v of f over f minus grad w of f over f to the square, I have the grad v localized where f is larger than kappa, and here I have the grad w localized where f is larger than kappa to the square integrating in all variables. All right, so that's the dissipation term, and this is the nice part of the equation, but of course, this truncation gives rise to other terms which are not nice and which we put on the right hand side. Nevertheless, these terms, however not nice, can be computed explicitly. If you compute them, you find that you have something of the form minus the integral of a times the grad of vf, where f is larger than kappa, times the grad in w of f, where f is less than kappa. You can integrate by parts in v and w, and you let the derivatives bear on the collision kernel. If you do that, that will be the integral of minus the divergence in v, the divergence in w of the tensor, a of v minus w. In fact, if you compute it, this is equal to the Dirac measure at v equals w, and here you have f of tv minus kappa plus times kappa minus f of tw minus kappa minus. In the approximation process, this is not going to be identically the Dirac measure, but this is at least going to be non-negative. This extra term here is less than or equal to kappa, which comes from here, times the integral of f minus kappa plus. Now we find that if you want, this is a depleted non-linearity. We call that depleted non-linearity because if you return to the non-conservative form of the Landau equation, which is written here, if you multiply both sides of the equation by log of f, and if you integrate, here you would observe something that grows like f squared log of f, which would be very bad. 
But nevertheless, if you look at, if you do this computation here, then the non-linear term that comes from this Dirac measure that appears when you take the double derivative of a, it doesn't grow like f squared log of f. It grows like f, it's kappa, the integral of f minus kappa plus, which is tamed as a non-linearity compared to the entropy. So this is good news because this is going to help us controlling the equation. Unfortunately, not completely. That's why we have partial regularity and not full regularity. Good. So now, as I said, the sketch should prove a proposition one, the existence proof. Well, I mean, you truncate, you replace a collision kernel of Landau with a truncated variance. In other words, you truncate one of the z at a level n. Right? You check that it satisfies this inequality here. You use the Deville-Let's theorem to bounce the dissipation plus the integral of f minus kappa plus below by this fissure information with the weight proportional to the v to the minus three. And to remove the weight, you use the Deville-Let corollary, so in other words, the propagation of moments, to remove this decaying weight at the expense of lowering the power that appears here. So by further inequality, you control this expression here below by the gradient with respect to v of f to the one of a q, where f is larger than kappa. And here, q belongs to the interval one, two. So q is always less than two. And you have this further inequality here. And you enter that in the truncated entropy inequality here. And you get the announced inequality in the definition of the suitable solution. Good. All right. So now we have suitable solutions. How is that going to help us proving partial regularity? Well, this is done by the DeGeorgi method. I mean, actually, the first part of the DeGeorgi method, which consists in gaining L infinity out of the energy or L2 bound. So here is how it works. So you take let f be a suitable solution to the Landau equation with some coarsivity constant c prime e positive. And here I'm going to assume that q is less than two, but at least 6 fifths. You need something slightly larger than one. And specifically, you need 6 fifths. All right. So that's technical. It doesn't matter. What is important is that q should be free to approach to as much as possible. What is interesting is when q is very near two. Okay. Well, so for such an f, then there exists an eta naught, which depends on q and on that constant c prime e positive, such that if the entropy of f truncated at the level of one half integrated in time between, say, one eighth and one is less than eta naught, then f is going to be bounded by two, right, is going to be bounded by two on a smaller time interval, say on one half one cross R3. So for time between one half and one and for all v. Almost everywhere. Good. So how does that work? Well, I mean, this is a standard, this is a standard, the Georgie procedure for power equation. So you pick a sequence of times which grow from one fourth to one half. You pick a sequence of levels which grow from one to two. And then you replace, you replace the entropy by this nonlinearity here, which if you want is comparable to the entropy. So you look at fk plus of T and v to be mu of f to the one of a q minus kappa sub k to the one of a q positive part. 
And out of this, you construct the quantity ak, which is the supremum of the integral of fk plus to the power q dv for T between tk and one and the integral from tk to one of the grad v of fk plus in lq norm in velocity, you raise that to the square and you integrate in t. Okay, so this is the quantity ak, which on which you apply the Georgie procedure. So you control this by using the truncated entropy inequality, which is characteristic of suitable solutions. And you can show that this ak is at a size and inequality of the form, ak plus one controlled by some constants. There is something that grows exponentially fast, some lambda to the k. What is important is that you have ak to the beta and here beta is larger than one. So provided that a naught is small enough, then ak tends to zero as k tends to plus infinity. So you control this a naught by the truncated entropy and you conclude by Fatou Lemma that since ak tends to zero as k tends to plus infinity, this tells you exactly that f is going to be less than or equal to two on that interval here corresponding to the level at which you truncate for k going to infinity here and the interval that you obtain for k going to infinity there. Okay, very good. So that's the standard Georgie arguments, but in the language of the lambda equation. All right, so now this is not enough for partial regularity and we're going to now apply another argument but at the same nature, but now we have truncated and zoomed on the values of f, on the set of values of f. This is absolutely characteristic of the Georgie method to look at the level sets of f. Now we're going to do some scaling on the function on the solution to the lambda equation. And so here's that first, suppose that you have f as suitable solution to the lambda equation on some interval 0, 1 say, and with an exponent q, again close enough to 2 by I mean here, here q has to be just between four thirds and two four thirds is higher than six fifths. But okay, so all right, so but still less than two. With this q define gamma to be five q minus six over two q minus two. What I'm saying is that there exists parameter eta one, which depends on q and on the dissipation constants in the truncated entropy inequality. And there exists a delta one, which is zero and one such that if the lim soup of the integral of the Lq norm to the square of the grad V of f to the one over q, where f is larger than epsilon to the minus gamma, which you integrate between one minus epsilon to the gamma and one, right, which you multiply by epsilon to the gamma minus three. So if you assume that this lim soup is less than eta one, then f is going to be an infinity on the strip one minus delta one one cross R three. Now, if you have this proposition proposition three and vitally type covering arguments on the half line, it's classical to deduce from that, that if you look at the house of measure three minus gamma over gamma dimensional house of measure of the set of singular times, this is going to be finite. Okay, three minus gamma over gamma. This is equal to q over five q minus six if you do the arithmetic here. And the main theorem will follow by translation. Okay, because here I just I have localized in a subinterval not a half line and letting q going to two, right. So I remind you that q cannot be equal to two because I've used this, the devilette propagation of moments to remove the weights in the in the Fisher information, the vanishing weight at infinity. So I've used a little bit of q. 
So q cannot be exactly equal to two. But nevertheless, you can let q going to two converge to q to two. And for q equals two, you see immediately that this is equal to one half again. So at the end of the day, I prove that the house of measure at any dimension less than one half of the singular set is finite. So that gives me that gives me the the house of dimension as an answer. All right, so how does one prove this? Well, first you do a scaling, right, like, like so. So you introduce fn of t and v to be epsilon n to the gamma f of one plus epsilon to the gamma t minus one and epsilon v epsilon and two to the minus n. So this scaling here will give you a suitable solution to the lambda equation. This preserves the lambda equation. What's important to give you a suitable solution to the lambda equation with the same creativity constant. That's the thing that is absolutely crucial in this business. You don't you don't change. You don't touch the the constant C prime E and okay, so then you define capital Fn in terms of little fn was the same nonlinearity mu that sort of replaces the the F log F by the mean of R and R square. Right. Okay, so that's not very important. Anyway, so this assumption to the fact that this limb soup is less than eta one tells you that there exists a capital N large enough such that this quantity here is going to be less than more than eta one say eight eta one. Okay, good. So then with this, you use the order inequality and the sub-elephant equality as in the proof of proposition two. You isolate the term grad V of fn plus one in L two in time Q and V, which is a folder eta one. And you show that if you look at xm, there's going to be the soup of integral of fn plus m to the Q for T between one half and one here. You don't shrink the interval. Interval is a waste. One half one. I mean in terms of capital F. And you show that this quantity here satisfies a recursion inequality of this form, right? Xm plus one less than rho times the max of one and xm to the alpha plus the max of one and xm minus one to the alpha. You start from initial conditions x naught and x one less than or equal to capital M. So capital M is going to be a large enough power of two. So the M is say the first N for which this is going to be less than eight eta one. Here alpha is going to be Q over three and rho is some constant depending on Q eta one to the Q over two. Okay. Whatever. Here Q is less than two. So the alpha is less than one. So this is the opposite of the traditional, the Georgie argument. So when you iterate this inequality here, there's an easy induction that tells you that at some point all the xm are going to be below one. Okay. All right. So x2m is going to be less than or equal to the max of two rho to the one minus alpha to the M divided by one minus alpha M to the alpha to the M. Alpha is less than or equal to one. So that converges to zero. So at the end of the day, you find that for some M naught, this is going to be small compared to one. And now with this M naught, you scale back to the original variables and you find that F capital N plus M naught plus three, if I remember, satisfy the assumption in proposition two. In other words, satisfies the fact that the truncated entropy at the level one-half on the time interval one-eighths one is less than eta naught. And then you find the N infinity bound which you need, which you want. Okay. Well, that's it. All right. So final remarks. 
Well, I mean, this is a partial regularity result in the same style as the Le Réthier theorem for Navier-Stokes. Right? I mean, the Le Réthier paper for Navier-Stokes, you already find the fact, it's not written like this in this paper, but you already find the fact that a half-storey dimension of singular times for Le Réthier solution on Navier-Stokes is the most one-half. Well, in fact, the Devillein, if you look at the Devillein theorem, it puts along that equation the same class as 3D Navier-Stokes in terms of the back exponents, right? For this weight, one plus V to the minus 3, which could be important or not. But I don't know. Anyway, so if you look at Navier-Stokes, Le Réthier retails you that the solution is L infinity in time with values in L2 and X, with grad U, which is L2 in time and X. So here, if you look at Langdorff, square root of F is L infinity in time with values in L2 and V. So that's the conservation of us. And the Devillein theorem tells you that the grad V of the square root F is L2 in time and velocity except for this weight. All right. But if you, yeah, apart from the weight, this is very much in the same number. These are very much the same exponents. So it's natural to ask whether you have partial regularity in T and V in the same style as the Kaffareli and Nihon-Bert. Why not? I don't know. But more generally, you could ask yourself whether you have conditional regularity in the style of the papers by Serrin and all subsequent papers where you assume some local Le Bag exponents above a certain critical threshold. And out of this, you deduce regularity. So I mean, if you take P equals infinity, so the Serrin criterion, if you copied for square root of F would be exactly this. So I don't know if this is true or not. This is only true for P equals infinity and K large number 5. So this is a paper by Sylveste in 2017. So it's also a result by Guadagnan and Gilan in 2016. And there's recently a result in that direction by my student, Emmanuel Ben-Carrot. Okay. Thank you very much. I think Renjun has some questions in the chat. I think he says he's referring to the stability in the nonlinear case. Just did you have any comments on it? Renjun, do I am asking on the chat? I was in it a question for Jean. Yeah. Not for... It was a question for Jan. I thought... It's a very strange problem. Sorry. It's a long one. So... Someone else is typing some questions. Yeah. Samir is typing. Samir is typing. Yeah, I don't know. I didn't see... I thought that one was new. I didn't just realize now. No, well... Well, maybe there's some stability for... Yeah. Yeah. Samir wrote there, yes? Yeah. So... Well, unfortunately not because... I mean, with this technique... Hi, Samir. So with this technique... You know, this technique, somehow you truncate the velocity. You get rid of the weight, okay? If you... Oh my God, I did the wrong thing. If you return... So where is that? So that's, I think, here. You see, at this step, you remove... You replace the weight in V as appears in the... In the devilette inequality. You trade that for the bag... The bag exponents slightly less than two. For some gradient of F to the one of a Q. Okay, Q is slightly less than two. So here... I mean, if you change the... If you change the potential, what will change is this weight, typically. Right? But this weight, you throw it away by this procedure of the propagation of moments. So I doubt that with this argument... 
I mean, I doubt that this argument, you will see the difference between the potential that will be less singular than Q. Of course, that will be a very nice question. We know that if the exponent is larger than minus two, I think no problem at all. You propagate regular IT and all that. Exactly. I think there's the answer by the way that... But it's not the same argument. This is a totally different argument. Right? So unfortunately, these partial regularity techniques will not see the difference between Coulomb and possibly nicer potentials. Right? For instance, we don't know what happens between minus two and minus three. I mean, probably this is the same as Coulomb period.
|
Whether there is global regularity or finite time blow-up forthe space homogeneous Landau equation with Coulomb potential is a longstanding open problem in the mathematical analysis of kinetic models. Thistalk shows that the Hausdorff dimension of the set of singular times of theglobal weak solutions obtained by Villanis procedure is at most 1/2. (Workin collaboration with M.P. Gualdani, C. Imbert and A. Vasseur)
|
10.5446/53489 (DOI)
|
It's a pleasure to present here, unfortunately, in our virtual. I hope everything worked out. So what I'm going to talk about is joint work with my now postdoc, Stefan Gaster, with Stilian Aachen, and former postdoc of mine, who is now in Tsinghua University. And some references in the end, but if there are any further questions on the references, please hesitate to contact me afterwards. So what is my interest? So in principle, what we wanted to do is we want to ask a control question, and in particular, we want, we are interested in feedback control of typically nonlinear hyperbolic systems, or also kinetic equations with a variety of applications like stabilization of gas flow in pipelines or stabilization of water and so on. So there are plenty of these things. And typically, the approach which has been used is that one defines a suitable the upper of function and shows that perturbations decay exponentially. And I want to show today how to apply such a framework or to design such a functionals in the setup of linear kinetic equation, which also have a relaxation term. And what I think maybe what is one outcome of this is that in a deterministic case, we are not able to control this in this exponentially fast way. But if we introduce a certain uncertainty in the relaxation rate, then there is a chance to get this. And so far, we have not transferred this yet to a real feedback control law, but today I want to show you what kind of techniques we use to stabilize the dynamics, at least on the full space and what kind in particular what kind of functional we used and how this approach worked. But in order to set this a little bit in the framework, I have now actually one and two slides on the general background. So for this stabilization has been discussed a lot by Jean-Michel Coran and his co-workers. And I have my wanted, even though it's very simple here, but I wanted to show why this gap in a function is interesting and how this can relate to feedback laws. And so the basic problem, if you boil it down to the absolute necessity, is the following setup. So here's a transport equation, you also scale and I have some remarks on the later one and say A for now is positive, you have two boundaries, you can of course control the last one. And the idea was, and as I said, there are many references that you can take for example, look in this book, the idea is to introduce a Lyapunov function with a certain weight and take the weight at a true norm. And if you multiply and integrate by parts, you'll see that if the boundary is dissipative, if this kappa is less than one, then in fact this Lyapunov function decays exponentially fast. And this result you can use of course to stabilize dynamics. So if you have a nonlinear dynamics, you linearize around your desired point and then you use a cross function like this and this Lyapunov function introduce certain conditions on this boundary, for example. So that perturbations of a state are damped out really fast. And this basic setup has been extended of course a lot and you can have more complicated boundary conditions, nonlinear dynamics, you can have systems, you can have networks, you can have uncertainty. However, one point which I want to stretch is the following. So usually of course the control of the nonlinear system is interesting and the way it works with this approach is by linearization. So if you have something nonlinear, then you can essentially control in a neighborhood of this linearization point. 
And of course the natural question is if you think of this as a nonlinear hyperbolic balance law, why don't I approximate this with a kinetic equation and try to find control or try to find the Lyapunov functions which are also stable in the limit and which give me some feedback control also in the limit of small relaxation. And this is how we came to this setup. And the simplest problem maybe two velocity relaxation model would be the following. So you have two components, one is transported from left to right, the other from right to left and you have a linear source term here with a small parameter epsilon which you want to drive to zero. And of course there is a global steady state for this problem and you have a transport like this. So sorry, this should be dx of course. So there's a dx missing here, it's really transport and there is this source term, everything is linear. If you would now apply this theory which I had on the previous slide, you will see that the cost function we have before will not decay as no way because the source term doesn't have the proper sign, there is not dissipation with this one. However, you can also already see from the notation I chose here, this is kind of what we want to apply. So in the setup of these kind of equations you have of course this hypo-coessivity framework which would give you an estimate of the solution deviating in some norm which I have on the next slide from the global steady state in terms of an exponential factor constant which also depends on this rate epsilon plus the distance of the initial data. However, if I look at this from the perspective of also having something in the limit epsilon to zero, then you can see for this small problem you can explicitly compute all of these things and I have here a small picture, I actually computed these values and I look now the blue line is the k of epsilon and what you see is when epsilon tends to zero, well the k of epsilon tends also to zero and so you lose your nice property of exponentially damping out possibly perturbations in the limit. And so the question is how can now, so first I will now introduce maybe the hypo-coessivity framework but only shortly just to set up the notation and then I will discuss a little bit on how to regain say a positive decay rate for an exponential decay of states deviating from some steady state in this linear kinetic setting with having in mind to extend this to boundary control and to also the limiting case when epsilon to zero for non-linear equations. So using this essentially a different way to approximate non-linear equations by linear assistance. So I, even though many of you here probably know this setup I want at least to have one or two slides on the notation and on the hypo-coessivity framework and I use essentially the same notation as in this paper of 2015 from Nolbo-Moron-Schmeisser. So we will look at kinetic equations which have this linear kinetic equations which have this transport term, they have the scaling here, the interesting case will be alpha equal to one, I have later slide to say what, what's how this alpha influences these rates and you have a linear source term and you have initial data which is always called F zero. The global equilibrium of the source term L is denoted by F, by S before and there are conditions on T and L which I have on the next page which ensure that you have for this equation an estimate like this with these exponential factors which are, which have the same notation as before. 
I said that this is related to Lyapunov functions, and indeed the estimate is obtained using a Lyapunov-type functional: a modified entropy functional, with a non-negative parameter gamma, built from a weighted L2 norm whose weight depends on the steady state, plus a mixed term involving an operator in which the projection P appears, where P projects f onto the null space of the source term. This notation is taken from that paper, and the conditions on the operators under which one obtains the stated decay are listed there as well. I will not discuss them in detail, but I will need them later, so let me state them: two coercivity properties, a microscopic one on the level of f and a macroscopic one on the level of the projected f, plus boundedness assumptions on some auxiliary operators. Let me also point out again that in the case alpha equal to one, the acoustic scaling, one observes in general that K(epsilon) tends to zero as epsilon tends to zero. This brings me to the next slide: what are the results in the limit in general? In the parabolic scaling it has already been shown that even in the limit you get exponential decay with a positive decay rate. This result has also been extended to a random relaxation parameter sampled from a given distribution, and then you have a similar result in expectation, the expectation taken with respect to the randomness of this parameter. In the acoustic case, however, you have the same situation as in my toy problem: in general the rate vanishes in the limit, and this is what we want to relax, at least conceptually. So what I am going to present is of course a weaker result, but in the acoustic scaling, and the idea is to modify the problem slightly in order to regain exponential convergence, in a weaker sense, also in the limit. The idea is the following: we take the acoustic scaling, where the factor one over epsilon sits in front of the source term, and we replace this one over epsilon by a parametric uncertainty, called xi, plus some small positive number eta. We consider the resulting equation, which is now a random equation: f depends on the additional parametric uncertainty xi, and we mimic the case epsilon to zero by allowing the realizations of xi to tend to infinity, which simply means that xi has unbounded support. The small eta is needed because xi can also have realizations close to zero, and if eta were zero we would then completely lose the influence of the source term, which we need in order to recover the nice limit; therefore we take a deterministic positive eta and a xi with unbounded support. Our assumption is still, as in the parabolic scaling, that the equilibrium is deterministic and that the initial data is deterministic as well.
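For readers who want the shape of the modified entropy functional referred to at the start of this part, here is a sketch in the style of Dolbeault-Mouhot-Schmeiser; the precise weight and the admissible range of gamma are assumptions here and should be checked against the paper.

```latex
% Sketch of the modified entropy functional in the style of
% Dolbeault--Mouhot--Schmeiser (2015); the weight and the admissible size of
% \gamma are assumptions and should be checked against the original paper.
\[
H[f] = \tfrac12\,\|f - F\|^2 + \gamma\,\big\langle A\,(f - F),\, f - F \big\rangle,
\qquad
A = \big(1 + (T\Pi)^*(T\Pi)\big)^{-1}(T\Pi)^*,
\]
% where \Pi projects onto the null space of L and the norm and inner product
% are taken in the weighted space L^2(dx\,dv/F).  For suitable small \gamma,
% H is equivalent to \|f-F\|^2 and satisfies dH/dt \le -K(\varepsilon)\,H along
% solutions, which is the exponential estimate quoted above.
```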
The equation is now a random kinetic equation: it depends on an additional parametric uncertainty which has unbounded support. There are plenty of references on such equations; I have not listed any here, but I am happy to provide them. The question is whether we can obtain for this equation an estimate giving exponential decay. Let me comment a little on the assumptions. I said that eta has to be small but positive, in order to prevent the complete loss of the source term for realizations of xi near zero. I also said that xi has to have unbounded support. Which distributions satisfy this? I am not sure the pictures are large enough for you to see, but there are plots of realizations of xi: you can have an exponential distribution, you can have chi-square-type distributions; these are realizations of xi and of xi plus eta, where you can see the shift by eta. What we took, and this is crucial because our results depend on this choice, is a xi with a probability density parameterized by alpha-bar and beta; the precise values are not important, but with these parameters you can cover the exponential, chi-square or Erlang distributions. So we take xi distributed according to this density, and later in the proof we use particular properties of it, more precisely properties of the orthogonal polynomials associated with this density, which are the generalized Laguerre polynomials. This is a crucial point: our results really only cover noise whose distribution is of this form, because we use properties of the orthogonal polynomials of this density.
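Assuming the density is of the Gamma type indicated (covering the exponential, chi-square and Erlang examples), its orthonormal polynomials are normalized generalized Laguerre polynomials; the short sketch below builds them and checks orthonormality by Gauss-Laguerre quadrature. The shape parameter alpha is an illustrative choice.

```python
import numpy as np
from scipy.special import eval_genlaguerre, gammaln

# Gamma-type density pi(xi) = xi^alpha * exp(-xi) / Gamma(alpha+1) on (0, inf);
# its orthonormal polynomials are normalized generalized Laguerre polynomials.
alpha = 1.0                                              # illustrative shape parameter
nodes, weights = np.polynomial.laguerre.laggauss(60)     # quadrature for weight exp(-xi)
w = weights * nodes**alpha / np.exp(gammaln(alpha + 1))  # absorb xi^alpha and normalization

def phi(k, xi):
    # ||L_k^(alpha)||^2 with respect to pi equals Gamma(k+alpha+1)/(k! Gamma(alpha+1))
    norm2 = np.exp(gammaln(k + alpha + 1) - gammaln(k + 1) - gammaln(alpha + 1))
    return eval_genlaguerre(k, alpha, xi) / np.sqrt(norm2)

G = np.array([[np.sum(w * phi(i, nodes) * phi(j, nodes)) for j in range(5)]
              for i in range(5)])
print(np.round(G, 6))    # close to the 5x5 identity: the basis is orthonormal
```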
The next question one can ask is: I have replaced my small-epsilon problem by a problem with large xi, but now I have a random problem with an additional parameter, so in which sense do I want to obtain exponential decay? We do it in a similar way as in the parabolic case with random relaxation parameters: we look at the expectation of the squared deviation from the deterministic steady state, where the norm is precisely the norm of the hypocoercivity framework, a weighted L2 norm in x and v with the equilibrium F as weight. If you want to show that this quantity decays exponentially, one observation is that the equation with the additional variable is not particularly challenging by itself: it is just another transport equation with an additional phase variable, which also appears as a first argument. So the simplest approach is the non-intrusive one: you sample xi from your distribution, for each fixed xi you apply the hypocoercivity framework from before, because the operators are precisely the same as in the deterministic case, and you compute the expectations essentially by averaging. If you do this, and I have a picture on the next slide, you do not see exponential decay: you end up with the same problem as before, since the decay rate tends to zero for larger and larger realizations of xi. So this is not the preferred way. We therefore pursue the intrusive way: we use a series expansion, a generalized polynomial chaos (gPC) expansion, with respect to the polynomials orthogonal to this density. The basic idea is to substitute the series into the equation, test against the orthogonal polynomials, and obtain an enlarged system of kinetic equations for the coefficients of the series expansion. Before I show this system, let me show the numerical result on the toy problem: on the right you see no exponential decay in the non-intrusive case, where the blue curve, essentially the expectation of the decay coefficient, tends to zero; in the gPC, intrusive case you also see decay, down to quite a low value, but with a positive lower bound. So you get exponential decay in the intrusive case but not in the non-intrusive one. I want to elaborate a little on how we got this result; it is a computational result, but we also have an estimate for this lower bound k-bar. The computation is for the two-velocity model, but the theoretical results I am going to show hold under the general conditions on T and L. So what do we have to do? First, you take your deterministic system, extended by the additional uncertain variable, pass to the gPC-expanded system, that is, apply the series expansion and test, and you obtain an enlarged system.
The initial data is deterministic, so it appears only in the first component, as indicated, and then you have all the other components of the series expansion. No truncation is done yet; we have simply written this as an operator equation for the coefficients. The question is what these operators are. The bold T and the bold L come out of T and L: for T nothing changes, the bold T is just T acting on each coefficient. For L there is a small difference, because the operator is xi times L, so what you get is a multiplication by a matrix P whose entries P_ki are the integrals of the Galerkin product of xi with the basis polynomials with respect to the probability density. The properties of this P are exploited in order to show exponential decay. The other part, eta times L, simply gives the operator L again, acting on each component. In which space do we get estimates? The point is that, as I said, we use the properties of this P, and we look for solutions in a weighted space, with weights sigma_k in which the parameters alpha-bar and beta of the probability density appear, as well as the fixed parameter eta, which can be small. So we look for vectors of coefficients such that a weighted little-l2 norm, where each component again carries the norm in x and v with weight one over F, is bounded, and then we can show that the coefficients of the system satisfy a bound of the following type: the norm of f-bar in this weighted space l2_sigma is bounded in terms of the initial data, which belongs to the space and has only one nonzero component. This bound is the good part; it is linked to the particular properties of this probability density. Clearly, if a function belongs to the weighted space, it also belongs to the unweighted l2 space, and we have the corresponding inequality bounding the unweighted norm by the weighted one for all f in the smaller subspace. So one reason why we get exponential decay is that we have better control of the decay of the higher modes of the solution, compared to a Monte Carlo-type method where you just sample and have no control over the distribution of these modes. The second step is of course to check that the operators we defined still fit the hypocoercivity framework. I will not give details, but the basic result is that the conditions H1 up to H4 on the operators T and L essentially imply that the bold operator T for the coefficient system, and the operator which is not just L but L plus, let me show it again, L composed with the matrix P, also fulfill these conditions, and then one can apply the hypocoercivity result to this extended system.
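The matrix P described above can be assembled by quadrature. The following sketch, again with an illustrative Gamma-type density and a hypothetical truncation order K, shows that for this basis P is symmetric and tridiagonal, which is the structural property exploited in the estimates.

```python
import numpy as np
from scipy.special import eval_genlaguerre, gammaln

# Assemble P_{ki} = E[ xi * phi_k(xi) * phi_i(xi) ] for the orthonormal
# Laguerre-type basis of a Gamma density; alpha and the truncation order K
# are illustrative choices, not the values used in the talk.
alpha, K = 1.0, 6
nodes, weights = np.polynomial.laguerre.laggauss(80)
w = weights * nodes**alpha / np.exp(gammaln(alpha + 1))

def phi(k, xi):
    norm2 = np.exp(gammaln(k + alpha + 1) - gammaln(k + 1) - gammaln(alpha + 1))
    return eval_genlaguerre(k, alpha, xi) / np.sqrt(norm2)

P = np.array([[np.sum(w * nodes * phi(k, nodes) * phi(i, nodes)) for i in range(K)]
              for k in range(K)])
print(np.round(P, 4))
# P is symmetric and tridiagonal (three-term recurrence of the Laguerre
# polynomials), so in the Galerkin-projected system the factor xi*L couples
# only neighbouring gPC modes: the structure exploited in the decay estimates.
```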
Applying the hypocoercivity framework to the extended system then gives, after some computations, exponential decay of the expectation, with a constant kappa that is now bounded from below; this corresponds to the K(epsilon) in the earlier plot. One more remark: the decay is in the original L2-type norm; the solution bound from before was in the smaller, stronger weighted space, and we do not obtain the decay in that stronger norm. The second remark, which is crucial for the numerics, is the following: the result holds in principle for the full, untruncated system. If you want to run this numerically, you have to truncate the gPC series at some order K, and then there is an error between the truncated approximation and the infinite series. We actually have control on this error, again because we know precisely the distribution of the original xi. I now want to show two numerical results highlighting these two remarks. The first: I said we have exponential decay only in the L2 space, not in the stronger weighted space, and you can also see this numerically. The blue dashed line is the expected decay slope and the other curve is the numerically observed decay; the observed decay stays below the expected one except at large times, where the lines cross, and you have to zoom in to see it, so numerically as well one cannot expect exponential decay in the stronger space. The second: there is a truncation error, and we can show that the error between the series truncated at order K and the full series behaves like a constant, with a power you can also specify, times the time T; so the longer you run, the worse the error gets. You see this in the first part of the figure: the truncation error over time for the toy problem, the two-velocity model, run up to time 20 and compared with a highly resolved reference solution, for truncation orders one, five, ten, fifteen and twenty; as expected, the error grows in time. On the other hand, if you are interested in the solution at a fixed time, you can fix the time and increase the truncation order, and then you see how the error decays as K grows, shown here for time 5 and time 20, with the scale going from ten to the minus one down to ten to the minus seven. So we have two competing effects: the truncation error, which grows in time but which we can control by the estimate depending on the given final time, and the exponential decay, which holds in the L2 norm but not in the weighted one. Finally, fixing the time, here time 20, I want to show that you can also observe the exponential decay numerically if the truncation order is sufficiently high.
The lines are colored by truncation order, from K equal to one in blue up to K equal to twenty in pink. As a reference we have a highly resolved result, I think with one hundred terms of the series, which is the black dashed line, and we have the Monte Carlo, or quadrature, result, and you see the exponential decay of the solution here as well. This is the system solved the intrusive way, with the corresponding number of coefficients. My time is almost up, so let me come to the summary. What we did, and you have not yet seen a feedback control law, is to extend the acoustic-scaling setting to parametric uncertainty in order to avoid the degeneration of the decay rate when epsilon tends to zero, and we recovered exponential decay in that setting. Clearly this is a weaker result in the sense that we now have unbounded noise, and a particular class of unbounded noise covering certain distributions. What is interesting is that the intrusive approach allows us to obtain these exponential decays while the non-intrusive one does not, and this is strongly related to the fact that for the intrusive approach we can get a better bound on the solution. The next step would be to use this on a bounded domain, to derive, as shown at the beginning, suitable feedback laws for the boundary, or to extend this to a nonlinear setting. The results shown here will appear in the paper referenced at the end; as I said, this is joint work with Stephan Gerster. Thank you very much for your attention; I am happy to take questions. Thank you, Michael. Any questions? Let me start with one: you mentioned that for the non-intrusive method you do not get the exponential decay; is there any intuition why, even including a stochastic collocation method? For the non-intrusive approach I do not get it because, in my opinion, what is crucial is the following: if you study the expanded system, you get better control of the space in which the solution lies. You have deterministic data and a deterministic steady state, so you can really control the higher-order modes and show that they decay sufficiently fast, and this is exploited to get the exponential decay. In the non-intrusive approach I do not have this knowledge; I just have samples drawn according to a certain probability, and I cannot get a better result that way. I would not claim it is impossible, but with a plain sampling strategy you do not see it. You mentioned Monte Carlo sampling, but what about stochastic collocation? If you do stochastic collocation, there is another point: you have to balance things. Maybe I should make this clear: the result is a result on the infinite system, so if I additionally truncate, I get an additional error, which I showed at the end and which grows in time. Doing collocation is in some sense the same as doing a series expansion truncated at an order corresponding to the number of collocation points, so with stochastic collocation you have to balance
the truncation order, or the truncation error, against the remainder, which decays. So it is not so straightforward to disentangle these two effects, but of course it should be possible in this case. The degradation comes from the post-processing, right? It is sort of an interpolation, and those steps might destroy the exponential decay. That could be, yes. To be clear about this, we have not analyzed the stochastic collocation case. It would certainly be interesting to see, but as I said, one has to balance these two effects properly in order not to deteriorate the result. In the intrusive case it is much easier, because the expanded operators retain the nice properties you want and already have. Another point is that we really make use of the fact that we have a particular noise, so the Laguerre polynomials really enter. Of course this is also respected when you do collocation, but it is not completely clear to me at the moment how to utilize it in order to prove the result in that different case.
|
We are interested in the stabilisation of linear kinetic equations for applications in e.g. closed-loop feedback control. Progress has been made in recent years on stabilisation of hyperbolic balance equations using special Lyapunov functions. However, those are not necessarily suitable for the kinetic equation. We present results on kinetic equations under uncertainties and closed loop feedback control.
|
10.5446/53490 (DOI)
|
Thank you very much, Mihai. I would like to thank all the organizers, and especially Mihai for the invitation; it is a pleasure to present these two works. In fact we have two papers in one talk. This is a collaboration with my colleague Mohammed Lemou, from Rennes like me, and with Ana Maria Luz from Brazil, and both papers deal with the HMF model. The HMF model is quite well known in this community: it is the so-called Hamiltonian Mean-Field model, a sort of simplified version of the Vlasov-Poisson system, whether one thinks of it as gravitational or plasma-like. It is a kinetic equation for a distribution function f which depends on one space variable, which is in fact an angle theta, so the space variable lives on the torus, and on a velocity v in R. You have the usual Vlasov equation, and the field is the gradient of a potential; the potential is simply obtained by integrating the spatial density rho against a cosine, so it is a convolution of rho with the cosine function. Of course you can expand this cosine, and the potential phi[f] then depends only on two quantities: the integral of rho against cosine and the integral of rho against sine. This model has been studied by many physicists, and also in our mathematical kinetic community, as a rather simple model on which methods can be tested; the fact that we are in dimension one simplifies many things. Let me briefly give the properties of HMF; they are the same as for Vlasov-Poisson. The model preserves all the integrals of functions of the distribution. It has a nonlinear energy which, in this gravitational-like convention, is the difference of two nonnegative quantities: the first is the usual kinetic energy, and the second is the potential energy, which here is very simple, since it is given just by the norm of the magnetization, the magnetization being the vector with the two components I mentioned. We also have the usual Galilean invariance. These are the basic properties. What I am interested in are steady states, and in particular spatially inhomogeneous steady states. If you have a function f(theta, v) which is a one-dimensional function F of the microscopic energy, the usual v squared over two plus the potential, then, since phi depends on f, writing this is already a nonlinear equation; but if you have such a function, obtained for instance through some fixed-point procedure, it is a steady state of the system. I will be interested in the simplest class of such steady states, namely those where the profile F is decreasing. Our main results are the following; forget the precise formula for the moment. Given a steady state f0, there is a criterion: a quantity kappa0 that you can compute from f0.
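As a small illustration of the objects just introduced, here is a sketch of how the density, the magnetization vector and the self-consistent potential are computed from a gridded distribution function; the grid sizes, the example profile and the sign convention for the potential are illustrative assumptions.

```python
import numpy as np

# Density, magnetization and self-consistent HMF potential from a gridded
# distribution f(theta, v); grid sizes, the example profile and the sign
# convention for phi are illustrative assumptions.
ntheta, nv, vmax = 128, 129, 6.0
theta = np.linspace(0.0, 2.0 * np.pi, ntheta, endpoint=False)
v = np.linspace(-vmax, vmax, nv)
dtheta, dv = theta[1] - theta[0], v[1] - v[0]

# a spatially inhomogeneous Maxwellian-like example state
f = (1.0 + 0.5 * np.cos(theta))[:, None] * np.exp(-v**2 / 2.0)[None, :]
f /= np.sum(f) * dtheta * dv                       # normalize total mass to 1

rho = np.sum(f, axis=1) * dv                       # spatial density rho(theta)
Mx = np.sum(rho * np.cos(theta)) * dtheta          # magnetization components
My = np.sum(rho * np.sin(theta)) * dtheta
phi = -(Mx * np.cos(theta) + My * np.sin(theta))   # potential, attractive sign convention
print("magnetization |M| =", np.hypot(Mx, My))
print("potential range   =", float(phi.min()), float(phi.max()))
```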
If this quantity kappa0 is less than one and F is decreasing, we prove that f0 is nonlinearly stable, in fact nonlinearly, orbitally stable. If the criterion is not satisfied, more precisely if kappa0 is strictly greater than one, f0 is linearly unstable, and in general it is also nonlinearly unstable, under additional technical assumptions. So what is the criterion? It will appear in this talk in a rather easy, and in fact rather amusing, way; we found several methods to compute this kappa0. The criterion was already known to physicists; I quote this paper by Ogawa, but other physicists have studied it as well. What we did is prove these nonlinear stability and instability properties mathematically. My talk has a very simple outline: I will spend some time on the stability result, whose techniques are very different from the ones for instability, and then turn to instability, which is done in two steps, linear and then nonlinear. Let me start with stability. The stability result uses a tool that we introduced with Mohammed Lemou and Pierre Raphaël in several papers on Vlasov-Poisson, and we adapted those papers to prove stability when kappa0 is less than one. The main tool, which is quite general and adapted to the special structure of Vlasov-Poisson, is a generalized Schwarz rearrangement. Recall the standard Schwarz rearrangement, a very well-known tool used to prove many symmetry properties in PDE and analysis. Take a general function f in L1, and forget about the variables for the moment. You can define its so-called distribution function mu_f: the measure of the set where f is greater than s. So f is a function of several variables, but mu_f is one-dimensional. You then define its pseudo-inverse f-sharp, and the Schwarz rearrangement f-star is just this f-sharp composed with a suitable Jacobian function, the measure of a certain ball, so that f-star lives in the right space of variables, the same variables as the original function. There is a precise formula for f-star, but its important property is that it is equimeasurable with f: f-star has the same measure of the level sets above each s. The second important property concerns the Vlasov equation: since the Vlasov equation conserves all the Casimirs, that is, all the integrals of functions of f, and since mu_f is built from exactly such quantities, mu of f(t) is preserved by the HMF or Vlasov-Poisson flow. The conservation of mu_f is exactly equivalent to the conservation of all the Casimirs, and consequently f-star is preserved by the flow as well. This is the first key property; the second key property will come in a minute. Let me now state our stability result, which I will give as a corollary and which is stated in a certain norm; I will comment on the underlying theorem afterwards.
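Before the stability statement, here is a small numerical illustration of the rearrangement objects defined above: the distribution function mu_f, its pseudo-inverse f-sharp, and the equimeasurability property. The example profile is arbitrary and the construction is only grid-accurate.

```python
import numpy as np

# Distribution function mu_f(s) = |{f > s}|, its pseudo-inverse f#, and an
# equimeasurability check, all on a grid, for an arbitrary example profile.
nx = nv = 200
x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
v = np.linspace(-4.0, 4.0, nv)
dA = (x[1] - x[0]) * (v[1] - v[0])                  # phase-space cell area
X, V = np.meshgrid(x, v, indexing="ij")
f = np.exp(-V**2 / 2.0) * (1.0 + 0.3 * np.cos(X))   # example profile

def mu(s):                                          # measure of the level set {f > s}
    return np.sum(f > s) * dA

s_grid = np.linspace(0.0, float(f.max()), 2000)
mu_vals = np.array([mu(s) for s in s_grid])         # nonincreasing in s

def f_sharp(m):                                     # pseudo-inverse: inf{ s : mu(s) <= m }
    idx = np.searchsorted(-mu_vals, -m)
    return s_grid[min(idx, len(s_grid) - 1)]

print("f#(mu(0.5)) ~", f_sharp(mu(0.5)), "(should be about 0.5 up to grid error)")

# Equimeasurability: sorting the cell values realizes the rearrangement, so the
# level-set measures of f and of its decreasing rearrangement coincide.
sorted_vals = np.sort(f.ravel())[::-1]
for s in (0.2, 0.5, 1.0):
    print(s, mu(s), np.sum(sorted_vals > s) * dA)
```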
We are studying weak solutions, and the natural norm associated with weak solutions is an energy-type norm: you control the difference between f at time t and the steady state in L1, with a weight of the same strength as the kinetic energy. What we prove is that if the initial data is close enough to f0 in this sense, then for all time f stays close to the same steady state up to a possible translation, which is impossible to avoid in this generality because of the Galilean invariance. This is the so-called orbital stability. The translation can be defined for a given f, but in general you cannot control it; controlling this translation is a very difficult problem. So the aim is to prove this orbital stability, and in fact the result is a consequence of functional inequalities adapted to the quantities preserved by the HMF flow. What is the typical functional inequality? We can prove, and this first statement is independent of the HMF dynamics, that if you take a steady state of this form, a function of the microscopic energy, which is more or less the form of all the steady states of HMF, then under the stability criterion the square of this norm of f minus f0 is controlled, for any f, by quantities that are preserved by the flow. This key functional inequality immediately gives orbital stability: since the controlling quantity involves the nonlinear energy, if initially f is close to f0 it is small, it is conserved by the flow, and so it remains small for all time; and, as I said, the rearrangement f-star is conserved by the flow, so that term is also constant along the HMF dynamics. This is the main theorem we proved. Without entering too much into the details of the proof, let me insist on the main steps, for those who know this kind of argument. The first step, which I will develop in a minute, is a new notion of symmetric rearrangement with respect to the microscopic energy: the previous Schwarz rearrangement was a function of the distance to the origin, and now the rearrangement will be a function of the microscopic energy. The second key property is that the Hamiltonian is monotone with respect to this rearrangement; then one proves a coercivity estimate, which is where the criterion enters, and the last part is deducing the theorem from all the other properties. So, the first step is to generalize the Schwarz rearrangement. I already explained how to construct the Schwarz rearrangement; you just change the Jacobian function. If you have any potential phi, you can define a function, denoted f-star-phi, which is equimeasurable with f and is a function of the energy, and there is a unique function of this form. We have an explicit construction, but in fact f-star-phi is again the same f-sharp as before.
It is the pseudo-inverse f-sharp composed with a Jacobian function adapted to phi, which makes it manifestly a function of the microscopic energy. The key monotonicity property is then the following algebraic formula: the nonlinear energy of f minus the nonlinear energy of f0 decomposes into three parts. The first part is simply a function of one real variable: initially you have a functional of f, but this term depends only on m, the modulus of the magnetization. The second part depends only on the rearrangements: f-star at the potential phi_f minus f0-star at the same potential is the same f-sharp taken at two different potentials, so this term is small as soon as the difference of the rearrangements is small. The third part, which is very important, is the integral of the microscopic energy multiplied by f minus its rearrangement, and this term is nonnegative by construction; it is reminiscent of the corresponding property of the standard Schwarz symmetrization. With this decomposition, and observing that this nonnegative term in fact also controls part of the difference, one can prove the theorem. The way to do it is first to study the one-dimensional function J(m), whose properties are very pleasant: its derivative vanishes at the steady state and its second derivative there is exactly one minus kappa0, so precisely when this number is strictly positive you get coercivity of J and can continue the argument. I stop here for stability; I think I have only about five minutes left, so I will not take too much time. What was interesting is that the stability criterion was already known, and the linear instability criterion was known in another form; in fact we could prove that the two are the same. Linear instability uses very different techniques. I would mainly like to quote the works of Walter Strauss, Yan Guo and Zhiwu Lin for the Vlasov-Poisson system: in that series of works, the linear instability of some classes of steady states of Vlasov-Poisson was proved. The idea is mainly to integrate along the characteristics of the linearized flow, adapted here to the pendulum. What is very simple in our setting is that the characteristics associated with the linearized equation, if you forget the perturbed potential term and keep the analogue of the free transport, are exactly the pendulum equations; the pendulum dynamics is behind the whole HMF flow, which is well known to everybody studying this equation. In a few words, to study linear instability you have to study an eigenvalue problem: you take the linearized flow and look for eigenfunctions. Using the special structure of the characteristics, you can prove that having an eigenfunction, a solution of Lf equals lambda f, is equivalent to lambda being a root of a scalar function, capital J of lambda, which you can then study. This function is explicit: it depends explicitly on the profile F that defines your steady state.
One does not know much about the function J in general, but it is quite easy to prove that J tends to one as lambda goes to plus infinity. Since J is a continuous function of lambda, it is then interesting to study its behaviour at zero, and in fact the limit of J at zero is one minus kappa0, the same kappa0. So the same criterion tells you that if one minus kappa0 is negative, you have a negative value at zero and a positive value at plus infinity, hence a root somewhere between zero and plus infinity, and this root gives a positive eigenvalue, that is, an instability: a linearly unstable steady state. That is it for linear instability. The technique we then applied to prove nonlinear instability is also adapted from the literature, with a few technical difficulties. For those who are aware of these works, I would like to quote the pioneering work by Emmanuel Grenier, which was then applied by many authors, in particular by Maxime Hauray and Daniel Han-Kwan for kinetic equations. We use the same kind of technique to prove nonlinear instability under the same criterion on kappa0, but with an additional technical assumption: the steady state has to vanish beyond some energy value smaller than the magnetization. I do not know whether there is a physical explanation for this, but I will show where we need it. One then has the usual formulation of nonlinear instability: f is a true solution of the nonlinear equation, and if at time zero it is at distance of order delta from f0, the time at which the instability is observed is of order the logarithm of one over delta, of course because we use the exponential growth given by the linear instability. The sketch of the proof, following Grenier and the other authors, is to construct an approximate solution: you start from f0, and the crucial term of the perturbation is delta times f1, where f1 is the eigenvector associated with the lambda found in the previous step. Multiplied by delta, this object solves the linearized flow but not the nonlinear one, so you add further correction terms; with only a finite number of additional terms you do not get an exact solution, but you do get an approximate one, and you can estimate the difference between the true solution starting from this data and the approximate solution f_app. To estimate how far the approximate solution drifts from the exact one, you need semigroup-type estimates on the linearized HMF flow, and it is for these estimates that we need the technical assumption: the characteristics of the pendulum are periodic, and you have to estimate their period uniformly on the support of the functions you consider. This is the specific place where the technical assumption is needed for the linearized semigroup property.
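The intermediate-value argument for J(lambda) described at the start of this part can be made mechanical once the explicit dispersion function is implemented; the sketch below uses a hypothetical stand-in for J that only reproduces the two limiting values quoted above (J tends to 1 at infinity and to 1 minus kappa0 at zero), not the actual HMF dispersion function.

```python
import numpy as np
from scipy.optimize import brentq

# Stand-in for the dispersion function J(lambda) of the talk: the placeholder
# below only reproduces the limiting values J(0+) = 1 - kappa0 and
# J(lambda) -> 1 as lambda -> infinity; the true J is an explicit integral
# depending on the steady-state profile F and is not reproduced here.
kappa0 = 1.8                                   # criterion violated: kappa0 > 1

def J(lam):
    return 1.0 - kappa0 * np.exp(-lam)         # hypothetical placeholder

# J(0+) < 0 < J(+inf) = 1, so by continuity there is a root lambda* > 0,
# which corresponds to a growing mode exp(lambda* t) of the linearized flow.
lam_star = brentq(J, 1e-8, 50.0)
print("unstable eigenvalue of the stand-in J:", lam_star)
```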
I have absolutely not enough time to develop the whole proof, but the idea is mainly to estimate all the series of remainder terms, with a few technicalities. One technicality is that we have to ensure that the initial function is nonnegative: you want a distribution function with a physical meaning. Another is that the eigenfunction given by the linear instability is not exactly the one you end up using: you have to perturb it a little, deal with complex-valued functions, and then take real parts everywhere to have functions with a physical meaning. That is, in a few words, the kind of technicalities one has to deal with. This is the conclusion, and I think I have already reached the end of my time. Thank you very much. Thank you, Florian. We have time for questions and comments. There is a question in the chat by Jeff, asking whether there is also a version of the model with a proper rotation speed. I am sorry, I do not remember; I have seen many variants. Good idea, we will check. Other questions? I have a comment: if you change the microscopic energy, would you still be able to construct a rearrangement? Yes, it is quite general. You mean if you change the kernel? You can construct a rearrangement depending on any function, not only this one; there are some hypotheses to be satisfied. There are two steps: the first step is just the pseudo-inverse, the f-sharp, and then you compose with the appropriate Jacobian depending on the function. You impose that your rearrangement depends only on one explicit function, and then you can compute it by inverting.
|
The Hamiltonian Mean-Field (HMF) model is a 1D simplified version of the gravitational Vlasov-Poisson system. I will present two recent works in collaboration with Mohammed Lemou and Ana Maria Luz. In the first one, we proved the nonlinear stability of steady states for this model, using a technique of generalized Schwarz rearrangements. To be stable, the steady state has to satisfy a criterion. If this criterion is not satisfied, some instabilities can occur: this is the topic of the second work that I will present.
|
10.5446/53491 (DOI)
|
As the final speaker, it's my duty to thank the organizers, none of whom happen to be in the room right now, so if they do show up we'll thank them later. Let's start slowly. We want to construct an R-matrix, and we saw in Eva's talk what an R-matrix is; let me just recall this. I'll start with a single vector space over a single field (the same construction will work over some collection), and there will be an operator depending on a spectral parameter u, some formal parameter. It is an R-matrix if it satisfies the Yang-Baxter equation: on the tensor product of three copies of the vector space, I can act in all pairs of factors in a certain order, or I can reverse the order of these operations, and the two agree; this is just what Eva called Reidemeister 3. I'll also impose the normalization that at u equal to infinity these operators evaluate to the identity. Now that Travis is back: as the final speaker, let me thank Travis, Peng, David, Gwyn, and Olivier in absentia for organizing a phenomenal and very stimulating conference, as well as the staff here at CIRM for keeping us well fed, comfortably lodged, and videotaped. So, back to R-matrices. I can view this R-matrix as a rational function of the spectral parameter valued in operators, or, if I regard V(u_i) as a kind of evaluation representation with parameter u_i, I can think of it as an operator on V(u_1) tensor V(u_2); we'll toggle between these two interpretations. The nice thing about having an R-matrix is that you immediately deduce lots of integrable structure: once you have an R-matrix, the quantum inverse scattering method of Faddeev, Reshetikhin, and Takhtajan produces a whole package. For example, it produces the action of a Hopf algebra, which I'll call Y, on these vector spaces and tensor products thereof, with the generators arising as matrix elements of R(u). One can also read off commutative subalgebras, which I'll call Baxter subalgebras, of Y: these arise because, if I have an endomorphism phi of my vector space that commutes with the R-matrix, then taking matrix elements of phi produces commutative subalgebras. For example, if my field is the field of rational functions in two variables and my vector space is C^2, one example is the familiar R-matrix associated to the XXX spin chain: it is a combination of the identity, which fixes the tensor product, and the permutation, which swaps the factors. The resulting Y is the Yangian of gl_2, a Hopf algebra deformation of the universal enveloping algebra of polynomial loops into gl_2, and this is a familiar object. What I want to talk about today is a certain way to obtain solutions of the Yang-Baxter equation using geometry. I'm going to give some background on the general context into which the results I'll present fit; that's quite a long story, but I'll present a very condensed version of the work of Maulik and Okounkov constructing R-matrices acting in the cohomology of Nakajima quiver varieties. A question: what would the phi be in this case? I can tell you geometrically: if you think of this as the cohomology of a Grassmannian, phi would be multiplication by Chern classes of the tautological bundle over that Grassmannian. There should be some answer algebraically, but I don't know it off the top of my head.
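To make the XXX example above tangible, here is a small numerical check of the Yang-Baxter equation for Yang's rational R-matrix on C^2 tensor C^2, in one common normalization with R at infinity equal to the identity; the value of h-bar and the spectral parameters are arbitrary choices.

```python
import numpy as np

# Yang's rational R-matrix on C^2 (x) C^2 in one common normalization,
# R(u) = (u*Id + hbar*P)/(u + hbar) with P the flip, so that R(u) -> Id as
# u -> infinity.  Below: a numerical check of the Yang-Baxter equation in the
# difference form; hbar and the spectral parameters are arbitrary.
hbar = 0.7
I2 = np.eye(2)
P = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]], dtype=float)      # flip: P(a (x) b) = b (x) a

def R(u):
    return (u * np.eye(4) + hbar * P) / (u + hbar)

def R12(u): return np.kron(R(u), I2)           # acts on factors 1,2 of (C^2)^(x)3
def R23(u): return np.kron(I2, R(u))           # acts on factors 2,3
P23 = np.kron(I2, P)
def R13(u): return P23 @ R12(u) @ P23          # conjugation moves leg 2 to leg 3

u1, u2, u3 = 1.3, 0.4, -0.9
lhs = R12(u1 - u2) @ R13(u1 - u3) @ R23(u2 - u3)
rhs = R23(u2 - u3) @ R13(u1 - u3) @ R12(u1 - u2)
print("Yang-Baxter residual:", np.max(np.abs(lhs - rhs)))   # ~ machine precision
```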
To answer the earlier question: would the Baxter subalgebra just be commutative? Isn't it just going to be the h's in the Yangian? No. Okay. So Maulik and Okounkov construct R-matrices acting in the cohomology of Nakajima quiver varieties. If you were to come here in a few months, you could get a book on this at half price in the auditorium, but for now we'll settle for a basic case. What is the vector space V going to be? It will be the torus-equivariant cohomology of the Hilbert schemes of points in the plane. (Is it appearing in Astérisque in a few months? It is appearing in Astérisque at some point; on a cosmic scale, a few months.) So we have a torus that scales the plane; it acts on the Hilbert scheme, and we take the torus-equivariant cohomology of the Hilbert scheme, combining all the Hilbert schemes for the various numbers of points together to produce an infinite-dimensional vector space V. This will be a vector space over the field k, which is the fraction field of the torus-equivariant cohomology of a point. Not only can I realize V geometrically, but I can realize its tensor products geometrically, using instanton moduli spaces: M(r,n) is the moduli space of rank r torsion-free sheaves on P^2 with a framing along the line at infinity and second Chern class n. If this seems esoteric, it is just a quiver variety. As we did for the Hilbert scheme, set M(r) to be the direct sum over all n of M(r,n); in particular M(1) is just the Hilbert scheme of points of C^2, and the r-fold tensor product of these modules is isomorphic to the equivariant cohomology of this larger moduli space, where the extra factor (C^*)^r, the torus acting on the framing, contributes the equivariant variables. So the spectral parameters are realized as equivariant variables here; the framing torus acts trivially on the Hilbert scheme itself, but you can still keep the parameter around. Maulik and Okounkov produce an R-matrix acting in this tensor product of V with itself. I'm not going to review the whole construction here, but let me say some words. There are two constructions; the second I'll say much more about, the first not as much: it uses stable envelopes, which are the same as the stable classes that appeared in Eva's talk. Let me state the salient properties one deduces from these stable envelopes. One: the Yang-Baxter equation comes almost for free once you establish some basic properties of the stable envelopes; defining stable envelopes and checking these properties takes work, but once you do that, Yang-Baxter comes for free. Two: vacuum matrix elements of R(u) are given by multiplication by Chern classes of tautological bundles, and this is classical multiplication. Let me say more precisely what I mean by two. Stable envelopes prescribe how the R-matrix acts on components of the fixed locus with respect to the framing torus, and once I unravel that definition I see that, taking the vacuum element to be a generator of the cohomology of the Hilbert scheme of zero points, that is of a point, the corresponding matrix element of R(u) is equal to a ratio of Euler classes of the tautological bundles twisted by characters of the torus. If I expand this in 1 over u, I can write it as a product over i of factors involving the Chern roots x_i of my tautological bundle.
Expanding in 1 over u, I can read off symmetric polynomials in these Chern roots, which give classical multiplication by characteristic classes of the bundle. For example, the first terms are 1 plus h-bar n over u, where h-bar is the weight of the symplectic form on C^2, that is the t_1 plus t_2 that showed up in the earlier R-matrix as well; continuing, the first Chern class shows up, and so on. The further I expand in 1 over u, the higher the Chern classes of these bundles that appear. So once we have this R-matrix, what do we get from running the quantum inverse scattering method I mentioned at the beginning? We get a Yangian acting on the cohomology of these instanton moduli spaces, and that Hopf algebra has a lot of names; one of them is the Yangian of gl(1)-hat. This Yangian shows up in all sorts of other guises and constructions. For example, Schiffmann and Vasserot showed that the Yangian constructed in this way is isomorphic to their cohomological Hall algebra; it is also the same as the affine Yangian of gl_1 defined by Feigin and Tsymbaliuk, so this algebra appears in a lot of places. (A question: just briefly, Schiffmann-Vasserot's CoHA is a Hall algebra of what kind of gadgets? It is essentially the Hall algebra of an elliptic curve.) I should also mention that if the quiver you start with is of finite type, you can run the same procedure and recover the Yangian action found by Varagnolo. One also has Baxter subalgebras. If I let V_q be the operator that acts on V by scaling the cohomology of the Hilbert scheme of n points by q to the n, then it is very easy to check that the resulting tensor product of this operator with itself commutes with the R-matrix, and it is a theorem of Maulik and Okounkov that the resulting family of commutative subalgebras coincides with the operators of quantum multiplication by Chern classes of the tautological bundle. This quantum cohomology ring showed up in Kaderina's talk: it is a deformation of the usual cohomology ring whose structure constants are given by equivariant Gromov-Witten invariants. (To be precise, I should put a 1 here.) So here is the kind of question I'd like to answer: what parts of this story can be extended beyond the setting of Nakajima varieties, or more generally beyond conical symplectic resolutions? Today let me concentrate on Hilbert schemes: what parts can be generalized there? Here I can specialize q to a complex number; for a fixed q I get an operator that generates a commutative subalgebra, and as I vary q, that corresponds to the family of subalgebras on the other side. (But this q is not the same as that q? Right, Eric is right that this q is not quite the same; maybe I should call it z. Are they related? Certainly at zero they are related: if I set this one to zero I recover classical multiplication, and if I set that one to zero I also recover classical multiplication, but away from zero I don't know off the top of my head. Maybe it makes sense to look for a formula in a neighborhood of zero? Possibly, yeah.)
Certainly this one is defined everywhere and that one has poles somewhere. So, what parts can be generalized to Hilbert schemes of points on other surfaces? One motivation for asking this question is to better understand the quantum cohomology ring of the Hilbert scheme of points on a K3 surface. In general there are expected relations between the quantum cohomology of the Hilbert scheme of points on a surface and the Donaldson-Thomas theory of threefolds fibered in these surfaces over curves, and when the surface is symplectic this correspondence is supposed to be as nice as possible. But one can't just use the Maulik-Okounkov package here, because there is no torus action on a K3 surface, so one needs some way to get around this; the whole package can't be reproduced, and I'm thinking about which parts can be. Here is the start of results in this direction. (Quantum multiplication by what? By the first Chern class of the tautological bundle. That is one element, and it generates the quantum cohomology ring; the whole Baxter subalgebra gives the whole quantum cohomology ring, and in general, for any Nakajima variety, the quantum cohomology is generated by Chern classes of the tautological bundles.) What we can do is: one, we can still construct an R-matrix for a general surface S; and two, we can modify this construction to obtain classical multiplication by Chern classes of tautological bundles associated to other line bundles on the surface. I mentioned that at z equal to zero, and at q equal to zero on the other side, you recover classical multiplication by these divisors; in general you can't expect to upgrade this to an arbitrary surface, but the z equal to zero case can still be matched. Let me explain how to do this. The setup: when I say a general S, there is a mild assumption we have to impose: either S is proper or, more generally (properness being a subcase), there is a torus acting on S whose fixed locus is proper. The reason is that I want a notion of integration over the surface; then I can integrate over S, define a pairing of two classes on the surface as minus the integral of their product, and the equivariant cohomology ring has the structure of a Frobenius algebra over the equivariant cohomology of a point. I'll drop "equivariant" from the notation and just assume everything is equivariant from now on. In particular, if the torus acts on the surface with proper fixed locus, the same is true for the Hilbert schemes, and if, as before, I group together the cohomology of all the Hilbert schemes, what I get is a Fock space for a Heisenberg algebra whose generators are labeled by cohomology classes of my surface. Let me fix the notation. For a positive integer n and a cohomology class gamma of my surface, alpha_{-n}(gamma) will be an operator from the cohomology of the Hilbert scheme of m points to the cohomology of the Hilbert scheme of m plus n points. Essentially, what I want to do is add a fat point of length n along the homology class Poincaré dual to gamma. In other words, there is a cycle in the triple product consisting of triples whose supports differ only at a single point p, with multiplicity n there.
This cycle comes with three projection maps, onto each of its factors, and if I apply the operator to a cohomology class x of the Hilbert scheme, what I do is pull back x, pull back the cohomology class gamma sitting on the surface, intersect with the fundamental class of this cycle, and finally push forward along the third projection map. Maybe I need to localize some of these cohomology groups for everything to be well defined, but that's okay. For positive n, I can define the operators either by an essentially similar construction or by taking the adjoints of these. It is a theorem of Nakajima and Grojnowski that these operators form a Heisenberg algebra: if I commute two of them past each other and i plus j is not zero, I get nothing, and otherwise I get the Heisenberg relation incorporating the pairing on the surface. But before, we weren't just looking at V_S; we were looking at evaluation representations with a spectral parameter, and fortunately we have one degree of freedom to tell us how the spectral parameter should be incorporated: the zero modes. I said n was a positive integer above; when n is zero, the zero modes will act by a scalar involving u. Now I want to upgrade this construction to an R-matrix, or rather phrase it as acting on the tensor product of two evaluation representations, and the first thing to do is to break up this tensor product in a different way. Let V_S with a plus or minus superscript be the subspace generated by the operators alpha_{-n}(gamma) tensor 1 plus, respectively minus, 1 tensor alpha_{-n}(gamma), acting on the vacuum vector of the tensor product. Then I can rewrite the tensor product V_S(u_1) tensor V_S(u_2) as the tensor product of this plus part, where the zero modes now act by u_1 plus u_2, with the minus part, where they act by u_1 minus u_2. To produce the R-matrix, the idea is to extend a second construction of the R-matrix from Maulik and Okounkov that is, I think, not as well known, but which works for a general surface. How does this construction arise? Well, when S equals C^2, what can one say? (Is that a plus or a tensor on the right side? A tensor: it is a tensor product of vector spaces over the fraction field of the equivariant cohomology of a point, an isomorphism of vector spaces; there is an algebra acting on each factor, but they mix.) The first point is that, using general properties of R-matrices, one can show that R(u) commutes with half of these operators, namely those with a plus, which tells us that R has to act only in the second factor. The second point is that any operator satisfying this first property, that is, one that can be written purely in terms of these alpha-minus operators, is determined uniquely by its vacuum matrix element. (Does it need to satisfy condition one and the Yang-Baxter relation, or just be any such operator? Any operator that commutes with these Steinberg-type correspondences, these plus operators.)
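As a purely algebraic illustration of the Heisenberg relations just quoted (not of the geometry), here is a sketch of the standard Fock-space model with a single cohomology class, so the pairing reduces to one scalar; sign and zero-mode conventions are assumptions.

```python
import sympy as sp

# Fock-space model of a Heisenberg algebra with one cohomology class, so the
# pairing <gamma, gamma'> is a single scalar c: alpha_{-n} multiplies by p_n,
# alpha_{n} acts as n*c*d/dp_n.  This only illustrates the commutation
# relations quoted above, not the geometric construction; sign and zero-mode
# conventions are assumptions.
N = 4
p = sp.symbols(f"p1:{N + 1}")                   # p1, ..., p4
c = sp.Symbol("c")                              # stands for the pairing <gamma, gamma'>

def alpha(n, expr):
    if n < 0:
        return p[-n - 1] * expr                 # creation operator
    if n > 0:
        return n * c * sp.diff(expr, p[n - 1])  # annihilation operator
    return expr                                 # zero mode (acts by a scalar, omitted)

def commutator(m, n, expr):
    return sp.expand(alpha(m, alpha(n, expr)) - alpha(n, alpha(m, expr)))

test_vector = p[0]**2 * p[1] + 3 * p[2]         # arbitrary Fock-space element
for m, n in [(1, -1), (2, -2), (1, -2), (2, 1)]:
    expected = m * c * test_vector if m + n == 0 else 0
    print((m, n), sp.simplify(commutator(m, n, test_vector) - expected) == 0)
```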
And then finally, we can pin down the exact form of R of u by computing how the stable envelopes intertwine classical multiplication by the tautological bundle on the moduli space of rank two sheaves. And what one sees in this case is that the R-matrix can be expressed as a certain reflection operator. And so one place to read about this construction is in Maulik-Okounkov's work. This construction was also used in work of Braverman, Finkelberg, and Nakajima studying the cohomology of instanton moduli spaces; there are lots of papers with these three authors, and this one is called Instanton Moduli Spaces and W-algebras. And so far, these are the only two places where I've seen this construction used. And the nice thing about this construction is that it can be adapted nicely to a general surface. And if you are careful about how you modify the construction, you can actually get a little more out of it. So let me tell you what this construction is. So a reflection operator will be a certain intertwiner of highest weight Virasoro modules. So anytime you have a Heisenberg algebra, you can use the Feigin-Fuchs construction to produce a Virasoro algebra action. And how do I do that? Well, in this particular case, I'm going to write the generators, I'm going to write the expression in terms of these minus generators here. So maybe I'll set alpha n minus of gamma to be the Heisenberg generators generating this minus part. And it's some quadratic expression in these Heisenberg generators. So here, gamma i and gamma i prime are the Kunneth components of gamma under the coproduct. So I have some kind of quadratic word. And then I can introduce a formal parameter kappa that we can think of as the weight of the cotangent bundle of our space. So this kappa here will just be a formal parameter for now. Maulik-Okounkov specialized it to be the weight of the symplectic form, but you don't actually need to do this for the construction to work, and it's important not to if you want a meaningful answer for K3 surfaces, for example. And then finally, there's some kind of correction we'd like to make to the zero mode; that's not particularly important. So it just looks like the usual way to pass from Heisenberg to Virasoro, with some kind of decorations coming from the cohomology of the surface. And these Virasoro algebras also show up geometrically when you compute the commutators of this Heisenberg algebra with the first Chern class of the tautological bundle, or equivalently intersection with the boundary. So these have some kind of geometric meaning as well. So here's a proposition that at this level of generality is essentially due to Maulik-Okounkov, but it's a pretty easy generalization of the argument for the usual Virasoro: these operators I've written down do indeed satisfy a Virasoro-type relation. So if I commute two of these past each other, I get the usual first Virasoro term, only I need to multiply the cohomology labels. And then when m plus n is not 0, I'm done; otherwise, I need to multiply by the usual factor, and then there's some expression coming from the cohomology of the surface. And it's this expression here that should be thought of as the central charge. For now, kappa is just some formal parameter; Maulik-Okounkov specialized it to be the weight of the symplectic form. I treat it, maybe you can think of this, as the formal parameter times the identity, so it just comes out of the integral.
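Schematically, the Virasoro-type relation just stated has the following shape. This is a sketch only: the quadratic Feigin-Fuchs expression for L_n(gamma), the kappa-correction, and the exact central term are on the speaker's slides and are not reproduced here, and c_S(gamma, gamma') is my placeholder for the expression coming from the cohomology of the surface, an integral over S quadratic in kappa, that plays the role of the central charge.
\[
[L_m(\gamma), L_n(\gamma')] \;=\; (m-n)\,L_{m+n}(\gamma\gamma')
\;+\; \delta_{m+n,0}\,\frac{m^3-m}{12}\; c_S(\gamma,\gamma')\cdot \mathrm{id}.
\]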
So equivalently, for example, on C2 this is just some equivariant parameter, so it comes out of the integral for free. You could also think of this as 6 kappa squared times the integral. OK. So we have the central charge; if we want to classify Virasoro modules, we also need the conformal dimension. So this second factor, generated by these minus Heisenbergs, is a lowest weight module for this Virasoro algebra generated by these L n of gamma. And to compute the conformal dimension, we need to compute how these zero modes act. Directly reading from that formula, one gets the following: there's some factor corresponding to the integral, times my vacuum, and this expression here is the conformal dimension. And once you have the central charge and conformal dimension, these completely classify lowest weight modules. In particular, we see that kappa only enters quadratically. And so in particular, if I were to define a new operator, say L n bar of gamma, that is the same as L n of gamma only with the sign of kappa flipped, so in this linear term I just flip the sign, then the resulting module should be isomorphic. That is, we have an isomorphism, and I'll call it R S minus, from this second factor to itself, given by taking a word in these Virasoro generators applied to the vacuum to the same word in these kind of right-moving generators, that is, where I flip the sign of kappa. And so I've said how this R is going to act on the second factor. On the first factor, I just want to mimic what happens in the C2 case: it acts trivially. That is to say, I want R S to be the identity on this first factor, and to act by this reflection operator on the other one; sorry, that's the plus factor here. And this is related to the reflection operator from conformal field theory; the same kind of expression shows up. So I have some R-matrix, and you can guess what the theorem should be. The theorem is that this R-matrix satisfies the Yang-Baxter equation for any S. Nice, you should put that in the Annals next. Don't say that. Thanks, Tim. And so I also made a point earlier of talking about how, if you look at the vacuum matrix elements, you can read off expansions of multiplication by the Chern class of the tautological bundle. And the same is true here: the vacuum matrix element of this R S encodes multiplication by Chern classes of tautological bundles. Is this a consequence of Lehn's result? It's not dependent on Lehn's formula, or, in some ways it is. So Smirnov has a computation of the instanton R-matrix that just passes through the stable basis and rederives Lehn's result, and once you have that, then you can do the same thing here. So somehow this is completely independent of Lehn's result; the argument is that both of these formulas are sufficiently universal that, if you use Ellingsrud-Gottsche-Lehn, they're going to be equal. And even if you want an exact formula, you also don't need to use Lehn; you can use Smirnov. And then finally, so here we have multiplication by a single tautological bundle, but over the Hilbert scheme of points on an arbitrary surface there are other tautological bundles I can pick if I have bundles on my surface. And I can modify this construction; all I need to do is modify how the zero modes act. So before, I had these zero modes act by minus u times this integral; if instead I replace this by the same expression, only shifted by the Chern class of my line bundle, I repeat the same construction to get a new operator that maybe I'll label with the line bundle.
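For reference, here is the Yang-Baxter equation asserted by the theorem, in the standard spectral-parameter form; the subscript notation is the usual one (not introduced in the talk), indicating on which two factors of a triple tensor product the operator acts.
\[
R_{12}(u_1-u_2)\,R_{13}(u_1-u_3)\,R_{23}(u_2-u_3)
\;=\;
R_{23}(u_2-u_3)\,R_{13}(u_1-u_3)\,R_{12}(u_1-u_2).
\]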
And the result doesn't satisfy Yang-Baxter anymore, but the vacuum matrix element still encodes this classical multiplication. Does it satisfy some version of Yang-Baxter? Some kind of shifted one: you just have to shift the spectral parameter by this cohomology class and you'll recover it. So it's not so far from Yang-Baxter. But if I look at the vacuum matrix elements, then again I get a formula similar to the one I wrote earlier, but this h-bar gets replaced by kappa and so on. So I get the same expansion, just with h-bar replaced by kappa, which is important if you're working over a K3 surface, where you don't want all these terms to vanish, and with my trivial bundle replaced by L. And maybe I should say: the matrix element here, of this restriction. So for the proof that this satisfies Yang-Baxter, eventually the strategy is to start with a general surface and reduce to C2, at which point you have to match up with the stable envelope construction. So one kind of interesting question to me is: is there a proof for this construction, even for C2, that doesn't use the stable basis? Here there's some kind of relation between Virasoro and Yang-Baxter, and it's kind of mysterious and I'd like to understand it better. Also, if you were paying careful attention, all this construction really used was the Frobenius algebra data of the cohomology of the surface. And so the fact that we had a surface wasn't important until I run this proof. And so the construction can be reproduced: instead of the cohomology of a surface, I can look at the cohomology of some higher dimensional variety X, or I can even look at some general skew-symmetric graded Frobenius algebra A that doesn't come from geometry at all. And you can run the same construction, and you can ask whether these R-matrices still satisfy Yang-Baxter. So I tried it for some projective spaces, and it still does; but then I realized that those Frobenius algebras can just be written in terms of Frobenius algebras of a surface, so that's not so interesting. So yeah, I don't even know an example of a skew-symmetric graded Frobenius algebra that doesn't come from the cohomology of a variety, or a specialization of the equivariant cohomology of some variety. So any thoughts on this would be interesting. So the question here is: do these still satisfy Yang-Baxter? And an answer either way would be interesting. If somehow this construction used something special about the cohomology of surfaces, that would be cool; and if it holds in general, that's also interesting to me. And then finally, does this constrain the quantum cohomology of Hilb of K3 in any meaningful way? So Oberdieck, and Oberdieck and Pixton, have some conjectures, and some of these conjectures are given by some matchings which are vaguely reminiscent of what's going on here for Virasoro. So it's possible that the resulting Yangian you get from this construction can help you understand better some quantum multiplication operators on this Hilbert scheme. But I'll stop there. Thanks for your attention. Thank you.
|
We explain how to use a Virasoro algebra to construct a solution to the Yang-Baxter equation acting in the tensor square of the cohomology of the Hilbert scheme of points on a general surface S. In the special case where the surface S is C2, the construction appears in work of Maulik and Okounkov on the quantum cohomology of symplectic resolutions and recovers their R-matrix constructed using stable envelopes.
|
10.5446/53494 (DOI)
|
So today I want to explain some big picture about the physics behind Langlands duality. The goal is that, by understanding this physics, you get some new context. And in fact, the main bulk of the work is to actually set up not just translations but a translator, so that you can use that translator in other settings as well. But today I decided to only explain the resulting translations, because the translator part is harder to follow, I think; I can discuss it more later, and even just the translations are enough to give some new conjectures. And by explaining several different versions of Langlands duality, I hope to convey the big picture better. And one thing I want to say before actually starting the talk is that this is not really just a translation, in the sense that we do see some results which are new for both mathematicians and physicists. Okay. So the aim here is to explain that Langlands duality comes from what is called S-duality in physics, and I want to explain this with examples. So this is my plan for the sections: first I want to explain what I mean by field theory in the sense of physics, and then explain this with two main examples, categorical geometric Langlands duality and symplectic duality. We have heard a lot about these sorts of topics. Then I'm going to mention a few things about some other versions of Langlands duality. And one thing I also want to say is that, because I decided to talk about the big picture, I won't touch any of the deep parts of the story. But I want to mention that in each section there is something mathematically new and physically new; I won't be discussing that part, and you can ask questions to Chris Elliott there and Justin here as well. Okay. Any questions? Do I have a clock somewhere here, or no? Okay. Thanks. So I want to start by discussing what I mean by field theory. A field theory, for me, is the following. An n-dimensional field theory has three pieces of input data: an n-dimensional manifold M, a space of fields, which is sections of a bundle, and an action functional, which is a function on this space of fields. And then classical field theory studies what is called the critical locus, also known as the solutions to the equations of motion; EOM stands for equations of motion. When I want to emphasize the dependence on S, I write Crit of S; when I want to emphasize the dependence on M, I may write EOM of M. One example is Chern-Simons theory. It's a three-dimensional theory, so I need to consider a three-manifold M, and the space of fields in this case is given by one-forms with values in a Lie algebra g. And the action functional is given by this formula. Then, for classical field theory, you need to compute this critical locus, and the critical locus is just given by the one-forms satisfying the equation that the curvature is zero. And we obtain the space of flat connections, or local systems. So that's what you care about at the classical level, starting from Chern-Simons theory and this input data. The claim is that whenever you are given an ordinary sort of field theory, this is what you care about at the classical level. Equations of motion? OK. So I want to introduce some more terminology which is going to be useful for me to use. An important piece of terminology: quantization. I want to specify what I mean today; I just want to give some examples and maybe explain more later. So the famous example is this mirror symmetry context.
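Before turning to that example, here is the Chern-Simons computation just described, written out as a sketch in standard notation; the talk only indicates the formula on a slide, and the overall normalization constant and trace form are left implicit here.
\[
S(A) \;=\; \int_M \mathrm{tr}\!\left( A \wedge dA + \tfrac{2}{3}\, A \wedge A \wedge A \right),
\qquad
\mathrm{Crit}(S) \;=\; \{\, A \;:\; F_A = dA + A \wedge A = 0 \,\},
\]
so the critical locus is the space of flat connections, i.e. local systems on M.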
In this case, people say that when you are given a Kahler manifold, you can think of this two-dimensional field theory, as I explained. This N = (2,2), you don't need to worry too much about; that's why it's called a supersymmetric field theory. It just says that it has an additional amount of symmetry, more than expected usually. And as I said, classical always means the solution space to the equations of motion. And quantization sometimes puts some additional constraints on your situation, and then it gives some other output data, which I'm going to discuss more later. Another thing I want to discuss is what is called twisting data. Again, maybe for today, I just want to discuss the output. So when you have a supersymmetric theory, there is a procedure called twisting, and this procedure makes the theory simpler. So remember, in this first example, I mentioned that given X, say a Kahler manifold, you can define this 2d N = (2,2) theory. This twisting procedure makes it simpler. And it is known that there are two distinguished ways of doing that, called the A-twist and the B-twist, and the resulting theories are called the A-model and the B-model. So when I say the A-model with target X, it is a particular two-dimensional TQFT; you can think of it that way. So what I'm saying can be summarized as follows: a 2d theory labeled by X gives two topological theories, say (X, A) and (X, B). And what I mean by a duality is an identification between two possibly different-looking QFTs. So this example of mirror symmetry is the most famous one, so I want to use that. The claim is that the theory given by (X, A), remember that was a 2d TQFT, is equivalent to another theory, (X-check, B), for some X-check. So whatever you do for a given theory, if you are doing it in a physically meaningful way at the quantum level, these should coincide; that's what the duality makes you expect. So for example, it's famously conjectured that counting curve invariants on one side is the same as some kind of Hodge theory on the other side, some periods. And another manifestation is what is called homological mirror symmetry: it says that there is an equivalence between categories, what is called the Fukaya category and the derived category of coherent sheaves on the dual manifold. But this X-check is not supposed to be unique for this X? I mean, for today's discussion, let me say unique, but I think there are some different things one can do. Any other questions? Okay, so again, you can do a lot of things for a given theory, but today we are interested in Langlands duality as it appears in geometric representation theory, which is in particular algebraic. And given that, I want to mention this table. So a supersymmetric theory, whatever it is, is a non-algebraic thing: you start with some kind of Riemannian manifold information, and if you just care about algebraic information, you don't need to know much about this supersymmetric theory itself. The idea is that the topological theory, which I claim is simpler than the original one, may capture everything you care about at the algebraic level. So you may as well just study your theory after this topological twist. Maybe I should have said that a topological theory just means that the dependence on your spacetime manifold becomes topological.
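For orientation, here is the homological mirror symmetry statement mentioned a moment ago, written in its standard schematic form for a mirror pair X and X-check; the precise hypotheses and decorations on the categories are not the point here and are omitted.
\[
\mathrm{Fuk}(X) \;\simeq\; D^b\mathrm{Coh}(X^{\vee}).
\]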
So mathematically, it is often the case that the classical topological side can be described by the subject called shifted symplectic geometry, which I won't discuss much today. But what is more important for us today is this part: of course, you can also just read off a topological quantum field theory, so a TQFT is the right thing to consider here. But somehow that's not exactly quite correct; that's why I put this quotation mark there. So I'm going to say maybe a few more words about why we need the quotation mark. Are you using the topological as an approximation of the supersymmetric, by throwing away some information or something? Correct, as a first approximation. But it's algebraic, you're saying, whereas the supersymmetric is non-algebraic? So there is much to be said here. When I say non-algebraic, I certainly mean in terms of the dependence on, say, the domain. After the topological twist, it is still not algebraic at the level of dependence on the domain, but somehow the entire moduli space tends to be more algebraic. So we are going to see some examples. Other questions? OK. So given this, I discussed classical field theory and then discussed quantization, twisting, and duality. Given what I discussed, the reasonable program would be as follows. Starting from a field theory, identify the solution moduli after the twist as an object of shifted symplectic geometry. Then try to understand the result of quantization; you'd end up with some kind of quantum information. Then understand the consequences of duality, which usually yield some conjectures, sometimes new conjectures, and try to prove them. That's one thing one can do. OK. Is the strategy clear? OK. So now I want to start the second section, if you don't have any further questions about the introduction to field theory; that was the end of the introduction to field theory. OK. So I want to say a few words about geometric Langlands. You start with a smooth proper curve C and a reductive group G; then the geometric Langlands correspondence expects an equivalence of DG categories of the following form: D-modules on Bun G of C and quasi-coherent sheaves on Flat G-check of C. Bun G of C is just the moduli of G-bundles, these are DG categories of D-modules, and Flat G-check of C is my notation for the moduli space of flat G-check bundles. So this is what is known as the best hope conjecture. And now I want to explain this Kapustin-Witten theory. So basically, this is my one-slide summary of what is usually eight hours of lectures. A one-sentence summary would be this: S-duality of 4d N = 4 supersymmetric gauge theories gives the geometric Langlands correspondence. Maybe I should say that it's not that important to understand everything I'm saying; there are a lot of parts in the story and the sections are kind of independent. And I'm going to say a lot of physics words as well, but don't be frustrated if you don't follow; it's just a story. I mean, there are a lot of stories, and that's why I want to use slides. OK. So first of all, the 4d N = 4 gauge theory is essentially fixed by the group G. If you remember, the 2d (2,2) theory was fixed by a Kahler manifold X; just like that, in this case it's fixed by the group G. And in the 2d case, I claimed there are two different twists, called the A-twist and the B-twist. In this case, there is a P1 worth of twists. So just like (X, A) or (X, B) specifies a particular 2d TQFT, in this case I can say that (G, Psi), where Psi is an element of CP1, specifies a 4d TQFT.
I mean, you need to be a little bit more precise about how you get this Psi parameter out of the twist. It's not just this parameter; it should be coupled with what is called the coupling constant. So I mean, this is, I think, correct. OK. And S-duality, which is an example of a duality, is supposed to identify two different theories: one theory, named (G, Psi), and another theory, (G-check, minus 1 over Psi). And the next claim is that upon compactification, this becomes an equivalence between two-dimensional theories. So I'm not really going to explain much about this compactification now, but again, this is a process: you start with a four-dimensional theory and you compactify along a two-dimensional manifold, then you end up with a two-dimensional theory, where 2 is computed as 4 minus 2. So if you started with an n-dimensional theory and compactified along a k-manifold, you'd end up with an (n minus k)-dimensional theory. So this was my notation for the 4d theory, and compactified along this two-manifold, you end up with a two-dimensional theory. The claim is that this two-dimensional theory has a name, as I explained before: namely, the A-model with target the cotangent bundle of Bun G, and the B-model with target the moduli space of flat connections. When you say that, this is attached to what? This is attached to the point after compactification? I mean, I don't necessarily follow any particular model of TQFT; I just explained that this 2d N = (2,2) theory is determined by a Kahler manifold, and then it has an A-twist and B-twist, and whatever you do from there should be equivalent. In particular, what you assign to a point should be equivalent, but I didn't get there. If you think in the usual 2d TQFT language, extended TQFT, in particular what you assign to a point should be equivalent. That's exactly what I'm going to do next, actually. But what I was saying here is that the A-model with target the cotangent bundle of Bun G just makes sense without thinking of a particular model. Any other questions? Because these become the A-twist, the A-model, and the B-model, these points 0 and infinity, which were just names for particular points of P1, are called the A-twist and B-twist. Then, as Travis is pointing out, you can try to identify what you assign to a point, or say the category of boundary conditions, which in the case of the A-model is called A-branes, and in the case of the B-model is called B-branes. It is known that A-branes of the cotangent bundle of something are related to D-modules on that something. That's how you see that these A-branes are like D-modules on Bun G. On the right-hand side, B-branes on something are, as I explained before, like quasi-coherent sheaves on that something, and that's how you see an equivalence like this. This is a summary of Kapustin-Witten's claim. Any other questions? These are theories that have to take something algebraic as input; this is an algebraic curve here. So that's a really good point. But Kapustin-Witten theory didn't really have any way to express dependence on the algebraic structure on C, because they think of a topological theory, a topological twist, and in a sense, by definition, you'd end up with only topological dependence on C. So that's why some experts on geometric Langlands were not really happy with it, or rather, they didn't necessarily believe that this Kapustin-Witten theory was going to give some new ideas about the original de Rham geometric Langlands.
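To record in one display the statement that the preceding discussion assembles: schematically, S-duality identifies the categories of boundary conditions of the two twisted theories, and after the identifications of A-branes with D-modules and B-branes with quasi-coherent sheaves described above, this is the best hope form of geometric Langlands stated earlier. The notation is standard rather than taken from the slides.
\[
\mathrm{D\text{-}mod}\big(\mathrm{Bun}_G(C)\big) \;\simeq\; \mathrm{QCoh}\big(\mathrm{Flat}_{G^{\vee}}(C)\big),
\]
with the left side read off from A-branes of \(T^*\mathrm{Bun}_G(C)\) and the right side from B-branes of \(\mathrm{Flat}_{G^{\vee}}(C)\).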
But I'm going to explain a bit how you can actually capture this algebraic dependence. Other questions? Looks like my clock just ran away. Okay. So I want to summarize what has happened so far and then try to explain how to think of it from a more mathematical point of view, as motivated in the introduction on field theory. So the claim is that Langlands duality is S-duality, and this categorical geometric Langlands is S-duality on the collection of boundary conditions of this Kapustin-Witten theory. And the automorphic versus Galois sides correspond to the A-twist versus the B-twist, as we have learned. And although I don't think it was explained in Kapustin-Witten, it's kind of clear that global versus local corresponds to compactification along C versus along S1. And for a given 4d N = 4 theory with gauge group G, you can just try to compute the result of any choice of these things. Say, if you take the global case, namely compactification along C, and the A-twist, and then do this quantization, then you end up with this; and the same here. And the local case is a little more technical, so I don't want to discuss it today. But you can sort of see how you get the expected equivalence of geometric Langlands. So what I'm saying is that because S-duality is an equivalence between the theory G at 0, which I denote by (G, A) now, and G-check at infinity, which I denote as (G-check, B), if you read off the same thing on both sides, say the global side, then the global side of this is D-modules on Bun G, and the global side of that becomes the other category. And that's how you see the manifestation of S-duality in this particular case. So so far it was just a story, kind of repeating what Kapustin-Witten said. And now I want to explain how you may get this in a mathematical way. So first of all, in any such theory, namely the topologically twisted 4d N = 4 theory for fixed gauge group G, you can compute the solution space to the equations of motion. And you want to have this as an algebraic object. But, as again Travis pointed out, it's not really clear why or how you can get algebraic structures, namely algebraic dependence on this curve C. Geometric Langlands is about an equivalence between these two categories, where you do remember the complex structure, or algebraic structure, of C, whereas after a topological twist you are not supposed to be getting algebraic dependence on C, sort of by definition. So the idea is that to capture this algebraic dependence, you can actually compute some other twist, so that the A-twist and B-twist can be realized as further twists of that one. So this minimal twist has more information, and there it's kind of clear that you can actually see the algebraic dependence. That's one idea. Right. So this minimal twist is called the holomorphic twist, and that means you can put it on holomorphic spacetimes. And to get Kapustin-Witten's claim, where you do compactify along C, you need to consider the special case when X is Sigma times C. So this was just the strategy. And now I want to show the result and then maybe spend some more time on this slide. So there's a lot going on; let me say a few things now. So first of all, as I said, the resulting objects are expected to be objects of shifted symplectic geometry. And these are objects of shifted symplectic geometry: this is a shifted cotangent bundle, so surely it is an object of the subject.
And having this (-1)-shifted cotangent bundle, or rather (-1)-shifted symplectic structure, is expected from what is called the classical BV formalism. I don't want to spend more time on that; it's just a version of the cotangent construction, but homologically shifted, and you take the total space. Right. And the claim I was making is that the A-twist and B-twist can be realized as further twists of this holomorphic twist. And it turns out this holomorphic twist can be written in this way. So here, Dol means the Dolbeault stack and dR means the de Rham stack, as in the theory of Simpson, like non-abelian Hodge theory. So in particular, if you take the mapping stack out of the Dolbeault stack, you get the Higgs bundle moduli; and if you take the mapping stack out of the de Rham stack, you get the flat moduli, the moduli space of flat connections. So these are just definitions. But the point I want to make here is that this holomorphic twist can be written in this way, with Dolbeault there, Dolbeault; and the A-twist and B-twist, which I claim are realized as further twists, just amount to changing one Dolbeault to de Rham. And the claim is that changing the outer one to de Rham versus changing the inner Dolbeault to de Rham corresponds exactly to the fundamental asymmetry in the geometry of Langlands. If you know the statement of geometric Langlands, somehow the direction of "flat" is a little different on the two sides: on one side you have flat connections on the curve, and on the other side you have D-modules, and a D-module is like a flat connection in the other direction. So that fundamental asymmetry is realized in this way, in the A-twist and B-twist. What is the result here? The result is that the left-hand side, which I didn't define but which can be defined, namely the equations of motion after the twist: it is a thing, and we identified what these are. These were expected to be (-1)-shifted symplectic, and we identified them to be actually (-1)-shifted cotangent bundles of something, which is a simpler case of these (-1)-shifted objects. And then, yes? What is the difference between the A and B in the first row of the table and the A and B in the second row? Right, right. So I'm getting there. But before getting there, I want to claim that this way of thinking makes clear why we are seeing the moduli space of flat connections, as opposed to, say, the character stack, and D-modules, as opposed to some kind of Fukaya category, which are not algebraic. Because... Just one thing that might also help when they ask what the result is: when you did this with Chris, you started with a supermanifold that was much bigger than anything that's written on the board, and actually went through the physical process of twisting. Right. So the supersymmetric theory is a really big object. We set up the process of twisting in this kind of algebraic framework, and then computing cuts out enough, in a way, and we end up with these nice algebraic objects. It was not algebraic at all before, but through this process of cutting things out, you end up with these algebraic objects. Could you say what this H is? I didn't get there yet; I'm going to, yes. So you already said that the second A and B... No, not yet. I'm not there; I'm somewhere here. Okay. Yeah. Can I ask a question about the first one? Absolutely. What is on the outside, the Dolbeault or the de Rham? What do you mean? You have, like, Dol-Dol, or Dol-de Rham, or de Rham-Dol. The outside ones, the subscript on the mapping. Yeah. You take a stack and you have an operation.
Then the rest is the de Rham or the Dolbeault. Yeah. I mean, Dolbeault makes sense in much bigger generality. Okay. So you take the thing without the de Rham, the one we call Dolbeault now; that's some kind of object, and then you apply an operation to it, de Rham or Dolbeault, right? I guess you can think of it that way. Yeah, that's not how you get it; that's not how you think about it. Okay. I mean, it's really that you realize there's something cut out by this process of twisting. For a definition, you can certainly make the definition: even for X itself, you can define X Dolbeault or X de Rham. OK, so I was just trying to explain why we are actually seeing this algebraic dependence. The thing is that because things are realized as a deformation of the Dolbeault stack, which is like the Higgs moduli, you really do see flat bundles as opposed to the character stack, and D-modules, because you do see the de Rham stack; and in a sense, by definition, quasi-coherent sheaves on the de Rham stack are D-modules. So you don't really need to think about the Fukaya category here. OK. And another thing I want to say is that, as I said, some version of quantization of this would give geometric Langlands, believing S-duality. So what do I mean by quantization? One can say much about that, but for now, let me just note the following. So in this case, I specialize my X to be Sigma times C to have this compactification. Maybe I can explain a bit. What's the time now, Justin? It's over. Oh, OK. Maybe. OK. Sorry. Change the mic. I need to stop at like 30 past, is that right? That would be optimal. Right. So OK. So this is my description of, let's see: when you have (X, A) or (X, B), the classical equations of motion are described in exactly the same way, like this: T star minus one of the mapping stack from Sigma to X de Rham in one case, and T star minus one of the mapping stack from Sigma de Rham to X in the other. That's the description you'd see for the A-model and B-model with target X. And by thinking of the special case when X is C times Sigma, and doing some manipulation, you end up with this description. And it is exactly saying that you actually get the A-model with target Higgs of G and the B-model with target Flat of G, as expected from Kapustin-Witten. What is Sigma? So Sigma... C is just a smooth proper curve. It doesn't appear on the right-hand side. X is the same. So if you remember, a two-dimensional theory: I didn't need to specify the two-manifold; it is defined for any two-manifold, and Sigma is that thing. So when I said... There is no Sigma on the right-hand side. That's right. So I'm saying, when I say (X, A), it is a two-dimensional theory, so you could put Sigma there; (X, A) was a two-dimensional theory, and just like that, I write it this way. And when I wrote these, I didn't put Sigma either. OK. And this way of understanding categorical geometric Langlands already gives new conjectures on categorical geometric Langlands, for any given C and G. But that's another topic; if you're curious, you can ask questions. So I think to go from the first row to the second one, you just replace X by Sigma times C. Correct. Is it just some formal trick, like? That's right, so you're kind of using an adjunction for the mapping stack, and then? You move the problem into the second factor. Right, right, essentially that. But it's a little less trivial in the A-twist case. But essentially, that's right. That's what I mean by computation.
Some manipulation. OK. So another thing I want to say is that this HT stands for holomorphic-topological. So you can actually do different twists: the A and B twists were the ones considered by Kapustin-Witten, and we considered this holomorphic twist to capture the algebraic dependence, but you can try to compute other twists. Holomorphic-topological means that the dependence on C is holomorphic and the dependence on Sigma is topological. Then you end up with, actually, B-models: a B-model with target the de Rham stack of Bun G of C, and a B-model with target Flat G of C. And given this, maybe you can believe how you see the original categorical geometric Langlands: because if you are given a B-model with target X, the category of boundary conditions you are going to study is quasi-coherent sheaves on X. So quasi-coherent sheaves on Flat G, and quasi-coherent sheaves on the de Rham stack of Bun G, which is D-modules on Bun G. And it is known that S-duality is between the holomorphic-topological twist and the holomorphic-topological twist of the dual theory. And although this is still a quantum statement, a result after quantization, it is giving what is called the classical limit of the geometric Langlands correspondence. And S-duality between this HT-A and HT-B gives the usual geometric Langlands correspondence. But we did use different twists from Kapustin-Witten. And other twists actually give what is called quantum Langlands. OK, so that was the end of section 2. 12 of 10. Thanks. Right, so symplectic duality is a conjecture about dual pairs of algebraic symplectic varieties, and there are nice examples. And people realized that it is related to what is called 3d mirror symmetry. And a 3d N = 4 theory, let me describe it just in terms of input and output data: the input data is a connected complex reductive group and a symplectic space with a Hamiltonian G-action. And today, again, I want to focus on the particular case where the symplectic space is the cotangent bundle of a linear representation. And then we have learned that the Higgs branch and Coulomb branch can be defined in this way; I think I don't need to spend much time on that here. And by Braden, Licata, Proudfoot, and Webster, and it's kind of great that everyone is here when I mention this, symplectic duality holds in this case. So what does moduli of vacua mean? The Higgs branch and Coulomb branch are parts of what is called the moduli space of vacua. That's pretty hard to describe. But again, I want to follow the strategy: namely, these are algebraic objects, so we can try to understand them after a twist. And for a TQFT, this moduli of vacua can be defined easily, just by taking the spectrum of the local operators. And the claim, again, as I said: I want to do a twist, starting from the 3d theory; yet again, there are twists called the A-twist and B-twist. And these A and B twists are set up in exactly such a way that the moduli of vacua correspond to the Coulomb branch and the Higgs branch, respectively. So the moduli of vacua was maybe the bigger thing, but somehow, by thinking of this topological twisting, which, again, I claim is cutting out some information, you end up with something that only sees the Coulomb branch, or only sees the Higgs branch. That's the claim here. And in this easy case, you can kind of find heuristics for how you see the Higgs branch and Coulomb branch. OK. But now that you have this context from physics, you can do more than just computing the moduli of vacua or local operators: namely, you can try to understand line defects.
And again, some interesting statement would come out of a duality statement. So I just want to start with the assumption that there is a nice example, and there is. So let me just say that T(G, V) has a mirror theory, which can be described again as a T(G-shriek, V-shriek). So in that case, you can compute the line defects of the theory by this shifted symplectic geometry and this geometric quantization business. That's not really even hard work, but it automatically gives a nice conjecture, made by myself and Justin Hilburn, and its properties are studied by Dimofte, Garner, and Hilburn. So this is a new conjecture already; I mean, we didn't really need much work to get there, but it seems like a nice conjecture. So also the stuff that Joel and Alex did, the Springer theory, is the left-hand side of this. OK, good to know. I have a question. Oh, I was going to ask: why is this a conjecture? Because... Is it true that you have this equations-of-motion line? Is it true that if you take quasi-coherent sheaves on those two stacks, you get the two sides? That's right. Somehow, when I say this, this is at the level of physics. And the two stacks themselves? No, if you take the two stacks themselves... No, no, they are not, obviously not. They are not. I mean, just like in geometric Langlands, they look really different. But the claim is that once you take this quantization, they become equivalent. Or mirror symmetry. Yeah, like mirror symmetry: it just looks really different. The claim here is at the level of, say, the Fukaya category, or coherent sheaves, or something like that. So it's really different at the classical level. Given a mirror pair of Calabi-Yau manifolds, you don't know any comparison, but somehow, after some sort of linearization, you get an equivalence. Yeah, like one very simple example. So you don't even have to put a vector space there: the very simplest example that you could do is to put in V being 0 and G being C-star, and then let V-shriek be 0 and G-shriek be the dual C-star. That's the simplest example of an equivalence. And this is true, and this would be telling you that D-modules on loops into the torus would be the same thing as quasi-coherent sheaves on flat C-star connections on the punctured disk. And that's a known result. But they're very different: one is loops into a torus, and the other one is flat C-star connections on a punctured disk, and those are totally different spaces. And if you take D-modules on one, that should be quasi-coherent sheaves on the other. But I'm saying D-modules are also probably coherent sheaves on the wrong side... Yeah, yeah. But I'm saying that the underlying spaces are obviously not the same: if you take a torus and take its loop space, you obviously don't get flat connections. OK. Yeah. OK. OK. Any other questions? And I was informed that the abelian case is being proved by Sasha and Dennis. OK. So, a summary: symplectic duality, in a sense, was about local operators and maybe modules thereof, and this enhanced version of symplectic duality, into which you can actually embed the original symplectic duality, is in a sense about a comparison of line operators. OK. But the title of my talk is about Langlands duality, so I'm going to explain how this is related to Langlands duality, if I have time. I think I have a couple of minutes. OK.
So again, just like Kapustin-Witten was an input for what we did for the 4d N = 4 theory, for the 3d N = 4 theory we need additional input from physicists, Gaiotto and Witten. And the claim is that a 3d N = 4 theory with flavor symmetry G, which is a version of symmetry, defines a boundary theory of this 4d N = 4 theory. And the S-duals of these boundary theories are understood. So what do I mean? S-duality is something you can do to the 4d N = 4 theory; I'm just giving a relation between the 3d N = 4 theory and this 4d N = 4 theory. Whenever you have a 3d N = 4 theory, you can think of it as a particular boundary condition, in a sense, so that you can try to apply this S-duality and see what happens. So maybe let me give some examples. And before doing that, I need to make a claim: I did take these twists at the 3d level and the 4d level, and I want to say that they are compatible. Given that, you can do some checks. So let's consider the 3d B-twisted theory, also known as Rozansky-Witten theory, with target the cotangent bundle of V. Whenever you have a holomorphic symplectic manifold, you can do that; maybe you just care about the Z2 grading. So let's just think of the cotangent bundle of V. And if this V is a representation of G, then you have this moment map, which is a G-equivariant map from T star V to g dual. And that being a G-equivariant map is exactly saying that this T star V mod G defines a boundary theory of this 4d N = 4 theory. So T star [3] of BG is standing for the 4d N = 4 theory, and T star [2] of V mod G is for the 3d N = 4 theory. This is what is called the enriched Neumann boundary condition; when V is 0, it is the Neumann boundary condition. And the claim I want to make is that, if you remember the Higgs branch and how we just defined it, it was defined as mu inverse of 0 mod G, and that is realized as follows: mu inverse of 0 is just the fiber product of T star V and 0 over g dual, and you take this mod G; you're taking mod G everywhere. That's how you get it. So what I'm saying here is, I'm thinking of this 4d theory with a boundary condition here and a boundary condition there. And if you think of, say, the Neumann boundary condition, which is BG, and this is T star [3] of BG, and this is T star [2] of V mod G, then as a result of this fiber product, which you interpret as dimensional reduction along this interval, you get a 3d theory, with target T star [2] of V mod G. So starting from this T star V theory, you get this other theory. So this is a way to construct the 3d N = 4 gauge theory, starting from the 4d N = 4 theory together with two boundary conditions, two particular boundary conditions. OK. So let me summarize what's happening here, now on the A side, or the Coulomb branch side. So by the Coulomb branch definition, you can compute this as a Hom in a certain sheaf category, and this picture was meant to be for this equality. I'm thinking of the 4d theory on S2 times an interval, with two fixed boundary conditions, say B0 and B1; in this case, as I said, Neumann and enriched Neumann. Then, from the first point of view, you reduce along this interval [0,1] with B0 and B1; those exactly give T(G, V) after the A-twist, and that's why you should be seeing the Coulomb branch. From the other point of view, you can read this in a slightly different way: this is Hom objects in this category Z of S2. In that case, these boundary conditions have a different interpretation, possibly as line operators. And that's how you see this equality.
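Stepping back for a moment: here is the Higgs branch construction invoked earlier in this passage, written out. This is the standard symplectic reduction, with mu the moment map for the G-action on T*V that was just used to define the enriched Neumann boundary condition; the notation M_Higgs is mine, not from the slides.
\[
\mathcal{M}_{\mathrm{Higgs}} \;=\; \mu^{-1}(0)\,/\,G
\;=\; \big( T^*V \times_{\mathfrak{g}^*} \{0\} \big)\,/\,G .
\]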
The right-hand side comes out of the second interpretation. And once you have this interpretation, you can try to do something more fun. I would like to mention that this was seen in the previous talk, where the same kind of thing was done. So let's see, how much time do I have? You've got seven minutes. Oh, OK. It depends on whether you want to have questions or not, because we're going to have to finish all three of these. Oh, wow. So, great timing. OK. So this symplectic duality, or this enhanced version, can be understood as such a relation. OK. Maybe let me now say more about this part; let me read these remarks. This seems to have something new to say for physics as well. And if you read off the translation and just follow the guide in some way, you are getting a description of the category for local geometric Langlands for GL(N), modulo the maybe difficult part, namely the dual group action. Sorry, what was that about? So you get a description of local geometric Langlands for GL(N), modulo, I mean, modulo is kind of the dual group action. Right. And I think this is written in your lecture notes. And I think the more fun part is actually trying to understand the relation between this symplectic duality and local geometric Langlands. And somehow we have some nice picture for understanding this enhanced version of symplectic duality using both the input of local geometric Langlands and some string theory sort of arguments, what are called the Hanany-Witten moves and things like that. And again, I'm just throwing words out there so that people can ask questions afterwards, if you're curious. No, not after this talk; maybe we can just chat more later. OK, so I meant to say a bit more about some other versions of Langlands duality. A lot of Langlands dualities can actually fit into this kind of philosophy of coming from S-duality of 4d N = 4 in some way. And this fundamental local equivalence is also an example: the claim is that it is an equivalence of line defects of certain boundary theories. Right. Could you repeat that? That was just the end of the sentence, I guess. Yes, I did. Any other questions? Why? Absolutely. OK. So the thing I was going to say is that this local geometric Langlands also admits a physical interpretation, as I hinted before. And all the boundary conditions actually define some objects there. But there is an important distinction to be made: there are two sources of objects of these two categories. One is what are called boundary conditions compactified along S1; the other is surface defects. And whenever you are given some boundary condition, you can try to identify what it is as a mathematical object, and these are the answers. And again, the S-duality of these is understood. Let me just end with this slide, I guess. And once you have these defects and boundary conditions, you can now give interpretations of these kinds of Hom objects in terms of physics. And again, that's not the end of the story; that's the beginning of the story. Once you have this new interpretation, you can do a lot more. For example, this Kazhdan-Lusztig category is the category of line defects of the Neumann theory. And this Whittaker category is, I mean, with Dylan Butson, who's my collaborator, we named the theory induced on the Nahm pole boundary condition the Whittaker theory, and the category of line defects of the Whittaker theory is this Whittaker category.
And the category of line defects of the boundary theory induced by the usual boundary condition is D-modules on the affine Grassmannian. And actually, you can do this not just at generic level. And from this observation, Frenkel and Gaiotto went on to do some interesting work using vertex algebras, and there is some kind of follow-up work with Gaiotto. And you just heard about this mirabolic Satake equivalence by these six authors; that also has an interpretation. I mean, the version the talk spent the most time on is the case when you have, let me just say the words: when you have N D3-branes, N D3-branes, and a D5-brane in between, or an NS5-brane in between, that configuration, given together with the loop rotation turned on, that's what's giving the mirabolic Satake equivalence. And if you have different numbers of branes, M and N, you get some kind of super Kac-Moody, or some kind of reductions, and that also is compatible with this sort of interpretation. OK, let me end here. Thanks. [Applause] Thank you.
|
It is believed that a certain physical duality underlies various versions of Langlands duality in its geometric incarnation. By setting up a mathematical model for the relevant physical theories, we suggest a program that enriches mathematical subjects such as geometric Langlands theory and symplectic duality. This talk is based on several works, the main parts of which are joint with Chris Elliott and with Justin Hilburn.
|
10.5446/53498 (DOI)
|
Thank you very much for the invitation. I want to talk about symplectic singularities and nilpotent orbits. Nilpotent orbits play an important role in algebraic geometry and geometric representation theory. Today I want to characterize nilpotent orbit closures, or finite coverings of nilpotent orbits. Let me start with the definition of symplectic singularities, or conical symplectic varieties. Let X be an affine variety; in this talk it is always a normal affine variety. And then the definition: the pair (X, omega) is a conical symplectic variety if the following holds. First, omega is a holomorphic symplectic form on the regular part. Second, R is graded, non-negatively graded, and of course R_0 is nothing but the constants; this means that X has a good C-star action. Third, omega is homogeneous with respect to the C-star action defined in the second condition. This means that if you take t from C-star, it gives an automorphism of X; if you pull back omega by this automorphism, it can be written as t to the power l times omega. In our case this l is a positive number, and l is called the weight of omega. And finally, for a resolution, say X-tilde, mu-star omega extends to a holomorphic two-form on the resolution X-tilde. Here you have a resolution, and omega is a two-form on the regular part, so of course you have a two-form on this Zariski open set; this condition says that mu-star omega extends to a holomorphic two-form on X-tilde. This is the final condition; this is the notion of a conical symplectic variety. Do you need to assume l is positive, or could it be otherwise? Yes; actually, if you impose this last condition, l becomes automatically positive. What about the converse direction? I don't know. So l is always positive if this holds, but the converse I don't know. So let's start with a complex Lie group, any complex Lie group, and let g be its Lie algebra. Then G acts on the dual space by the coadjoint action, and every coadjoint orbit admits a natural symplectic structure, a symplectic form, which is called the Kirillov-Kostant form. But when g is semisimple, this dual space is naturally identified with the original g by the Killing form; a coadjoint orbit corresponds to an adjoint orbit under this identification. So every adjoint orbit admits a natural symplectic form when g is semisimple. Yes. So the first example of such conical symplectic varieties is the following. Example one: this is just a nilpotent orbit closure, or rather its normalization. So let us start with a semisimple complex Lie algebra and take a nilpotent adjoint orbit. Then, by the above, it has a natural symplectic form; take its closure inside g. This is an affine variety, but it is not necessarily normal, so you take the normalization. Then the pair, O-bar-tilde together with the pullback of the Kirillov-Kostant form omega, is a conical symplectic variety. In this case the C-star action is given by the natural scaling action: here you have a natural scaling action, and this action extends to a C-star action on O-bar-tilde. Using this C-star action it becomes conical, and in this case the weight of omega is just one. This is the first example. And the second example is a slight generalization of this example.
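Before the second example, here are the defining conditions just listed, collected in one place; the notation R for the coordinate ring C[X], sigma for the C-star action, and X_reg for the regular part is mine rather than from the slides.
\[
\begin{aligned}
&\text{(i)}\ \ \omega \ \text{is a holomorphic symplectic form on } X_{\mathrm{reg}};\\
&\text{(ii)}\ \ R=\bigoplus_{i\ge 0}R_i \ \text{with}\ R_0=\mathbb{C};\\
&\text{(iii)}\ \ \sigma_t^*\,\omega = t^{\ell}\,\omega \ \text{for some } \ell \ (\text{the weight, automatically positive});\\
&\text{(iv)}\ \ \mu^*\omega \ \text{extends to a holomorphic 2-form on a resolution } \mu:\widetilde X\to X.
\end{aligned}
\]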
The second example is a finite covering of a nilpotent orbit. So let's start with g a semisimple Lie algebra, and take a complex Lie group G such that its Lie algebra is this g; we assume that G is simply connected. Take a nilpotent orbit O, and assume that we are given a G-equivariant etale covering of O, say X_0, with a G-equivariant etale map to O. Usually a nilpotent orbit has a finite fundamental group, so you can take the universal covering, and then you get such a situation. But you can compactify this situation, as follows: here you have the closure O-bar, and you have a function field, the extension of the function field; in the function field of X_0 you take the normalization of this affine variety, and you get an affine variety X. Now, of course, this is also G-equivariant; X is affine over O-bar, and etale in codimension one, in codimension one, not necessarily etale everywhere, but it is an affine variety. Yes, such varieties were extensively studied by Brylinski and Kostant. And here I want to show that this is a conical symplectic variety. But you have a C-star action, of course: you have the scaling C-star action downstairs, which we call sigma; sigma is just the scaling action. But the scaling C-star action sigma does not extend to a C-star action on X_0. I want to extend it, but this is impossible. For example, consider the minimal nilpotent orbit in the symplectic Lie algebra, something like that, and its orbit closure; this is nothing but a finite quotient of an affine space by Z_2, and in this case the scaling C-star action doesn't extend to a C-star action on the affine space. But Brylinski and Kostant show the following fact, the Lemma of Brylinski-Kostant. Corresponding to this C-star action you have an action map from C-star times O to O. You can take the double covering of C-star, that is, precompose sigma with the squaring map, and we call the composite tau. Tau gives another, new C-star action. Brylinski and Kostant show that this C-star action tau always extends to a C-star action on X_0, and so it also extends to a C-star action on X. So C-star acts on X by tau; they call this the right C-star action. By using this C-star action, X becomes a conical symplectic variety. The symplectic form: here you have the symplectic form omega_KK, and you have its pullback, which is nothing but omega; the pair (X, omega) becomes a conical symplectic variety. But with the original C-star action this two-form has weight one, whereas for the new C-star action tau the weight of the symplectic form is two. So of course the first example is a special case of the second example, because if you take O-bar and take its normalization, that is nothing but the first example. So anyway, I want to characterize such conical symplectic varieties among all conical symplectic varieties, and this is the topic of this talk. So let us start. Let's start with an arbitrary conical symplectic variety; of course R is graded, positively graded. And fix a positive integer j; we define R^(j) to be the subring generated by R_j, the subring generated by the degree-j part.
Now let X = Spec R be an arbitrary conical symplectic variety, so R is positively graded, and fix a positive integer j. We define R^(j) to be the subring of R generated by the degree-j part R_j; by definition it is the image of the symmetric algebra on R_j inside R. This is our notation. Our main theorem is then the following. The first theorem gives the characterization of nilpotent orbit closures, or rather of normalizations of nilpotent orbit closures. Theorem 1: the following are equivalent. First, R is the normalization of R^(1) and the weight of ω is one; normalization means that R^(1) and R have the same quotient field and R is the integral closure of R^(1) inside it, so geometrically Spec R is the normalization of the affine variety Spec R^(1). Second, (X, ω) is isomorphic to (O-tilde, ω_KK) for the normalization O-tilde of a nilpotent orbit closure. So if you start with an arbitrary conical symplectic variety such that R is the normalization of R^(1) and the weight is one, then it is nothing but the normalization of a nilpotent orbit closure; that is the first theorem. The second theorem gives the characterization of the varieties of Example 2; I call such a variety a conical symplectic variety of type BK, for Brylinski-Kostant. (Question: wouldn't it be more natural to take the subring generated by everything of degree at most j, not only degree j? If there is something in degree one which is not in there, then using R^(2) you will miss everything in degree one. Answer: usually the degrees only increase because this is an algebra; but yes, in our case this is exactly the assumption. In the nilpotent orbit closure case the coordinate ring is essentially generated by the degree-one part, while in the second case we use the new, corrected C*-action.) Theorem 2: the following are equivalent. First, R is a finite R^(2)-module and the weight of ω is two. Second, (X, ω) is of type BK. This is our characterization. Let me explain the rough idea of the proofs. The key notion is the Poisson structure. Go back to the original setup: we have ω, of some weight ℓ, which is positive, and ω determines a symplectic structure on the regular part of X, so the coordinate ring R becomes a Poisson C-algebra. Since the bracket comes from ω and ω has weight ℓ, the Poisson bracket has degree minus ℓ: the bracket of R_i and R_j lands in R_{i+j-ℓ}. In particular the degree-ℓ part R_ℓ becomes a Lie algebra under the Poisson bracket. This is an important remark.
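Here, for reference, are the graded Poisson structure and the two theorems just stated, written out in the talk's notation; R^(j) is the subring generated by R_j, and wt(ω) denotes the weight.

\[
\begin{aligned}
&\{R_i,R_j\}\subset R_{i+j-\ell}\quad(\text{the bracket has degree }-\ell),\ \text{so } R_\ell\ \text{is a Lie algebra;}\\
&\textbf{Theorem 1.}\ \ (X,\omega)\cong(\widetilde{O},\omega_{KK})\ \text{for a nilpotent orbit }O\ \text{in a semisimple }\mathfrak{g}
\ \Longleftrightarrow\ R=\text{normalization of }R^{(1)}\ \text{and}\ \operatorname{wt}(\omega)=1;\\
&\textbf{Theorem 2.}\ \ (X,\omega)\ \text{is of type BK}
\ \Longleftrightarrow\ R\ \text{is a finite }R^{(2)}\text{-module and}\ \operatorname{wt}(\omega)=2.
\end{aligned}
\]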
Let me start with the proof of Theorem 1, the implication from the algebraic condition to the geometric one. Start with X = Spec R. The assumption is that R contains R^(1) and is nothing but its normalization, so we have a map from X to Z := Spec R^(1); this is the normalization map, a finite birational morphism. I want to prove that Z is a nilpotent orbit closure. There is a natural surjection from the symmetric algebra on the degree-one part R_1 onto R^(1), and correspondingly a closed embedding of Z into the spectrum of that symmetric algebra. Put g := R_1; since in our case ℓ is one, this degree-one part is a Lie algebra by the remark above. The spectrum of the symmetric algebra is nothing but the affine space g*, the dual space, so I want to show that Z is a coadjoint orbit closure in this space. First one checks that g has no centre. This is proved as follows: if g had a non-trivial centre, take a nonzero element f of the centre and consider its Hamiltonian vector field, which is a vector field on Z; such a Hamiltonian vector field lifts to a Hamiltonian vector field H_f on the normalization X. The Hamiltonian vector field is given by the Poisson bracket with f, and f lies in the centre, so the relevant brackets vanish; as a consequence the divisor cut out by f inside X is a Poisson subscheme of codimension one. But X is a symplectic variety: outside a locus of codimension two it is a smooth symplectic variety, where the form is non-degenerate, and a smooth symplectic variety has no non-trivial Poisson subscheme; yet this would give a non-trivial Poisson subscheme meeting the smooth locus of X. This is a contradiction, so g has no centre. Moreover, we consider the following two subgroups inside GL(g). The first is the adjoint group of g, the subgroup of GL(g) generated by the exponentials of the adjoint operators. The second is the group of C*-equivariant Poisson automorphisms of Z: the automorphisms of Z which are C*-equivariant and preserve the Poisson structure. C*-equivariant means that the automorphism preserves the grading of the coordinate ring, and since the coordinate ring of Z is generated by R_1, such an automorphism induces an element of GL(R_1) = GL(g); so this group embeds naturally into GL(g). One can show that, after taking neutral components (the automorphism group is not necessarily connected), these two subgroups coincide; this is the second remark. Its Lie algebra is g, and since g has no centre this Lie algebra really is g. (Question: so the statement is that if you start with the C*-equivariant Poisson automorphisms and take the Lie algebra, you get back g? Answer: yes.) Now G acts on this space by the coadjoint action, and by the identification above these transformations are automorphisms of Z, so the action preserves Z. From this one can show that Z is the closure of a single coadjoint orbit inside g*.
And moreover, a priori the adjoint group is just a complex Lie subgroup of GL(g), not necessarily closed; but the automorphism group above is a linear algebraic group, it is defined by algebraic conditions, so as a corollary of the previous statement the adjoint group is a linear algebraic subgroup of GL(g). The next step, and the most essential part of this talk, is to show that g is semisimple. A priori g is an arbitrary Lie algebra, and I want to show that it is semisimple. So let us start. We have the picture X mapping to Z, with X = Spec R and Z = Spec R^(1), the normalization. We have the following result, due to Kaledin: Z, with its Poisson structure, has only finitely many symplectic leaves. This is important; it uses the fact that X has symplectic singularities, the extension property in the last condition of the definition of a conical symplectic variety, and from that condition one deduces the statement for Z. Now let n be the Lie algebra of the unipotent radical of our linear algebraic group; this is nothing but the nilradical of g. Here is the key proposition. Key Proposition: let g be a complex Lie algebra with trivial centre, and assume that this n is not zero, so g is not semisimple. Let O be a coadjoint orbit such that, first, O is preserved by the scaling C*-action, and second, the closure of O contains the origin and the tangent space of the closure at the origin is the whole space. Then the closure of O minus O contains infinitely many coadjoint orbits. That is the key proposition. The situation is this: inside g* you have some very big orbit O, you take its closure, and the complement of O in its closure consists of infinitely many coadjoint orbits. This is the typical situation if you start with a Lie algebra of Borel type, for instance. Using this key proposition together with Kaledin's result we get a contradiction, because in our case the orbit closure is nothing but Z, and Z has only finitely many symplectic leaves. (Question: how do we know that Z has finitely many leaves, does Kaledin's result apply? Answer: we slightly modified Kaledin's result; his result is only stated for normal varieties, but in this situation you can extend his argument and prove that Z also has finitely many leaves, even though Z need not be normal. You can check this.) So this modified result of Kaledin contradicts the key proposition; therefore n is zero and g is semisimple. This is the key part of our proof.
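The key proposition used above, restated in displayed form; the hypotheses are exactly the ones given in the talk.

\[
\begin{aligned}
&\textbf{Key Proposition.}\ \ \text{Let }\mathfrak{g}\ \text{be a complex Lie algebra with trivial centre and nilradical }\mathfrak{n}\neq 0.\\
&\text{Let }O\subset\mathfrak{g}^*\ \text{be a coadjoint orbit such that (i) }O\ \text{is stable under the scaling }\mathbb{C}^*\text{-action and}\\
&\text{(ii) }0\in\overline{O}\ \text{with } T_0\,\overline{O}=\mathfrak{g}^*.\ \text{Then }\overline{O}\setminus O\ \text{contains infinitely many coadjoint orbits.}
\end{aligned}
\]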
Now I want to move to the second theorem. Consider the following situation: we have X = Spec R, and under the assumption of Theorem 2 we should look at R^(2); the map from Spec R to Spec R^(2) is finite and surjective, and we call the target Z. This time it is not a normalization, just a finite covering. Take the normalization of R^(2) inside R and call it R^(2)-tilde; then the map from its spectrum to Spec R^(2) is the normalization map, a finite birational morphism, and the map from X to it is finite. Now in our situation R^(2) is graded and all its elements have even degree, because it is generated by R_2, so it has only even degrees. The ring carries a natural C*-action, and since there are only even degrees this C*-action factors through the double covering of C*; with respect to the new C*-action τ the degrees are halved, so degree two becomes degree one and the ring is generated by its degree-one part. Of course the C*-action extends naturally to the normalization, so τ also extends to R^(2)-tilde, and we get the new C*-action there as well. With respect to this new C*-action the situation is exactly the same as in Theorem 1. (Question about the normalization. Answer: yes, it is fine; the normalization of a nilpotent orbit closure always has this property. Question: does this normalization have symplectic singularities? Answer: yes, because the covering is étale in codimension one; one can show that if X has symplectic singularities then this variety does too. It needs some argument, but it is fine.) So everything is as in Theorem 1, and therefore the normalization of Spec R^(2) is the normalization of some nilpotent orbit closure, with the corresponding g semisimple; you get the map from X to it, and this exhibits X as being of type BK: here G is simply connected, G acts also on X, and we recover the type BK situation. That is the argument for Theorem 2. Finally, I want to relate our result to shared orbits, so the final section is about shared orbits. Brylinski and Kostant studied finite coverings of nilpotent orbit closures and introduced the notion of a shared orbit; in our setting shared orbits appear naturally, and I want to explain this. It is a kind of converse implication. Start with the situation of Theorem 2, so we have a variety X of type BK: a G-equivariant finite covering of a nilpotent orbit closure inside g. Let R and S be the coordinate rings of X and of the orbit closure; since this is a finite covering, R is a finite S-module. We use the corrected C*-action, so the weight of ω is two and the coordinate ring S is generated by its degree-two part. Since S is generated by elements of degree two, S_2 is naturally contained in R_2. Now S_2 is a Lie algebra and R_2 is another Lie algebra, and in general S_2 and R_2 do not coincide; so we get an inclusion of Lie algebras, g inside g', where g' := R_2 is a new Lie algebra. Taking spectra, the subring of R generated by R_2 gives a nilpotent orbit closure inside g', together with a map down to the original orbit closure inside g, which we call μ. This map is of course finite, though not necessarily birational, and g' is usually bigger than g. So we have an orbit here: the preimage of O under μ is contained in the dense adjoint G'-orbit O' inside the new orbit closure, as a Zariski-open subset; O' is the G'-orbit and O is the original G-orbit. Brylinski and Kostant call such a pair a shared orbit pair; O' is called a shared orbit. And I want to finish my talk with an example of a shared orbit.
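A sketch of the shared-orbit picture just described, in displayed form; the identification of the spectrum of the subring generated by R_2 with a nilpotent orbit closure in g' is the assertion made in the talk, and μ is induced by the inclusion of Lie algebras.

\[
\begin{aligned}
&\mathfrak{g}=S_2\ \subset\ \mathfrak{g}':=R_2,\qquad
\overline{O'}:=\operatorname{Spec}R^{(2)}\ \subset\ (\mathfrak{g}')^{*}\ \text{(a nilpotent orbit closure)},\\
&\mu:\overline{O'}\longrightarrow\overline{O}\subset\mathfrak{g}\quad\text{finite, not necessarily birational},\qquad
\mu^{-1}(O)\ \subset\ O'\ \text{Zariski-open},\\
&\text{and Brylinski--Kostant call such a pair }(O',O)\ \text{a shared orbit pair.}
\end{aligned}
\]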
This is well known; it is the example due to Levasseur and Smith, and Vogan. Their example starts with the eight-dimensional nilpotent orbit inside the Lie algebra of type G2 and its closure. It is known that this closure is not normal; take its normalization, and this is a typical situation of type BK. Starting from this, the construction produces the new Lie algebra g', and in this situation g' is just so(7); inside so(7) you get the shared orbit O' and its closure. Here the map from the normalization to the orbit closure inside so(7) is an isomorphism, since that closure is normal, whereas the map down to the G2-orbit closure is not an isomorphism, because that closure is not normal. So this example shows that our construction is closely related to shared orbits. Finally, I want to mention some open questions. When does a conical symplectic variety of type BK have a crepant resolution? This is not at all clear, so I want to finish my talk with an example, an interesting example. Start with sp(6) and consider a nilpotent orbit; orbits correspond to partitions of six, so we consider such an orbit and take its closure. For nilpotent orbit closures we know the complete answer to the question of when the closure admits a symplectic resolution, and in this case the orbit is not Richardson, so there is no crepant resolution. But we have partial resolutions: the orbit is an induced orbit in a suitable sense, and a partial resolution, a so-called Q-factorial terminalization, is given by the following: G times_Q (n plus the closure of O_{(2,1,1)}), mapping to the orbit closure. What is this? G is the Lie group Sp(6); Q is the parabolic subgroup of flag type (1,4,1), the stabilizer of an isotropic flag of that type; the Lie algebra of the parabolic subgroup has a Levi decomposition into the nilradical and the Levi part, and n is the nilradical of q; the Levi contains Sp(4) as a simple factor, and O_{(2,1,1)} is the minimal nilpotent orbit of sp(4). So this is the Q-factorial terminalization, a birational and good partial resolution. And it has a very good property: the closure of the minimal orbit of sp(4) has a finite covering by C^4, since it is C^4 divided by Z/2, a two-to-one finite covering. Now consider the BK conical symplectic variety: the fundamental group of our orbit in sp(6) is Z/2Z, we know this, so using the universal covering of O you get a conical symplectic variety X of type BK, a two-to-one covering of the orbit closure. The question is: does this X have a crepant resolution? The answer is that it does. The crepant resolution is obtained from the double covering of the terminalization above: replace the closure of O_{(2,1,1)} by its double cover C^4, which gives G times_Q (n plus C^4) with its two-to-one map down to the terminalization; consider the composite map down to the orbit closure and take its Stein factorization. The Stein factorization is nothing but X, and the resulting map from G times_Q (n plus C^4) to X is a crepant resolution: the total space is smooth because C^4 is smooth, and the map is crepant. Anyway, in many cases it is not at all clear when such an X has a crepant resolution. I want to stop here. Thank you very much. (Thank you.)
|
I will characterize, among conical symplectic varieties, the nilpotent orbit closures of a complex semisimple Lie algebra and their finite coverings.
|
10.5446/53499 (DOI)
|
Thank you, and thanks to the organizers; it is always a pleasure to be here. I am going to talk about links between resolutions or deformations of certain varieties and the representation theory of finite reductive groups. The link is, for the moment, weak: I will just explain that the combinatorics on the two sides coincide, but we have no explanation for this fact. Still, I thought it was interesting to speak about it in front of this audience, because by analogy, questions about finite reductive groups may suggest questions that could interest symplectic geometers on the other side. I must also confess that I am not a symplectic geometer, so I ask for some indulgence when I talk about symplectic geometry. I should also say that this is a very long-standing, and still ongoing, joint work with Raphaël Rouquier, and the context of this talk has also been enriched by discussions with Gwyn Bellamy, Ruslan Maksimau, Peng Shan and perhaps others. OK, so the setup is very small; I have just two objects: a finite-dimensional vector space V and a finite group W acting on V. Since I denote the group by W, I am forced to assume that it is generated by reflections; the reflections are the elements of W whose fixed-point space in V has codimension one, and I do not assume that reflections have order two. I will also introduce the hyperplane arrangement, the set of reflecting hyperplanes. Throughout the talk I will also have a parameter k, which is a family (k_{Ω,i}), where Ω runs over the W-orbits of reflecting hyperplanes and i runs between 1 and e_Ω minus 1, with e_Ω the order of the pointwise stabilizer in W of a hyperplane in the orbit Ω. These are the parameters you have when you work with Hecke algebras or Cherednik algebras, for example. For this datum I will consider in this talk three different kinds of objects. The first, which I denote G(q), is a finite group: the group of rational points over the finite field of a reductive group whose Weyl group is W; so this is the case where W is rational, a Weyl group. When W is a Weyl group I thus have a family of finite reductive groups, and I am interested, and this is really my main subject, in the representation theory of these groups. The second kind of object is a deformation of the diagonal invariant variety, called the generalized Calogero-Moser space, which I denote Z_k(W); I will define it more precisely a little later. The third kind of objects are partial resolutions, which I denote X_k(W). I will not say much about them, but in that case I have to assume that the k_{Ω,i} take values in Z and that the sequences are increasing, so k_{Ω,1} is at most k_{Ω,2}, and so on; you will see why later. I think it is good to start with an example that illustrates what I am going to talk about. I am sorry, I really need the board here. I will look at the example of the symmetric group. For the symmetric group, the associated finite reductive group can be taken to be GL_n(F_q), and the deformation Z_k(W) is the Calogero-Moser space,
which we denote C_n: the set of pairs of matrices (X, Y) such that the rank of [X, Y] + Id is at most one, modulo the conjugation action of GL_n(C). And the resolution I will consider is the Hilbert scheme H_n of n points in the plane. I want to compare the combinatorics of all these objects. The first thing I am interested in is the set of unipotent characters of G(q). Unipotent characters were defined by Lusztig, and in a sense they are the heart of the representation theory of finite reductive groups. In this case it is well known that they are in bijection with the irreducible characters of the symmetric group, which in turn are in bijection with the partitions of n; I denote the map λ maps to ρ_λ, where ρ_λ is the unipotent character attached to λ. It is also a very classical fact that the fixed points of the C*-action on the Calogero-Moser space, the action which multiplies X by ξ and Y by ξ^{-1}, are in bijection with the partitions of n, and I denote by z_λ the corresponding point. And once again it is well known that the fixed points of the Hilbert scheme under the corresponding C*-action are in bijection with the partitions of n. Of course, so far this is not so spectacular: if you want to find, in geometric representation theory, something in bijection with the partitions of n, you have plenty of examples; but I think this example deserves to be pushed further. For instance, one can consider the degree of ρ_λ. There is a formula expressing it in terms of the partition, but let me say it this way: it is a polynomial in q which starts with a term q^{a_λ} with coefficient one and whose top term is q^{A_λ}; a_λ is the valuation and A_λ the degree. For us, when we work with representations of finite reductive groups, these invariants are very important; so this may not look so interesting, but it is really something that carries a lot of meaning for us. Now, on the Calogero-Moser space I cannot recover the invariants a_λ and A_λ separately, but I can recover the sum a_λ + A_λ: there is a natural function on the Calogero-Moser space, essentially (X, Y) maps to the trace of XY (or of XY + YX), whose value at the fixed point z_λ gives, up to normalization, and that is why I put brackets, the invariant a_λ + A_λ. The situation is a bit better on the Hilbert scheme, because the Hilbert scheme carries a natural line bundle, which I denote O(1). If I take the fixed point I_λ, a monomial ideal of colength n in C[x, y], then O(1) is a line bundle which is moreover equivariant for C* times C*, and I can consider its fibre at I_λ: it is just a line, a copy of C, endowed with an action of C* times C*, so I can attach to it two weights for this action, and again you recover the invariants a_λ and A_λ, up to normalizations.
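Schematically (intermediate terms and the normalization factors the speaker alludes to are suppressed), the invariants being recovered are the following; the notation z_λ, I_λ is as above.

\[
\deg\rho_\lambda(q)\;=\;q^{\,a_\lambda}+\cdots+q^{\,A_\lambda},
\qquad
\operatorname{tr}(XY)\big|_{z_\lambda}\;\rightsquigarrow\;a_\lambda+A_\lambda,
\qquad
\text{weights of }\mathcal{O}(1)\big|_{I_\lambda}\ \text{under }\mathbb{C}^*\times\mathbb{C}^*\;\rightsquigarrow\;a_\lambda,\ A_\lambda .
\]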
So in this case I can recover from the geometry of this object the invariants I am interested in. OK, let us go a bit further. Again, in the representation theory of finite reductive groups we can compute decomposition matrices, a whole series of them, and in the case of GL_n something that plays an important role in this story is the dominance order on partitions: for example, most decomposition matrices are triangular with respect to this dominance order. This is something I do not know how to recover on the Calogero-Moser space, but we can recover the dominance order from the geometry of the Hilbert scheme, as follows. It amounts to saying that if I take the fixed point I_λ, consider its attracting set for the C*-action, which gives a decomposition of the Hilbert scheme into attracting cells, and take the closure of this cell, then I can ask when this closure meets the attracting cell associated with I_μ; and it is non-empty exactly when λ and μ are comparable in the dominance order (perhaps read in one direction or the other, but that is not a big issue). So one more thing that is important in representation theory can be read off from the attracting-set geometry. Let me now talk about deeper connections, with modular representation theory. I fix a prime number ℓ which does not divide q. I should say that q, which here is a prime power, really plays the role of a quantum parameter in this story. I also denote by d the order of q modulo ℓ. And it is a fact that ρ_λ and ρ_μ are in the same ℓ-block if and only if they have the same d-core: the d-core of λ equals the d-core of μ. So there is a partition of the unipotent characters into ℓ-blocks, and it is described by this simple combinatorial rule. I will not define the d-core; it is classical, and it is really not essential for the rest of the discussion, it is just given by a combinatorial rule. And it is also a fact that having the same d-core is something that can be read on the Calogero-Moser space: it says that z_λ and z_μ, the two fixed points corresponding to the two partitions, lie in the same irreducible component of the fixed-point locus of μ_d on the Calogero-Moser space. We have an action of C*; I can restrict it to the group μ_d of d-th roots of unity and take the fixed points; this fixed-point locus breaks into irreducible components, and I can ask whether z_λ and z_μ lie in the same irreducible component, and this gives exactly the same combinatorics as the partition into ℓ-blocks of the finite reductive group. And in fact we have the same statement with the Hilbert scheme: I_λ and I_μ have the same d-core if and only if they lie in the same irreducible component of the μ_d-fixed points of the Hilbert scheme.
So I find it somewhat more spectacular to recover, from the geometry of the Hilbert scheme, something that really comes from the modular representation theory of finite reductive groups in non-defining characteristic. I will not draw the whole picture here; let me instead push this example further into modular representation theory. So I fix a d-core γ, and I denote by B_γ the set of ρ_λ such that the d-core of λ is equal to γ; and I set r = (n - |γ|)/d, which is a natural number. There is again a combinatorial procedure, the d-quotient, which tells you that this set of partitions with fixed d-core is in bijection with the irreducible characters of G(d,1,r), the complex reflection group from the infinite series. This bijection is unexplained so far, but there is a conjecture by Broué, Malle and Michel, which is the following: there exists a Deligne-Lusztig variety, which I denote X_γ, and a local system on it, such that in the cohomology of this variety with coefficients in this local system you find exactly the unipotent characters lying in this block; so ρ_λ is in the block if and only if it occurs in the cohomology of this Deligne-Lusztig variety with respect to this local system. That is the first part. Second, the endomorphism algebra of this cohomology, by which I mean the endomorphism algebra of the direct sum of all the H^i, should be isomorphic to a Hecke algebra for this group G(d,1,r), for certain parameters which I will denote k_γ: these are parameters for G(d,1,r), a family as before, and to such data one can attach a cyclotomic Hecke algebra. So that is the conjecture, and it is far from being proved, but at least it gives an explanation of the bijection. I should add that the conjecture is not merely an existence statement: the variety is given explicitly, and, importantly, the parameters k_γ are also given explicitly by the conjecture. Now, concerning this endomorphism algebra, there is a result by Ruslan Maksimau and myself. We are interested in the irreducible components of the μ_d-fixed-point locus of C_n. In fact this fixed-point locus can be described: its irreducible components are parametrized by d-cores, and the component corresponding to γ is exactly the Calogero-Moser space of the group G(d,1,r), for suitable parameters of the Calogero-Moser space; and these parameters really match the parameters k_γ involved in the endomorphism algebra above. So I think this is rather intriguing: you can recover the parameters of the Hecke algebra, which are defined in terms of an endomorphism algebra of the cohomology of a Deligne-Lusztig variety, inside the Calogero-Moser space. That is perhaps a bit more surprising. And here there is again a result of the same kind on the Hilbert scheme side; it is written in a paper by Iain Gordon, though Iain told me that all the ideas come from earlier work. It says that if you take the μ_d-fixed points in the Hilbert scheme, you can again write the fixed-point locus as a union over d-cores γ of pieces, each of which is a symplectic resolution of (C^r times (C^r)*)/G(d,1,r).
So, taking the μ_d-fixed points in the deformation, we retrieve deformations of other Calogero-Moser spaces; and taking the μ_d-fixed points in this symplectic resolution, each irreducible component is again a symplectic resolution of one of them. And again the combinatorics fits in all three situations. OK, maybe this example is the starting point of what I am going to talk about. (Let us talk about this later; well, let us write X_{k_γ}: the result says this, and a question is whether this piece is isomorphic to the X_{k_γ}(G(d,1,r)) that I have not defined yet, but will define, with the same parameter. Thanks.) OK, so now let us go back to the general situation. You see that in this example, which to my mind is quite nice and intriguing, the symplectic geometry does not play any role. The reason is that GL_n is really the simplest case: there are no cuspidal representations, the deformation is smooth, there is a symplectic resolution, so everything works perfectly. But in the other cases there are features that really led us to consider the symplectic structure, or the Poisson structure, on all these varieties. So let us go back to the general case, and let us first talk about unipotent characters. (Question: in the type B case, where you also have a resolution, is it also that nice? Answer: in the type B case, as I will explain, the deformation you must consider in order to get the right combinatorics is not the smooth one. That is a good question.) OK. So I can still consider the unipotent characters of my finite group G(q). It turns out, from the work of Lusztig, that the parametrization of the unipotent characters of G(q) does not depend on q, as you have already seen in the example of GL_n, and I can really write the parameter set as Uni(W): there is a finite combinatorial set attached to W which is in bijection with the unipotent characters of G(q); q plays no role here. Inside this finite set parametrizing the unipotent characters there sits the set Irr(W) of irreducible characters of W, which parametrizes the principal series. And there are features that we do not see for GL_n: the unipotent characters admit a partition into families, which are indexed by the two-sided cells, and which have the property that if you intersect a family with the principal series Irr(W) inside Uni(W), you recover the Kazhdan-Lusztig families, defined in terms of the Kazhdan-Lusztig basis. So we have a partition into families; we also have a partition into blocks, and I will come back to that; and there is a partition into Harish-Chandra series. For GL_n the families are just singletons and there is only one Harish-Chandra series, which is exactly why we did not see them in the GL_n example; but we can now guess what we should expect to recover in general. So let us define the generalized Calogero-Moser space. I must introduce the rational Cherednik algebra at t = 0, a degeneration of Cherednik's double affine Hecke algebra.
As a vector space it is the product of three subalgebras, C[V] tensor CW tensor C[V*], and there is a commutation relation between C[V] and C[V*] which involves the parameters k. I then define Z_k(W) to be the spectrum of the centre of H_{0,k}. When t is nonzero the centre is trivial, so there is no point in taking its spectrum, but at t = 0 the centre becomes big. For example, if you take k = 0, the algebra H_{0,0} is just the semidirect product of W with the polynomial functions on V times V*, and in particular Z_0(W) is just the algebra of W-invariants in there; so we recover the diagonal invariants, the variety (V times V*)/W. It turns out that as the parameter k varies these are flat deformations of this variety, and not only flat deformations but Poisson deformations, the Poisson structure coming from the way H_{0,k} sits in the family with parameter t. (Question: is the parameter k here a family? Answer: it is the same k as before, the family of the k_{Ω,i}.) I can now state the first conjecture, due to Gordon and Martino, which is the following. Unipotent characters are only defined when W is a Weyl group, but the Calogero-Moser space is defined for any complex reflection group. So: if W is a Weyl group, there should be a bijection between the set of two-sided cells, or equivalently of families, and the C*-fixed points in the Calogero-Moser space for the parameter 1, meaning all the k's are constant equal to one; the Calogero-Moser space that should be associated with unipotent characters is the one where all the parameters are equal to one. And this bijection should be compatible with the following maps. On the one hand, to each irreducible character of W one can associate a two-sided cell; this is Kazhdan-Lusztig theory. On the Cherednik side there is the theory of baby Verma modules, due to Iain Gordon, which attaches to each irreducible character of W a representation of H_{0,k}; looking at the action of the centre on this module you find a C*-fixed point. So to each fixed point I attach its Calogero-Moser family, the corresponding subset of the irreducible characters of W, and the conjectural bijection should match Calogero-Moser families with Kazhdan-Lusztig families. This has been checked in many cases, namely in types A, B, C and D. For many people that would be enough, but I am not satisfied as long as the exceptional examples are not done. You can do G2, though it is not that easy, and it has been checked; and for H3, H4 and F4 it has been checked by brute-force Magma computations, with Ulrich Thiel. So, in a sense, this bijection is fairly reliable. (Question: but this is also for groups which are not Weyl groups? Answer: the conjecture is for Weyl groups, because otherwise I have no unipotent characters, but I could take a Coxeter group. Question: what about H3 and H4, where the parameter is not 1? Answer: yes, but for a Coxeter group you also have cells, so the comparison still makes sense.) OK, let me perhaps move on from this conjecture. The point is that the example of GL_n, together with this conjecture, which is verified in many cases, shows that there should be a strong relation between these two sides; and then you can try to figure out what a given construction in representation theory should mean on the geometric side.
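A minimal sketch of the objects just introduced, assuming the standard conventions for rational Cherednik algebras at t = 0 (the speaker does not write out the commutation relations, which involve the parameters k in the usual way):

\[
\begin{aligned}
&H_{0,k}(W)\;\cong\;\mathbb{C}[V]\otimes\mathbb{C}W\otimes\mathbb{C}[V^*]\quad\text{(as a vector space, PBW-style)},\\
&Z_k(W)\;:=\;\operatorname{Spec}\,Z\bigl(H_{0,k}(W)\bigr),\qquad
Z_0(W)\;=\;(V\times V^*)/W,
\end{aligned}
\]
and the family of the Z_k(W), as k varies, is a flat family of Poisson deformations of (V x V*)/W.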
For example, in representation theory you have the partition into ℓ-blocks; so what is the analogue on the Calogero-Moser side? Well, as you saw in the example of GL_n, the analogue of taking blocks is taking the fixed points of μ_d. So now let me go back to a general complex reflection group, with no restriction on the parameter. There is a conjecture by Raphaël and me, which says the following: if you take an irreducible component of the μ_d-fixed-point locus in the Calogero-Moser space Z_k(W), then it should be isomorphic to the Calogero-Moser space Z_{k'}(W') for another reflection group W', where k' is a parameter which remains to be made explicit. What I mean is that in this conjecture, when you take fixed points of the Calogero-Moser space, you stay inside the family of Calogero-Moser spaces; this is what the analogy with the partition into blocks for finite reductive groups suggests. Now, the existence of parameters for which the Calogero-Moser space is smooth imposes a strong restriction on W: W must be of type G(d,1,r), or the exceptional group G4. In the case of G(d,1,r) there is a description of the Calogero-Moser space in terms of Nakajima quiver varieties, and that is how we can prove the conjecture in the smooth case; for G4, once again, Magma was invoked. We also have some other cases, for example the groups G2 and G4 even when the deformation is not smooth. I should say that I am being a bit optimistic here: perhaps one should take normalizations, since I am not sure that the irreducible components of the fixed-point locus are normal. In the smooth case everything is smooth, so there is no problem, but in general maybe one should take the normalization. (Question: you stated the smooth case, but there are deformations which are not smooth and for which the conjecture makes sense. Answer: yes, exactly, that is what I mean.) So this is typically an example of a question about the geometry of Calogero-Moser spaces which is really inspired by the representation theory of finite reductive groups.
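For reference, the conjecture and the smoothness statement just mentioned, in displayed form; μ_d is the group of d-th roots of unity inside C*, and the precise recipe for the pair (W', k') is exactly the part the speaker says remains to be understood.

\[
\begin{aligned}
&\textbf{Conjecture (Rouquier and the speaker).}\ \ \text{Every irreducible component of } Z_k(W)^{\mu_d}\\
&\text{is isomorphic, as a Poisson variety, to } Z_{k'}(W')\ \text{for some reflection group } W' \text{ and some parameter } k'.\\
&\textbf{Smooth case.}\ \ Z_k(W)\ \text{is smooth for some } k\ \iff\ W\ \text{is of type } G(d,1,r)\ \text{or } G_4;\ \text{there the conjecture is known.}
\end{aligned}
\]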
And now I would like to come to something that involves the symplectic structure a bit more. For that I need to talk a little about Harish-Chandra theory, or rather about its symplectic analogue. (Question about whether the isomorphism in the conjecture is an isomorphism of varieties; it should also be an isomorphism of Poisson varieties.) The Calogero-Moser space is a Poisson variety, and it has a stratification into symplectic leaves, which in this case is algebraic: there are finitely many leaves. There is a kind of Harish-Chandra theory of symplectic leaves here, and I start with the definition: a point of the Calogero-Moser space is called cuspidal if it is, by itself, a symplectic leaf, that is, a symplectic leaf of dimension zero. There is a fact, proved by Gwyn, that this implies that a cuspidal point is fixed under the C*-action. And there is a theorem of Bellamy: there is a natural surjective map from the set of pairs (W_P, z_P), where W_P is a parabolic subgroup of W and z_P is a cuspidal point of the Calogero-Moser space associated with W_P, onto the set of symplectic leaves of Z_k(W). This map has the following property: if (W_P, z_P) is mapped to a given symplectic leaf, then the behaviour of the Cherednik algebra of W along that leaf can be recovered from the Cherednik algebra of W_P cut by the maximal ideal of the cuspidal point z_P; there is an isomorphism with a matrix algebra over it, of size governed by the index of W_P in W. In a sense, the representation theory over the leaf, if you look at it point by point according to the action of the centre, is controlled by the representation theory of this smaller algebra at the cuspidal point z_P. This is an analogue of Harish-Chandra theory for finite reductive groups. Another question I would like to mention in relation to this, and it may be a bit optimistic, is the following. Take a symplectic leaf, take its closure, and take the normalization of this closure; here I know that the normalization is really necessary. Is it isomorphic to the Calogero-Moser space of the group N_W(W_P)/W_P? I must specify on which vector space this group acts: I take the fixed-point space of W_P in V, that is, V^{W_P}, the vector space on which this group acts naturally; and the parameters there can be made explicit. So once again you see that all the constructions we have on the finite reductive group side, such as taking blocks, which corresponds to taking μ_d-fixed points, do not take you out of the family of Calogero-Moser spaces; and if you take a symplectic leaf, then again this should be a Calogero-Moser space for another group. So it is really a kind of geometry in which, whatever you do, you never leave this family of varieties. I state this as a question, which is why I did not write it as a conjecture, and I do not have much evidence; but there is work by Bellamy and Thiel showing that the combinatorics of the symplectic leaves is fully compatible with this question, and, once again, big computations, for example for G4 but also for G2, show that it works. And in those examples we really saw that one must take the normalization of the closure of the symplectic leaf: the closure is not normal, so it has no chance of being isomorphic to a Calogero-Moser space, but after taking the normalization you find that you really do get an isomorphism. OK, so I will not have time to construct the X_k(W), so my title is a bit misleading. I would like to conclude with a question raised by Gwyn Bellamy and Raphaël Rouquier; it has to do with the singularities of the Calogero-Moser space. So fix z, a C*-fixed point; it corresponds to a Calogero-Moser family, a subset of the set of irreducible characters of W.
And the question is really whether one can recover information about this set purely from the geometry of Z_k(W) at z, in a sense just from the symplectic singularity at z; the Calogero-Moser space has symplectic singularities. The question of Bellamy and Rouquier is the following: take a, let us say possibly non-commutative, crepant resolution of the singularity (Z_k(W), z); I should give it a name, but sometimes it is a variety and sometimes an algebra, so let us call it X. Do we have equality between the rank of K_0(X) and the number of irreducible characters in my family? In other words, can I recover the number of simple modules in the family as the rank of K_0 of this crepant resolution? (Is that what you meant, Raphaël? OK, thanks; thanks for your solidarity.) (Question. Answer: I am just taking a local resolution, a resolution of the singularity at that point.) I am going to explain two examples. The first one: take W of type B2 and the parameter k equal to 1, the family k constant equal to 1. In this case there is only one interesting point: there is only one non-smooth point in the Calogero-Moser space, so the singular locus of Z_k(W) is a single point. Because it is alone, this point z is a cuspidal point, so I can consider its maximal ideal m_z and the cotangent space m_z/m_z^2. The cotangent space at z has rather big dimension, in this case dimension 8, even though the variety has dimension 4. Because the point is a symplectic leaf, m_z is a Poisson ideal, and this means that the cotangent space at z is a Lie algebra; it turns out that in this case this Lie algebra is isomorphic to sl_3, by direct brute-force computation. In fact one can embed the variety into the dual of this Lie algebra, and the dual is isomorphic to sl_3 itself via the trace form, so, writing something slightly abusive, we can embed Z_k(W) into sl_3, and this becomes true. If you compute the projectivized tangent cone of this variety at the point z, with z sent to zero, you retrieve exactly the minimal nilpotent orbit of sl_3. Now there is a result of Beauville, in his paper on symplectic singularities, which says that if you have an isolated symplectic singularity whose projectivized tangent cone is smooth, then the singularity must be isomorphic to that of the closure of a minimal nilpotent orbit; in our case this tells you that the symplectic singularity of Z_k(W) at z is equivalent to that of the closure of the minimal orbit of sl_3 at zero. And it is well known that this minimal orbit closure has a crepant resolution by the cotangent bundle of P^2; if you take the fibre above zero, you get a fibre isomorphic to P^2, and K_0(P^2) has rank 3, which is exactly the number of irreducible characters in this family. So that is a small example for which we can check this question.
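The B2 example, summarized; here the symbol ~ is meant as "has the same type of symplectic singularity as", which is the conclusion drawn from Beauville's criterion above.

\[
\begin{aligned}
&W=B_2,\ k=1:\qquad \mathfrak{m}_z/\mathfrak{m}_z^{2}\;\cong\;\mathfrak{sl}_3\ (\dim 8),\qquad
\bigl(Z_k(W),z\bigr)\;\sim\;\bigl(\overline{O}_{\min}(\mathfrak{sl}_3),0\bigr),\\
&T^{*}\mathbb{P}^{2}\longrightarrow\overline{O}_{\min}(\mathfrak{sl}_3)\ \text{crepant},\qquad
\operatorname{rk}K_0(\mathbb{P}^{2})\;=\;3\;=\;\#\{\text{irreducible characters in the family}\}.
\end{aligned}
\]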
The last example, and I have two minutes: take W of type G2 and k generic. Again there is exactly one such point, and in this case the Lie algebra structure on the cotangent space is isomorphic to sp_4. Once again, by Beauville's theorem, the symplectic singularity is equivalent to that of the closure of the minimal nilpotent orbit; but this minimal orbit closure has no crepant resolution. Indeed, as a singularity it is equivalent to the quotient of C^4 by μ_2 acting diagonally, at zero; this is an isolated singularity, and C^4/μ_2 at zero has no crepant resolution, but it does have a non-commutative crepant resolution, given by the variables x, y, x', y' together with μ_2, that is, the crossed product of C[x, y, x', y'] with μ_2. And once again the K_0 of this object has rank 2, which is exactly the number of irreducible characters in the Calogero-Moser family. So, Michela, I think I can stop here.
|
We present here a bunch of questions (but almost no answers...) about partial resolutions/deformations of varieties of the form (V×V∗)/W, where W is a complex reflection group, which are inspired by analogies with the representation theory of finite reductive groups. Joint work with Raphaël Rouquier.
|
10.5446/53501 (DOI)
|
So, I want to talk about fusion rings, or Verlinde algebras. In this talk they will come from quantum groups, from certain tensor categories attached to U_q(g); this goes back mainly to a construction of Henning Haahr Andersen, by taking the Grothendieck group of such a category. But you can also obtain them from very different points of view. For instance, from conformal field theory, by just looking at spaces of conformal blocks: given an affine Kac-Moody Lie algebra, these are certain spaces depending on three weights λ, μ, ν, and Verlinde constructed an algebra by taking the dimensions of these spaces as its structure constants. Another aspect I briefly want to mention: you can also construct it from an affine Kac-Moody Lie algebra by looking at representations at a fixed level; this is Finkelberg, building on Kazhdan-Lusztig. Also from twisted K-theory, which is more from the topology side; this is Freed-Hopkins-Teleman. And finally, the last one I want to mention: it appears when you want to construct three-manifold invariants, and here I mention Reshetikhin and Turaev. They all talk about this object. Today I want to focus completely on the aspect coming from quantum groups, but these other aspects come in by looking at certain roots of unity, which I will comment on a bit later. So let me now set up the notation. g is a simple complex Lie algebra, or gl_n(C), and I fix the corresponding Cartan matrix C. Very important is the following number d: d is the maximal entry of the symmetrizing diagonal matrix attached to C, so d is 1, 2 or 3, depending on whether you are in type ADE, in type B, C or F, or in type G2. What else do I need? For any root α I define d_α to be the number (α, α)/2, normalized so that d_α is 1 if α is short. Again these numbers are just 1, 2 or 3, and their maximum is d. If you put the d_α for the simple roots into a matrix, you get exactly the symmetrizing matrix which we had in the first talk of the conference. On this data I define U_q(g), Lusztig's divided-power quantum group. It depends on q, and q is a primitive ℓ-th root of unity. I definitely do not want to assume that ℓ is odd: I want to allow both of the cases which people usually distinguish, ℓ even or ℓ odd. And I define ℓ' to be ℓ if ℓ is odd, and ℓ/2 if ℓ is even. Usually an algebraist would look at the odd case and ignore the even case; this is mostly because one is interested in connections to algebraic groups in positive characteristic, and the characteristic, which then plays the role of ℓ, is usually a prime number, hence odd. But for this talk I really want to distinguish two cases, which are slightly different from even and odd; call them Case 1 and Case 2. Case 1 is that d does not divide ℓ', and Case 2 is that d divides ℓ'. I claim that the first case is what algebraists usually look at, or rather the extreme case of what they look at, and the second is what physicists and topologists prefer; and I want to put them together, because I want to understand the interplay. Good. So now let us look at the finite-dimensional representations of U_q(g).
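To fix the numerical conventions just introduced in one place (the notation follows the talk):

\[
\begin{aligned}
&d_\alpha:=\tfrac{(\alpha,\alpha)}{2},\ \text{normalized so that } d_\alpha=1\ \text{for short roots},\qquad
d:=\max_\alpha d_\alpha\in\{1,2,3\},\\
&q\ \text{a primitive }\ell\text{-th root of unity},\qquad
\ell':=\begin{cases}\ell, & \ell\ \text{odd},\\[2pt] \ell/2, & \ell\ \text{even},\end{cases}
\qquad
\textbf{Case 1: } d\nmid\ell',\qquad \textbf{Case 2: } d\mid\ell'.
\end{aligned}
\]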
For λ an integral weight I have the dual Weyl module: take the one-dimensional module of weight λ for the Borel subalgebra and induce it up, where by inducing I mean taking homomorphisms, and then take the maximal finite-dimensional submodule of the result; call it ∇(λ). Its dual is the Weyl module Δ(λ). So these are the dual Weyl and Weyl modules, and they are objects in the category of finite-dimensional representations. How do I construct the tensor category? I take modules which have filtrations by these guys, by both Weyl and dual Weyl modules; these are called tilting modules, and I look at the corresponding additive category. Definition: an object T in Rep U_q(g) is tilting if it has a Δ-flag and a ∇-flag. Now a general result, building on work of Ringel and Donkin and proved in this situation by Andersen, is that tilting modules decompose into direct sums of indecomposables, and the indecomposable tiltings are classified by dominant integral weights: T(λ) is nonzero only for λ dominant integral, and it is the unique indecomposable tilting module which has a Δ-flag starting with Δ(λ) at the bottom; these are exactly all of them. First theorem, which I think goes back to Paradowski and then Lusztig: the tilting modules form an additive ribbon tensor category. The important points here: ribbon means we have dual objects, tensor means we can tensor objects together, and the category is closed under tensor products, so the tensor product of two tiltings is again tilting. This was proved for odd roots of unity by Paradowski using methods from algebraic groups, and I think that proof does not work for even roots of unity; Lusztig then used canonical bases to do it in general. I think one should really use Lusztig's proof, in particular to cover the cases which are of interest to the physicists: for instance, if you do Chern-Simons theory following Witten, you are always in that case, and not in the case where the algebraic-group methods usually work. (Question: is being closed under duals the hard part of the theorem? Answer: no, closed under duals is easy; closed under tensor products is the difficult part, and that is where Paradowski passes to algebraic groups.) OK. Now of course this labelling set is infinite, and we want a smaller category with only finitely many simple objects, so I pass to a quotient tensor category. I define a tilting module to be negligible if and only if its quantum dimension is 0. I do not want to define the quantum dimension in detail; it is an abstract categorical dimension which can be defined in any ribbon category. What you do is take the ribbon (pivotal) element, which in this case is the Cartan element K_{2ρ}, and look at the trace of its action on T(λ); quantum dimension zero means this trace is zero. It is an abstract result about tensor categories that the objects of categorical dimension zero define a tensor ideal in the category, so I can form the quotient: calling the tilting category T, I take the quotient T^neg, which I define as T divided by the negligible objects.
So this is the category which has the same objects as T, direct sums of indecomposable tilting modules, and I quotient out all morphisms which factor through a negligible object. The claim is that, except in the case of gl_n, this is a finite semisimple tensor category, and I am interested in understanding what labels the simple objects. For this I introduce the following so-called fundamental alcove; it is just a set of weights, namely all dominant integral weights λ such that a certain condition holds: the pairing of λ + ρ with a certain root is smaller than ℓ', or, if you want to work with coroots, which I usually prefer, the pairing of λ + ρ with θ_0 check is smaller than ℓ' divided by d_{θ_0}, with the d defined before. What is this θ_0? It depends on whether I am in Case 1 or in Case 2: it is the highest short root, denoted θ_s, in Case 1, and the highest long root, denoted θ_l, in Case 2. This is some indication that in one case we work with the root lattice and in the other with the coroot lattice, a kind of Langlands-dual shadow which comes up here. It looks technical, but one should really think of the two cases as living on the two sides of Langlands duality. So this is just a set of weights, and it defines an alcove for some affine Weyl group, a fundamental domain for an affine Weyl group action. Here is the proposition; part of it is in a very nice paper of Sawin, and the rest is in joint work of my student Dan and myself. First: if λ is not in this alcove, then T(λ) is negligible. Second: the classes of the T(λ) for λ in the alcove form a basis of the Grothendieck ring of this quotient tensor category. In particular the ring has finite rank, except in the case of gl_n: for gl_n the pairing only involves the difference of λ_1 and λ_n, so you get infinitely many weights, but otherwise finitely many. The third point is an easy observation, but I think it helps to understand the picture: the fundamental alcove is a fundamental domain for an affine Weyl group, and it describes the linkage principle for these modules. The affine Weyl group has the following form: it is a semidirect product of the finite Weyl group W with ℓ' times the root lattice in Case 1, or with ℓ' times the coroot lattice in Case 2; you see the difference is just a check. The first group is the one which usually comes up in the representation theory of algebraic groups, as in Jantzen's book; the second is the affine Weyl group which you usually see in books on affine Kac-Moody Lie algebras, and they are not the same; in particular you should not just confuse them. And I want to emphasize that this has nothing to do with even and odd: it is governed by the divisibility condition of Case 1 versus Case 2. OK, so now we know the size of this fusion ring, and that it is semisimple; what I want to understand next is what K_0 looks like as a ring. I first want to say something about the easy case of gl_n, and then go to the general case. So I now define A to be K_0(T^neg), and I call this the fusion algebra.
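In the talk's notation, the alcove and the two affine Weyl groups just described are the following; θ_s and θ_l denote the highest short and highest long roots, and Q, Q^∨ the root and coroot lattices.

\[
\begin{aligned}
&\text{Alcove}\;=\;\bigl\{\lambda\ \text{dominant integral}\ :\ \langle\lambda+\rho,\theta_0^\vee\rangle<\ell'/d_{\theta_0}\bigr\},\qquad
\theta_0=\begin{cases}\theta_s, & \text{Case 1},\\ \theta_l, & \text{Case 2},\end{cases}\\
&W_{\mathrm{aff}}\;=\;\begin{cases}W\ltimes\ell' Q, & \text{Case 1 (as in Jantzen's book)},\\[2pt]
W\ltimes\ell' Q^{\vee}, & \text{Case 2 (as for affine Kac--Moody algebras)}.\end{cases}
\end{aligned}
\]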
So the fusion algebra is the algebra over the integers with a distinguished basis given by the classes of the tiltings. Let me first look at the case where G is GL_n. This is a well-studied case, but let's start with this one. There is a theorem which, for me, was really the original motivation to look at all this, and which goes back to Witten. It was also proved mathematically by Agnihotri in a thesis, which however I haven't seen; if anybody has it, I would still like to see it. And then I proved it with Christian Korff in a paper much later. Namely, in the case of gl_n this fusion ring can be realized geometrically as the quantum cohomology of a Grassmannian: I take the Grassmannian of n-dimensional subspaces, this is this n inside l, where l is my order of the root of unity. So this is the small quantum cohomology. And I have no idea how Witten really came up with this; it's a long paper, and I can only catch some of the ideas in there. But what is really striking for me is the similarity between the multiplication in A, using conformal blocks and the dependence on three weights, and, on this side, the 3-point Gromov–Witten invariants, which give the multiplication in this quantum cohomology ring. And what is the q? The q on this side is a parameter, just a formal parameter; on this side it corresponds to the determinant representation. And this isomorphism of rings sends a standard basis vector given by a tilting to the corresponding Schubert class on that side. So it's the most natural thing you can imagine. [Question] What do you mean, small quantum group? Sorry? What are the words "small quantum group"? Small... small... sorry, that doesn't make any sense; too many quantum things. Thanks. OK. So just to give you a flavor of this ring: in general one can always write it as a quotient of a polynomial ring, and in type A this, I think, really goes back to Verlinde, although he would not write it in this way; and if you work on this side, it's Siebert–Tian. It says that A is isomorphic as a ring to the following: you take the elementary symmetric functions e_1 up to e_n, so the ring of symmetric polynomials, you adjoin a variable q, and you factor out an ideal. This ideal can be given explicitly; I'll just write it down like this, in terms of the complete symmetric functions, where k is such that l is equal to n plus k. I want to emphasize two things here. One is that you should really think of these as symmetric polynomials, which is the same thing as characters for GL_n. So A is a quotient of characters, which makes sense, because somehow these tilting modules involve characters of Weyl modules. The second thing I want to mention is that it is very well known that this quantum cohomology is a semisimple ring, and many people studied it using, for instance, integrable-systems methods. This is also how Christian Korff and myself proved this isomorphism: we used integrable-systems methods. Let me just state this quickly. The main point is that to understand this ring it is enough to understand its spectrum, and the spectrum is somehow simple. So what we did is the following: we constructed a simultaneous eigenbasis for the action of these generators, e_1 up to e_n, on this ring. You can call this either a Gelfand–Tsetlin basis, like in Wendt's talk, or you can call it a Bethe algebra basis, or Bethe basis, because we did this using some integrable-systems model.
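For orientation, here is a hedged sketch of the presentation just mentioned (my own reconstruction of what is written on the board; the exact signs and the normalization of the q-term vary in the literature): a Siebert–Tian-type presentation of the gl_n fusion algebra,

\[
A \;\cong\; \mathbb{Z}[e_1,\dots,e_n,q]\,\big/\,\bigl(h_{k+1},\,h_{k+2},\,\dots,\,h_{k+n-1},\;h_{k+n}\pm q\bigr),
\qquad l = n+k,
\]

where the e_i are elementary and the h_j complete symmetric functions in n variables.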
So this integrable-systems model, I don't want to explain it, but it's called a friendly workhorse in the literature, and, yeah, I don't want to get into this. What I want to say is that there is a very nice Gelfand–Tsetlin or Bethe basis which comes from diagonalizing the action of e_1 up to e_n, and the ring is a quotient of a polynomial ring. The fact that this guy is a basis comes down to the base-change matrix being invertible. And this base-change matrix has a meaning: it is an S-matrix in the literature on conformal field theory or tensor categories. So what is an S-matrix? If I think in terms of knot and link invariants and I look at this guy, I just have to be able to draw it, like this, and I'll orient it: this is a picture, when I label it with tilting modules T_lambda and T_mu, of an endomorphism from my ground field to my ground field. I start from the identity, then I go to T_lambda tensor T_lambda dual tensor T_mu tensor T_mu dual, I swap the middle two terms twice, and then I pair. This gives me an endomorphism from the identity to the identity; I evaluate at 1, and this gives me a number. So for each pair lambda, mu I get a number S_{lambda mu}, and then S is defined to be the matrix with labels lambda, mu in my fundamental alcove, like this. The entries are in C(q^{1/L}), and it describes the base-change matrix. So. [Question] Category, tensor category? Yeah, exactly. And so the point, the fact that this basis exists, says that this matrix is invertible, and this is a non-trivial fact. Tensor categories such that this S-matrix is invertible are called modular. So what I get here is a modular tensor category; or maybe I should not say that I get this, but this is what is behind it. Yeah? [Question] Is it obvious that they should be trivial when n is larger? When n is larger, then the Grassmannian is empty. Then what should it be? It's l; so it's n inside l: this is somehow n-dimensional subspaces in here, and a codimension. [Question] Sorry, so earlier you didn't have a restriction that the order of the root of unity is bigger than the rank of gl_n? Oh yes, I should have said this, but if you look at the definition of this fundamental alcove, then it is empty if this is not true, and then everything collapses. OK. Some people know modular tensor categories: if you have a modular tensor category, then you always get for free an SL(2,Z) action on the corresponding Grothendieck ring. So what you get here is an SL(2,Z) action on the corresponding K_0 of T^neg. That is, SL(2,Z) has standard generators S and T, and this S corresponds to the S-matrix, and this T corresponds to a ribbon element, roughly. I don't want to write too many formulas, but this is the idea. So that means this nice behavior, with this extra basis, comes from the nice behavior of modular tensor categories. So now, what do I want to say? I want to go back to a philosophy of Cherednik. I call it a philosophy because if you read his book, it is somehow written there, but not explicitly; but I think it is fair to call it like this. So Cherednik's philosophy is: any Verlinde algebra is a representation of a double affine Hecke algebra.
So now, I put this "Verlinde algebra" in quotation marks, because when you look at Cherednik's book, he defines the notion of an abstract Verlinde algebra, but he never checks that our algebras are actually abstract Verlinde algebras. And if I look at GL_n, then it is not, because one of the properties of his abstract Verlinde algebras is that they are finite-dimensional, and GL_n is not, because we have this determinant representation, this q, so it's infinite-dimensional. But another thing which he definitely wants to have is this SL(2,Z) action. And I think it is not an obvious fact that these Verlinde algebras have an SL(2,Z) action, because we have to check that this category is modular. And then it is also not clear which DAHA should act. If you ask Cherednik, of course he would immediately tell you some candidate DAHA, but I don't think that, apart from small cases like SL_2, this has been made rigorous. So the next part of the talk is to make this a little bit rigorous. OK, so I should define what the DAHA is. Double affine Hecke algebras go back to Cherednik. So what is it? If I have G and any choice of a lattice between the root lattice, say Q, maybe I should not use too many letters, and the weight lattice, then Cherednik defines a double affine Hecke algebra. I call this H with two lines. It depends on two parameters, q and t, and then it depends on G and L. For us, L will always be the weight lattice. OK, so now I want to give you the definition, but before I do, let me roughly say how it looks. [Question] In type A you have two parameters; in type C don't you have like six? Yes, yes, so it's a bit of cheating; let me say it in a second. So if you haven't seen a DAHA: as a vector space it is isomorphic to the group algebra of the weight lattice, the X's, tensored with a finite Hecke algebra (so that's a finite Hecke algebra), tensored with the group algebra of the dual weight lattice. And then I have a parameter, which also should come in, q; and maybe I should put a q tilde, because I shouldn't mix it up with my quantum q. OK, the easiest case to get a feeling for where this algebra comes from is the case where I take G to be SL_2. And I take as my lattice Z plus Zi inside C, and on C I have an action of Z/2Z just by sending x to minus x. So what you can do is look at the corresponding elliptic curve given by this lattice: E is the elliptic curve given by this lattice. I remove the zero point, which is fixed under this Z/2 action, and take E minus the zero point, and then I take the orbifold fundamental group of E minus the zero point, modulo the Z/2Z action. And the group algebra of this surjects onto the DAHA for SL_2. This surjection is given just by quotienting out a quadratic relation, where I take here a certain generator and impose the quadratic relation. So everything depends on this parameter, c, and I quotient this out. So now you might ask, where does the q come in? The q is hidden, and I really don't want to say much about it because it is a bit annoying. Where the q comes in is that you actually act on an affine weight lattice and not a finite weight lattice: you have an imaginary root, and you work always with integral weights, so the coefficient in front of the imaginary root is an integer, and instead of writing the imaginary root, you write q to this number.
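Before moving on, a hedged LaTeX sketch of the PBW-type decomposition mentioned above (a statement about vector spaces only; the notation P and P-check for the weight and dual weight lattices is mine):

\[
\widetilde{\mathbb H}(\tilde q,t)\;\cong\;\mathbb{C}\bigl[X_\lambda : \lambda\in P\bigr]\;\otimes\;H_W(t)\;\otimes\;\mathbb{C}\bigl[Y_{\mu} : \mu\in P^{\vee}\bigr]
\quad\text{as vector spaces,}
\]

with H_W(t) the finite Hecke algebra of the Weyl group W.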
So that is where the q comes in. It's really nothing which one should try to understand at this point; what one should understand is that this data naturally comes from a construction which generalizes the usual braid group and its Hecke-algebra quotients. If you want to see what it is in type A, you can see explicit generators and relations. There are T_i's running between 0 and n; if you took only the T_i for i from 1 to n, you would have a finite Hecke algebra, and here are the finite Hecke algebra relations, the quadratic relation and the braid relations. Then you have an extra lattice given by the weight lattice, and you just multiply as in the weight lattice. Then you have some interaction between the T's and the X's; this is very similar to an affine Hecke algebra, like this and this, depending on whether lambda and alpha_i check pair to 1 or 0. And then you have an extra contribution from the fundamental group: the weight lattice modulo the root lattice. If you take an element from there, you have relations like the pi's here, relations of this form with the T's and the X's. So all together you can say: you have a finite Hecke algebra; together with the X's, you have an affine Hecke algebra; or you can take T_0 and all the T_i's together with the pi's, and this gives you another copy of an affine Hecke algebra. So there are two affine Hecke algebras glued together in this diagram, and this is some sort of PBW statement. Good. So if we have this, there is a natural representation coming from it. What you can do is view this part here somewhat like a Borel inside this DAHA, and you can define the polynomial representation to be the induced representation: the trivial representation for this affine Hecke algebra, induced up. So this is H, depending on q tilde and t, tensored over this P with the trivial representation. And as a vector space, if you want, for gl_n, this is isomorphic to Laurent polynomials in n variables. So now I want to specialize these parameters. And what is the idea behind it? These double affine Hecke algebras were invented by Cherednik, for instance, to solve Macdonald's positivity conjectures, so in relation to Macdonald polynomials, which come with two parameters q and t. But we want to work with this Verlinde algebra, which really only involves characters, so we want to specialize q and t to some value so that they disappear. And I want to do this now: I set q tilde equal to t equal to e to the pi i divided by l, so this is my l-th root of unity which comes in. And then I abbreviate my double affine Hecke algebra H(q tilde, t) just as H, because now my q tilde and t are specialized. So now I have this natural action of my double affine Hecke algebra on this polynomial representation, and I am almost happy; this looks like characters, except that I have these plus-minuses and they are not symmetric. So I had better symmetrize them. And to symmetrize them, I introduce a symmetrizing element e; that's a symmetrizing element in the finite Hecke algebra inside H. You just take the sum over all elements of the finite Weyl group of the T_w's with some t-powers in front, given by t to the length of the element. This gives this symmetrizing element. If you normalize it correctly, it gives an idempotent, and you can check that under this specialization it really makes sense: it gives an idempotent. And this defines the spherical double affine Hecke algebra, just by taking the idempotent truncation of the original algebra.
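A hedged LaTeX sketch of the symmetrizer and the spherical subalgebra just introduced (the normalizing factor is the standard one, which I am assuming; the specialization is as stated in the talk):

\[
e \;=\; \frac{1}{\sum_{w\in W} t^{\ell(w)}}\sum_{w\in W} t^{\ell(w)}\,T_w,\qquad e^2=e,\qquad
\mathbb H^{\mathrm{sph}} \;=\; e\,\mathbb H\,e,\qquad \tilde q = t = e^{\pi i/l}.
\]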
And then, of course, this spherical DAHA acts on e times the polynomial representation. So what is e times the polynomial representation? I symmetrize these things, and what I claim is that it is what you guess it is, namely symmetric Laurent polynomials. So here's the theorem. OK, the first part is pretty easy: e times this polynomial representation, as a vector space, is just the group algebra of the weight lattice taken Weyl-group invariants. So you really get invariant Laurent polynomials in this gl_n case. Second: when you look at DAHA representations, the nice ones are always quotients of polynomial representations. So the idea is to construct nice representations as quotients of this e times the polynomial representation for the spherical Hecke algebra. And indeed, what you can do is take e times Pol, define a certain bilinear form on it, which I don't want to introduce, but it is a very standard form which Cherednik always uses, and take the radical of this form multiplied by e. And the quotient gives a representation M of this spherical double affine Hecke algebra, and it is irreducible. And this is for any type, not only for type A. Number three: M decomposes into one-dimensional eigenspaces for this other part; there is still a polynomial, Laurent polynomial, part, and M decomposes for this part into one-dimensional eigenspaces. So what we have now: we have a quotient of symmetric Laurent polynomials on which the spherical DAHA acts, and we have a nice eigenbasis with respect to the action of this commutative algebra. And this looks very much like what we had at the beginning. So for any type of G, you always specialize your parameters t. Oh, I should have said something about the t_i's; I promised this to you. So in fact, I was cheating: it doesn't depend only on t and q, it depends on t_i's and q. And what are the t_i's? You have at most two root lengths, and corresponding to each root length you have a t, so there are either one or two parameters t. The formulas here depend on them, but the specialization doesn't: this is the same for all t. [Question] I think I got confused about this pi_1 of E minus the zero point, modulo Z/2. What does that mean? OK, this means you take this elliptic curve minus the zero point, you let Z/2Z act on it, and you just take the orbifold fundamental group. This is a group, and you take the group algebra, and the group algebra has a quotient, which is the DAHA. Oh, OK. So in the formulas the different t_alpha's come in, but not in the specialization; it is the same for all of them. For instance, it also comes into the eigenvalues. This is a very explicit group; I can explain it afterwards, or you can think about what it is for SL_2, and then you see it as well. OK, so let me state number four. So far, what I don't have yet is any connection to the Verlinde algebra, so here is the first connection to the Verlinde algebra. If G is equal to GL_n, that is now important, then we had this explicit description of the Verlinde algebra as a quotient of this commutative ring; I wrote quantum cohomology. So now I can take this and embed it, without the q, into here, which is symmetric Laurent polynomials. But the q is somehow bad, so I want to specialize q to 1. In terms of quantum cohomology, I still count all the curves but I ignore degrees.
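Before the GL_n comparison is completed, a hedged LaTeX summary of the module just constructed (the notation Pol for the polynomial representation and Rad for the radical of Cherednik's form is mine):

\[
e\cdot\mathrm{Pol}\;\cong\;\mathbb{C}[P]^{W}\ \text{(symmetric Laurent polynomials)},\qquad
M \;:=\; e\cdot\mathrm{Pol}\,\big/\,e\cdot\operatorname{Rad},
\]

an irreducible module over the spherical DAHA, decomposing into one-dimensional eigenspaces for the commutative Laurent-polynomial part.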
And the statement is: when I view this now inside e times the polynomial representation, and I look here at e times the radical, and here at the quotient M, then this inclusion induces an inclusion here, and induces an isomorphism here. So this is an isomorphism of vector spaces with distinguished eigenbases: the eigenbasis which I constructed here via integrable systems, and here via the DAHA construction. So this is nice; it connects nicely. So now we want to do it in other types. And there is a problem, because it doesn't work in general. And here I realized that the topologists and the physicists are right: we should look at different roots of unity. So if we are in case 2, which was the case where d divides l' (in particular, this is far away from l being an odd root of unity), then my Verlinde algebra is isomorphic to M as a vector space, and the multiplication by symmetric Laurent polynomials describes the multiplication in A. So we are in case 2, and G is not GL_n, since that is somehow covered by this; so G is now simple. So I claim: if we are in case 2, then this Verlinde algebra, which I denoted A, is isomorphic to this M as a vector space, and when I act with my DAHA, I get exactly back the multiplication of A. So acting with a symmetric Laurent polynomial describes for me the multiplication in my Verlinde algebra. But I need this assumption, case 2. So what happens in general? If we are in case 1, then we always have this surjection from the symmetric polynomials onto M; the spherical DAHA acts on M, and A is always a quotient of this e times Pol. But it might be bigger: it might be bigger than M and might have higher-dimensional eigenspaces. So in general, this nice property of decomposing into one-dimensional eigenspaces is true for M, but it is not necessarily true for A, which we can realize as a quotient of e times Pol. And we were calculating in which cases this happens, and we got a weird list; and then we found, luckily, a paper by Sawin. [Question] Bigger than 1? Bigger than 1: the eigenspaces have dimension bigger than 1. Oh, it might be bigger than M. Yes. Yes, that's what I'm talking about. So there is a paper by Sawin who studies modularity of the tensor categories coming from tilting modules, and he has a list, depending on whether l is odd, which is here the first line, and whether l is even, which is the second line. So if you look at the case l even, you get problems with modularity for B and C. But then if you look at when it occurs, it occurs if d does not divide l', if d does not divide l'. So if we are in the cases which the physicists and the topologists like, it is modular, and in these cases these eigenspaces are exactly one-dimensional; whereas if we have problems with the modularity, we don't have this one-dimensionality. And it happens in the even case sometimes, but it never happens in this case 2, which we have here. And if you look at l odd, you see there are many more problem cases. And this tells you the l odd case is, from a modular tensor category point of view, I think, really bad; and also from the point of view of DAHA representations it is bad. So what you see in this list of Sawin's is that it gives you some weights, which are always fundamental weights, which are listed here. And what he calculated are symmetries of the S-matrix. So he asked the question: when are two columns of the S-matrix linearly dependent?
And he checked that if a weight differs from another weight by a translation by these fundamental weights, then this is the case. So what this means for us is that our affine Weyl group, which describes the linkage principle, is too small: we should make it bigger, and make our fundamental alcove smaller. And then we get a space which has this dimension, and this is then supposed to be the Grothendieck ring of the correct modular tensor category. And let me just finish with two remarks which go in this direction. So, remark: the first thing I'll only say in words, namely that this dimension bigger than 1 appears exactly when the corresponding tensor category is not modular. The second remark is that there is a natural SL(2,Z) action on this DAHA module M, just because there is a natural SL(2,Z) action on DAHAs; you can then deduce it on the spherical DAHA, and then you have it here, and then you have it here. So in particular, in the good cases, we have an SL(2,Z) action on these Verlinde algebras. And then I should mention a result by Bruguières from 2000. So he could construct, as quotients of our tensor categories, categories which are modular, or maybe, for the specialists, more precisely spin modular. Its K_0 is isomorphic to this M in almost all of these cases. Of course, he would not formulate it like this; he would just write down the quotient category, and then he would somehow compare it with the table from the topologists. But then if you compare this table with our calculations, you can match it with this irreducible representation of the DAHA. And I should finish then by saying what the corresponding fundamental alcove is when I take, let me call it, the good alcove. I define it as the old fundamental alcove which I had before, intersected with the set of lambda in the weight lattice such that lambda plus rho, paired with alpha_i check, satisfies the analogous bound in terms of l', where these guys are the coroots for the weights in the list. So I take this list here, I have all these fundamental weights, I take the corresponding coroots, and then I make my fundamental alcove smaller by imposing this extra condition. And this describes exactly the irreducible objects in this quotient category, and it is in bijection with a basis of this irreducible spherical module. And on there you have an SL(2,Z) action, and it has nice modularity properties. But you see, it is more complicated than in the even case, where I think the picture is much nicer. OK, I should stop here. Thanks. Thank you.
|
In this talk I will give a short overview of fusion rings arising from quantum groups at odd and even roots of unity. These are Grothendieck rings of certain semisimple tensor categories. I will then study these rings in more detail. The main focus of the talk will be an expectation of Cherednik that there is a certain DAHA action on these rings which can be used to describe the multiplication and the semisimplicity of these rings. As a result we present a theorem which makes Cherednik's expectation rigorous.
|
10.5446/53503 (DOI)
|
And please tell me if I get too low on the blackboard, so that the people in the back can still see. As the title of the talk says, it's about t-structures. t-structures have been very useful in geometry, in representation theory, and in other fields. And traditionally, at least in geometry, they were mostly used in categories of constructible sheaves. But slightly more recently, lots of t-structures on categories of coherent sheaves showed up. So the prototypical examples are maybe the various categories of perverse coherent sheaves; these are t-structures. And the one that's closest to what I'm going to talk about is the one given by Arinkin and Bezrukavnikov, based maybe on ideas of Deligne, on the equivariant derived category of some variety. And it turns out that these t-structures are particularly nice if all orbits have the same parity, so nice if all orbits have even, or all odd, dimension. So an example where this has been classically studied is the nilpotent cone: it is well known that the nilpotent cone has all even-dimensional G-orbits inside the Lie algebra. Another example where this comes up is an orbit closure in the affine Grassmannian; it is also well known that that has even-dimensional G(O)-orbits. By that I mean the orbit closure in the affine Grassmannian indexed by some dominant weight lambda. So on all of these you have nice equivariant derived categories, and you have nice t-structures that have been studied by various people to various effects. But of course these spaces don't live in a vacuum, and there are other spaces that are closely associated to them. So for example, for the nilpotent cone we all know and love the Springer resolution. And on the Springer resolution you suddenly don't have all even-dimensional orbits anymore, so you don't expect to get a nice t-structure just from perverse coherent sheaves. Similarly, up here what you can do is take some convolution varieties, Gr_lambda with an underline: let's take some and convolve them, which has a map down to some Gr_lambda. And again, down here you have even-dimensional orbits, while up here you have all kinds of orbits. And sort of a natural question to ask, the other way around, is whether there are some useful, nice t-structures on the derived categories upstairs that map down to the t-structures downstairs. So, natural question that we are going to study: are there t-structures up here, so that they don't float in a vacuum, such that pi_* or m_* is t-exact? And for this category the answer has been given: the answer for the derived category of the Springer resolution is called exotic sheaves, and they have been defined originally, I guess, by Bezrukavnikov, and then various collaborators. So we're going to write B., which stands for Bezrukavnikov, because he did a lot of work around these things. So if you want to define some t-structures up here, maybe a good place to start is trying to understand what Bezrukavnikov and other people did here. So let me give you some definitions of exotic sheaves, and we'll see whether we can generalize something from these definitions. The first one that I want to mention, even though it's not historically the first one: there is an equivalence of this derived category with some other derived category, anti-spherical perverse sheaves on the affine Grassmannian of the dual group. Whatever that is; it's not important for us now what that is.
That was given by Arkhipov and Bezrukavnikov. And this thing has a natural t-structure; even though I'm not telling you what it is, it's a natural perverse t-structure, and that corresponds exactly to the exotic t-structure here. OK, very nice, very deep result, but it doesn't really generalize. If you have some other things than the Springer resolution here, then where would you cook up something like this? Where does it come from? So, no idea. That doesn't really help us for generalization. Another definition, which was probably the original one by Bezrukavnikov, is: find some exceptional sequence in here, then do something to it called mutation, which is essentially using a different order for the exceptional sequence; and to an exceptional sequence you can always associate a t-structure. So that's a more general approach; there's a whole business around finding exceptional sequences, but it's a hard thing. If you look at some variety like this, you wouldn't know whether there is an exceptional sequence or not; there might not be one in general. So it's also not that trivial to generalize, and then the second part, mutation, has something to do with representation theory; again, it doesn't really generalize, so you don't know what to do in general. So these two things didn't work out so far. Where this actually starts from, the more useful thing, is a paper by Bezrukavnikov and Mirkovic. And what they did: well, another tool that t-structures usually come from is finding some tilting bundle. But the specific thing that they do is that they don't find a tilting bundle immediately on here; they find a tilting bundle on the Grothendieck–Springer resolution. So they find a tilting bundle on g tilde instead of N tilde, let's call this tilting bundle A, and then you have an equivalence of D^b(N tilde), G-equivariantly, with D^b of finitely generated graded modules over the endomorphism algebra of the restriction of A to the Springer resolution; so, taking the endomorphisms of that restriction. That's basically one of the defining properties of a tilting bundle: you get an equivalence of categories between modules over the endomorphism algebra of the bundle and the original category. And now you can take some natural t-structure here; this turns out to have a natural perverse t-structure, which again corresponds to the exotic one. So from the point of view of generalization, you can always try to find a tilting bundle, but again that's a very hard thing; and the way, well, the construction does generalize somewhat, but the proof that it is a tilting bundle in that paper goes somewhere through modular representation theory, so it's not obvious why you'd have a proof like that in some generality. But nevertheless there is an important hint for generalization here, and that is that they construct the tilting bundle on g tilde, not on N tilde. So I'll write this down here: hints that we get from those various definitions. And the first thing is to look for analogs of g tilde. The reason why this is important will come up later, but essentially it is that you just have more space to play with if you look at a bigger variety. And the final definition, which is in the same paper, is not a construction but just an axiomatic definition, which consists of three axioms. The first thing is what I said earlier: pi_* should be t-exact.
The second thing is a bit mysterious. So they proved, or Bezrukavnikov and Riche proved, that there is an action of the affine braid group on this derived category; so this is, I think, Bezrukavnikov–Riche. That's again a deep theorem; it's not trivial where this comes from. But anyway, the condition then is that if you take the submonoid generated by the images of the simple reflections (so the submonoid, not the subgroup, so all the things you can write in positive powers of these), then the action of this should be right exact. And finally there's a third condition that's technical, plus, I don't know, I'll say a normalization, which is a bit more involved to formulate. So again, it's not clear why this affine braid group acts on this; I mean, you can write down generators, but again, how do you generalize this? But nevertheless, that's sort of the starting point that we took off from. The hint that we got from this is: look for sources of affine braid group actions, for sources of braid group actions. So look for braid actions. All right, and as I guess the title of the talk suggests, our braid group actions will come from categorical actions, and maybe, as Michael also suggests. So any questions so far? Motivation-wise? So let me, since I'm the first person here to talk about this, give you a quick rundown of what I mean by a categorical or geometric action. So part two will be categorical, or, what we want to do is more geometric, and I specialize to quantum affine gl_m. Again, these things have been studied by a variety of people: I think originally this kind of setup was suggested by Chuang and Rouquier; Khovanov–Lauda had some formalism, Rouquier alone as well, and, unsurprisingly, the setup we worked with was the one put forward by Cautis and Kamnitzer. But I won't go into any deep detail of what this involves, because the result I want to talk about is more combinatorial, so I just want to give you an overview of what the combinatorics of such a thing should be. Some notation: I'll have the weight lattice of gl_m, which I will identify with Z^n in such a way that the roots alpha_i are given by (0, ..., minus 1 in the i-th position, 1, ..., 0), and the affine root I'll specify to be (1, 0, ..., 0, minus 1). So I'll look at somewhat specific affine gl_m actions. And now, a classical representation splits up into weight spaces, so in a categorical setting we have to split up into weight categories. So I'll have categories K(k); any weight in this indexing I'll always denote by k. And since everything is geometric, I'll immediately specialize to the coherent derived category of some space, D^b(Y(k)), potentially equivariantly; everything I can do, I can do equivariantly or not, and you get results either way. And I said U_q of affine gl_m, so there are quantum things, so there should be a grading somewhere, plus a grading. But in the interest of you not having to watch me trying to figure out gradings here, I'll usually suppress the grading. Just be aware that if one actually wants to prove something, then gradings are very useful, because they let you split things up. But they're kind of annoying to write down on a blackboard, so let's skip that.
Now, in a classical representation, between the weight spaces you have raising and lowering operators E_i and F_i coming from sl_2-triples. You have the same thing here, except that now there will be functors, and I again specialize them to the geometric setup: so for each k we'll have functors, the E_i functors going from one category to the category K(k plus a root); and I notice I'm low on the blackboard, so I have E_i going up to k plus alpha_i and F_i going down. And these things then have to satisfy a bunch of combinatorial requirements, and I'll give you an example in a minute. So what do I want? The first thing is that E_i and F_i should be somewhat related, and specifically they should be adjoint: the right adjoint and the left adjoint of E_i should be essentially F_i, but, as I said, I suppress gradings, so up to a specified grading shift. And I need commutator relations: E_i F_i should be the same as doing it in the other order, but, as in usual representation theory, that is not true on the nose; you get a bunch of identities, plus a direct sum of copies of the identity functor, as many as the pairing of k with alpha_i, and these copies come in various grading shifts. And if this number is negative, then the extra copies should be on the other side, because on a categorical level I can only add things, I can never subtract things; so if I would have to subtract something in the usual setting, then I have to put it on the other side. That is the convention when the index is negative. If i is not j, then, as usual, E_i and F_j should commute. And finally, I want to talk about finite stuff, so I have some finiteness conditions. If one actually wants to set up things like this, then of course just specifying a bunch of relations like this is not enough; as everyone knows, on a categorical level we have to deal with higher categorical structure, so this should actually be a 2-category, and there should be some additional data to make everything work out nicely. But in the interest of actually getting to some results, I'm not going to tell you the exact list of axioms; maybe just pointing out one thing: these conditions are enough to also get something like categorified Serre relations, so you get more relations out of just relatively few ones. I promised some examples, which might be more interesting than me going over a bunch of axioms. So how would one want to write down something like this? I need to fix some numbers m and n, positive integers. The categories should be derived categories of some spaces, so I need to give you some spaces, and the example that we mostly looked at is the spaces Y(k), which are given by flags with dimension jumps indexed by k. So we'll start with polynomials to the m, which I call L_0, then L_1, and so on, various spaces up to L_n, inside (sorry, this should be polynomials inside Laurent polynomials to the m). So what do I want: I want the k to give the dimension jumps, so I want the dimension of L_i mod L_{i-1} to be equal to k_i.
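A hedged LaTeX sketch of the spaces just described (the ambient space is written here as rational functions, the correction the speaker makes next; the nilpotency condition is the one imposed in a moment):

\[
Y(k_1,\dots,k_n)\;=\;\Bigl\{\,\mathbb{C}[z]^m = L_0\subset L_1\subset\cdots\subset L_n\subset \mathbb{C}(z)^m \;:\; zL_i\subset L_{i-1},\ \ \dim\bigl(L_i/L_{i-1}\bigr)=k_i\,\Bigr\}.
\]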
Rational functions, sorry. I wrote the right thing; I just didn't say the right thing: rational functions. Or, I mean, you could write power series and Laurent series; I guess I got mixed up, but I wrote the right thing. And we want, to model the nilpotent cone as an example, that z acts as a nilpotent operator, so we want z times L_i to be contained in L_{i-1}. So I'll usually write this by giving the jumps up here: this is k_1, this is dimension jump k_2, and so on up to k_n, and we'll have z acting like this. So, for example, the most familiar of these: if m is equal to n, then I have Y(1, 1, ..., 1), so n times 1, so I have a full flag, and I have an operator that goes down, so a nilpotent operator; this will be some kind of compactification of the Springer resolution. The opposite extreme: if I have a single jump by big N, or little n, it's the same, and nothing else, then everything collapses; here I just have to give one space that has to be killed by z, so there's no choice at all, and this will be just a point. And in between I have various sorts of partial Springer resolutions: partial flags plus a nilpotent operator that stabilizes them. I'll write this over here. More generally, to tie back into the examples I gave at the beginning: if I write lambda equal to omega_{k_1} plus ... plus omega_{k_n}, where the omega_i are the fundamental weights of sl_n, one can check that Y(k_1, ..., k_n) is the same as the convolution variety I had at the beginning. So these examples in particular include the convolution varieties where we only allow fundamental weights. So these are sort of the two basic examples that are included in this kind of construction: you get all kinds of partial Springer resolutions as well as these convolution varieties. Now I used up the big board in the middle. How bad is this to read right now? I'll write on here; maybe if I use yellow. So I've given you some spaces, and the categories K(k) will then be the derived categories, the equivariant derived categories, of these things. How do I define, say, E_1 in this setup? Let me just specialize to n equal to 2 for simplicity. So what do I have? I have some Y(k) (is this readable?) that consists of a flag L_0 inside L_1 inside L_2, with jumps k_1 and k_2, and z going down. Then E_1, what will it do? It will go to the derived category of Y(k plus alpha_1), so that has a similar form, L_0' inside L_1' inside L_2', and now alpha_1 decreases k_1 and increases k_2; so that jump decreases and that jump increases. And now E_1 is just defined by finding some kind of roof for this diagram. So what can we do? We can look at flags: L_0, which should be the same as L_0', they're just the polynomials anyway; then the next smallest thing on the board here is L_1', which is just k_1 minus 1 dimensions bigger; then the next smallest thing on the board is L_1, which is one dimension bigger; and then z sends it either into L_2 or L_2', so L_2, which should be the same as L_2', with a jump of k_2. And now, as I said, z should map L_2' into L_1', so it should map like this, and it should map L_1 into L_0, so it should map like this. And then I have canonical maps down here and down here, by just projecting onto either the unprimed or the primed flags. And E_1 will just be pull-push along this diagram, maybe tensored with some vector bundle; that's some line bundle, actually. So there are fairly obvious ways to define these E_i's. The only place where you have to think a bit is E_0.
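Before turning to E_0, a hedged LaTeX sketch of the correspondence just drawn (for n equal to 2; the line bundle L is left unspecified, and the names q_1, q_2 for the two projections are mine):

\[
Z \;=\;\bigl\{\,L_0 \subset L_1' \subset L_1 \subset L_2 \;:\; zL_2\subset L_1',\ zL_1\subset L_0\,\bigr\}
\ \subset\ Y(k)\times Y(k+\alpha_1),
\qquad
\mathsf E_1(\mathcal F)\;=\;q_{2*}\bigl(q_1^{*}\mathcal F\otimes\mathcal L\bigr),
\]

where q_1 and q_2 remember the unprimed flag (L_0, L_1, L_2) and the primed flag (L_0, L_1', L_2), respectively.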
E_0 is where you can't just write down a diagram like this. That's the harder part, and it is easiest to describe in some kind of loop presentation instead of the Kac–Moody presentation that I gave earlier. So what has all of this to do with the two hints that are now erased? Let's look at the first hint: we should look at some analog of the Grothendieck–Springer resolution. Here I gave you some analogs of the Springer resolution, and making an analog of the Grothendieck–Springer resolution is not too hard: take example 2 and just remove that minus 1 here. Now we have a flag that's stabilized by z, so it's a flag plus an operator, and it will be some generalization of the Grothendieck–Springer resolution instead of the Springer resolution. For various choices of k_i, I'll have compactifications of various partial Grothendieck–Springer resolutions. In particular, at the outermost level, where the flag collapses to just the choice of a single vector space, I'll just have a compactification of the Lie algebra itself. And of course, I could write this just as a usual flag plus an operator, the various different Grothendieck–Springer resolutions; then I wouldn't get compactifications, I would get them on the nose. And this then, of course, is not quite right anymore; I don't know what these spaces would look like. And the E_i's and F_i's you can define in a very similar way to example 1. All right, and now where does the second hint come in? It's a general fact that from these kinds of categorical actions you can get braid group actions. Yeah? [Question] I mean, this thing has a description, doesn't it? It doesn't really do the... True, but I don't know what exactly to write down here, what the exact words for this would be. Just write B.D. as a subscript? OK. And also, notation-wise, I'll call them fat Y's. So what we had before: K(k) is D^b of something like this. OK, yeah, they have descriptions, but there's a whole bunch of papers, by Joel for example, discussing these things. Oh yeah, the point is better. So where do the braid group actions come in now? So, they should act on the direct sum of all these categories. Remember, I had some finiteness conditions; fine, whatever, so this will be some finite sum. And how should the braid group act? It should have generators T_i going from the category indexed by k to the category indexed by s_i k; so this will switch k_i and k_{i+1}, and T_0 will switch the last and the first index. So how could one get such functors? Again, somewhat motivated by usual representation theory. So let me draw a diagram: if, say, here is the category indexed by k, then the category indexed by s_i k would be here; so this is the sl_2 version, I suppose. So how could I get from here to here? Well, I could go directly via E, twice E; or I could go up to here and then one back, so I could do E three times and back with F; and finally, I could... did I miscount my dots? Yeah, I miscounted my dots; that should be symmetric. I could go up to here and then back, so doing E four times and then going two steps back. And in usual representation theory, what you would do now is take the alternating sum of all these variants. On a categorical level you can't take an alternating sum, because you don't have minuses. What you do instead, since these are all triangulated categories, is form a complex. So I write this down.
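The complex about to be described can be sketched in LaTeX as follows (a hedged reconstruction: the exponents, the use of divided powers, and the grading shifts all depend on conventions I am not fixing here; the differentials are built from the E-F adjunction maps):

\[
\mathsf T_i \;=\; \operatorname{Conv}\Bigl[\;\mathsf E_i^{(a)} \longrightarrow \mathsf E_i^{(a+1)}\mathsf F_i \longrightarrow \mathsf E_i^{(a+2)}\mathsf F_i^{(2)} \longrightarrow \cdots\;\Bigr],
\]

where a is the number of E-steps needed to go directly from K(k) to K(s_i k), and Conv denotes the convolution (iterated cone) of the complex in the triangulated category.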
So T_i will be the complex that starts with going just directly there; that would be E applied k_i minus k_{i-1} times. I'll write it into my diagram. Or I can go one further, k_i minus k_{i-1} plus 1 times, and then go one back. And the map here: remember, E and F are adjoint, so I have an adjunction map from here to here. And then I can continue the diagram. And T_i will be the convolution of that complex. So, the theorem for this, by, I guess, Cautis and Kamnitzer: these T_i satisfy the braid relations. In other words, they give an action of the affine braid group, in our case, on that sum of all the categories. More specifically, if we look at the category K(1, 1, ..., 1), sort of the middle category, then if I switch two indices here, nothing changes. So I'll have a braid group action on this thing. And specifically, if we take the example of the nilpotent cone, then I get a braid group action on the bounded derived category of the Springer resolution; remember, the Springer resolution can be written as something like that, I had an example earlier. And this recovers that mysterious braid group action, mysterious maybe in quotation marks, that I had in the definition at the very beginning. So what this theory does for us is that it at least gives us some reason where this braid group action comes from. [Question] Yeah, I'm a bit dodgy here, but if I already have the E_0's, then I can write it like this too, I think. But I wrote it originally in the Kac–Moody presentation, so I haven't quite followed the way you actually prove things; I've simplified a bit. So if you already have E_0, F_0, and so on, then I think that is correct on the nose. This is, yeah, I'm only talking about type A now, simply because there we worked out all the combinatorics; I think that's the braid group action you get for anything. [Question] Can you get this affine braid group action outside of this setting? Yeah, so the affineness, I guess, is something specific to this, and I'm sweeping things under the rug; it's not too hard to define the affine gl_n action to start with, but it can be done, and it's usually done via a loop presentation. OK, I hope, I mean, this was a very quick and very high-level overview, but I hope you got something of an idea of what is happening. So let's tie this back to the original question of trying to get some t-structures, actually doing the thing we said in the title: obtaining t-structures. And to specify, I'll fix these numbers and assume K(k) is 0 unless the sum of the k_i is equal to n and all k_i are positive. In the examples that we had: if you have something negative, you don't get a reasonable flag, and if you're switching between different sums, then there are no E_i and F_i that get you between those anyway, so they're independent. So we can assume this without much loss of generality, and this is because we are interested in these specific examples. So let me give you a theorem. Suppose, so in that example that I gave you, you saw that the highest weight category, the category K(n, 0, ..., 0), was always much easier to deal with than the more central categories: it was just, say, the actual space itself instead of some crazy Grothendieck–Springer resolution, just an affine space, so that's much easier. So assume we have a t-structure on here; here's a t-structure. And what we want to get from this is a t-structure on all the other categories.
So the conclusion should be: there exists a unique t-structure on all the other guys, on all the K(k), such that all the functors E_i and F_i, including E_0 and F_0, are t-exact. So that's a very reasonable condition; these are the only functors we really have to play with, and we want them to be exact. So if one wants to formulate a theorem like this, what kind of conditions would one have to fill in? Well, first of all, let's just do something naive: let's start at this category and just apply the functor E_1 once, then the functor E_2, and so on up until E_{n-1}, and then E_0. So what we do: E_1 shifts a 1 over here, so we get (n minus 1, 1, ...), then we shift the 1 further and further down, until with E_0 we come back to (n, 0, ..., 0). So that will be an endofunctor of that category. And obviously, if all the E_i are to be exact, that thing at least has to be exact. So that's the simplest thing we can write down that has something to do with this category: this has to be exact, and more specifically, it has to be exact for all powers, and the same for the corresponding composition of F's. And secondly, if you think about the categories with the non-fat Y's, then this thing was just a point, while inside you have much larger varieties. Clearly, from a point you can't deduce much, so this thing needs to be big enough in some sense. That was also what I said at the beginning: we want to look at g tilde instead of N tilde. Categorically, this means that this outermost category, the highest weight category, should weakly generate the innermost category, K(1, ..., 1), under the action. So what I mean by this is: you start out here, apply a bunch of E's and F's until you come to this, and under all those functors the image should categorically generate this middle category; the image should be big enough that you can say something about the middle category. Otherwise you'll never get something unique, because you have some rest of the category where you can do anything, essentially. And under these two very minimal conditions, you do get a unique t-structure on all other categories. And moreover, to tie this back to this original thing here, we had the affine braid group action defined, and the action of the submonoid generated by T_0, T_1, and so on, is right exact. So we get something that fulfills this weird condition, automatically fulfills this weird condition, that we had at the very beginning for the exotic t-structure. So, maybe to give you an example to tie this all back to the original thing we started from: these two conditions are very easy to check. The conditions are satisfied for any perverse coherent t-structure; in particular, for example, for the standard t-structure on D^b of the fat Y(n, 0, ..., 0), potentially equivariantly. In that case these functors will just be tensoring with some vector bundle, which is always exact; and it turns out these spaces are exactly big enough that things are generated. So if you start with the middle perverse t-structure here, so start with the lower-middle perverse t-structure, this t-structure on, in the specific case, just D^b of g, equivariantly, then we get a t-structure on D^b of g tilde, and then we can restrict the t-structure to D^b of N tilde, everything equivariantly. And what one gets back is exactly the exotic t-structure of Bezrukavnikov that we started out with as motivation. And similarly for more general things: we again get a t-structure on the fat Y(1,1,...,1), and we can restrict it to the non-fat Y(1,1,...,1).
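A hedged LaTeX-style summary of the two hypotheses of the theorem just stated (the order of composition and the phrasing of weak generation are my paraphrase, not the theorem's official statement): write

\[
\Theta \;=\; \mathsf E_0\,\mathsf E_{n-1}\cdots \mathsf E_2\,\mathsf E_1\;:\;\mathcal K(n,0,\dots,0)\longrightarrow \mathcal K(n,0,\dots,0).
\]

One asks that all powers of Theta, and of the analogous composition of the F's, are t-exact for the chosen t-structure on K(n,0,...,0), and that K(n,0,...,0) weakly generates K(1,...,1) under compositions of the E_i and F_i. Under these assumptions there is a unique t-structure on every K(k) making all E_i and F_i t-exact.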
And then there's a companion theorem that we get canonical t-structures on all the other Y(1,1,...,1)-type spaces, so, for example, the other non-fat Y's, for example these convolution varieties that we had earlier. So we get canonical t-structures on all of these things. [Question] So if I take the trivial t-structure here...? Yes, because there are infinitely many of them; you should take it at the level of the point. Yes. [Question] Is that an example of a non-trivial t-structure? So if I take, I don't know if I get something interesting if I take the trivial t-structure on this, but if I take it non-equivariantly and start with the trivial t-structure, then on here I get the representation-theoretic perverse, exotic t-structure of Bezrukavnikov and Mirkovic. [Question] And on T^* of G, is there some non-equivariant one? Yes, I mean... [Question] I think there are two kinds; you can see it as a deformation of the t-structure? Yeah, this is just because: if I want something on here, I'm actually only interested in the nilpotent cone inside here. So I put some t-structure on that, and then I have to extend it somehow to the rest, because I need the rest somehow. And the way is to do the lower-middle perverse t-structure. So the orbits here are not all even-dimensional, so instead of taking something in the middle, half-dimensional, I round it down, or something; so that's just a canonical choice, or I can do the upper one, it doesn't really matter. So in here, what I'm really interested in is D^b of N, or I guess D^b of g with support on N; that's what I'm actually interested in. I have not much time left. What I want to do now is just give, so, the proof has two parts. It has one theorem that I find very useful as a general statement, and that I want to show you; and then the rest is combinatorics. So let me give you the core of the idea, the idea of the proof. Essentially it's a corollary of a theorem of Polishchuk, which goes like this. Suppose Phi from D^b(X) to D^b(Y) is some Fourier–Mukai functor, Phi is conservative, and D^b(Y) has a t-structure such that, if I go from D^b(Y) by the left adjoint back to D^b(X) and then go back to D^b(Y), that composition is right exact. Then the t-structure lifts uniquely: there exists a unique t-structure on D^b(X) such that Phi is t-exact. So the idea of the proof of the theorem that I have now erased is to use this theorem over and over again: the weak generation gives you the conservativity, and then you do a lot of combinatorics to check that you don't run into any contradictions, and that that one simple condition is enough so that you can't have any contradictions. So, what I want you to take away from this talk is maybe not that exact result, which might be interesting to some of you and not to others, but: when you want to construct t-structures, one way to do it is to put everything into some bigger diagram where you have more spaces to work with, hopefully with something easier somewhere on the edges, and then try to run an argument with a theorem like this plus combinatorics, to inductively work your way into the more complicated spaces. Okay, thank you. Thank you.
|
T-structures on derived categories of coherent sheaves are an important tool to encode both representation-theoretic and geometric information. Unfortunately there are only a limited number of tools available for the construction of such t-structures. We show how certain geometric/categorical quantum affine algebra actions naturally induce t-structures on the categories underlying the action. In particular we recover the categories of exotic sheaves of Bezrukavnikov and Mirkovic. This is joint work with Sabin Cautis.
|
10.5446/53504 (DOI)
|
Okay. So I want to thank the organizers for the invitation to speak; it's a pleasure to be here. I'm going to describe a construction that gives a partial compactification of something called the universal centralizer. This is an example of a Coulomb branch, so it will fit into the theme of the morning talks. I'll begin by explaining how to obtain this variety and how it is naturally a symplectic variety, and then I'll try to modify this construction for my purpose. So for today, G will be a semisimple connected algebraic group over C, and I'll require it to be of adjoint type, so it will have trivial center. I'll denote by fraktur g its Lie algebra. And I'll fix, for the rest of the talk, a principal sl_2-triple: this is a triple of elements that generate a copy of sl_2 inside fraktur g and that are all regular. I'll denote centralizers by superscripts: the centralizer of the nilpotent e consists of the group elements fixed by the adjoint action, and the same notation carries over to the Lie algebra side. And then, once I have the centralizer, what I obtain is the principal slice, which is the affine space given by the sum of the principal nilpotent f with the centralizer of the opposite principal nilpotent. And it is well known that this affine slice parametrizes the regular conjugacy classes in the Lie algebra of G, in the sense that it intersects each regular conjugacy class exactly once and transversally; so it gives a section of the adjoint quotient. And then the definition is that the universal centralizer associated to this group is the variety of pairs (g, c), with g in the group and c in the slice, with the property that g centralizes c. So this is the family of centralizers of regular elements, parametrized by their conjugacy classes. And this variety has a natural symplectic structure, obtained through a construction normally attributed to Kostant, called Kostant–Whittaker reduction. This is going to be a Hamiltonian reduction from the cotangent bundle of G. So I'll fix a couple more items of notation: B is going to be the unique Borel whose Lie algebra contains the principal nilpotent e, and N is going to be its unipotent radical. And for any subgroup I'll denote by the corresponding fraktur letter its Lie algebra; but I'll try not to write below this line. So then the construction goes like this. We have a Hamiltonian action of this unipotent radical on both sides of the cotangent bundle of G, and this action produces a moment map, which I'll call nu, to the dual of the Lie algebra of N. I can understand this action explicitly in the following way: I identify the cotangent bundle of G with the product of the group G and its Lie algebra via left trivialization and the Killing form, and I identify n-star with the quotient of fraktur g by the subalgebra b, also via the Killing form. And then the moment map nu factors through the moment map for the G x G action, and this moment map is given explicitly in the following way: it takes a pair of an element g in the group and c in the Lie algebra to the pair (g applied to c, c). So this means that its image consists of conjugate pairs, and the fiber over a diagonal point is precisely the centralizer of the corresponding Lie algebra element. And now what we do is consider, in n-star, which I've identified with g mod b, the diagonal coset of this principal nilpotent f.
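A hedged LaTeX summary of the objects introduced so far (the identifications via the Killing form are implicit, and the bar denotes the image in g mod b; this is my shorthand for what is on the board):

\[
\mathcal S \;=\; f+\mathfrak g^{\,e},\qquad
\mathcal Z \;=\;\bigl\{(g,c)\in G\times\mathcal S \;:\; \operatorname{Ad}_g(c)=c\bigr\},\qquad
\nu(g,c)\;=\;\bigl(\overline{\operatorname{Ad}_g(c)},\,\overline{c}\,\bigr)\in \mathfrak g/\mathfrak b\,\times\,\mathfrak g/\mathfrak b .
\]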
And now what we do is we consider in n star cross n star, which I've identified with g mod b cross g mod b, the diagonal coset of this principal nilpotent F. And this is fixed by the action of N cross N. Because you can imagine that this is like a lower triangular matrix with ones on the subdiagonal, and acting by N pushes it up, but then anything that we've pushed up is killed by the b quotient. And in fact, it's a regular value of the moment map. So this is N cross N fixed. And the two-sided action of N on the fiber above this coset is free, which tells us, as we're in this symplectic setting, that this is a regular value. And then if I look at this fiber explicitly, it just consists of pairs of points g and c with the property that both c and its translation by g live in the space f plus b. And f plus b is known to be isomorphic to N cross the principal slice. So this is an isomorphism given by the action map. So this means that when I quotient this fiber by the action of N cross N, I get exactly the universal centralizer. So this makes Z into a smooth, symplectic, affine variety. And so we think of Z as a family of centralizers of regular elements that are indexed by points in the principal slice. And then the goal is to find some kind of partial compactification of Z along the fibers and to say what happens to the symplectic structure that comes from this reduction when we try to extend it to the boundary of this partial compactification. So the plan is to compactify the centralizer fibers. And the place where they're going to be compactified is inside the wonderful compactification of G. So I'll give a brief description of what this is before I explain how to perform this construction. So I'm going to let G tilde be the simply connected cover of G. And I'm going to let lambda be a regular dominant weight of G tilde with respect to some choice of maximal torus and Borel. And this corresponds to an irreducible representation of G tilde with regular highest weight, which I'll call V. And then the wonderful compactification of G we obtain in the following way. We have the representation map going from G tilde to the non-zero endomorphisms of V. And quotienting by the center on the left produces the group G. And quotienting by scalars on the right gives us the projectivization of this endomorphism space. And because the group G was of adjoint type, and because the weight I chose was regular, this descends to an embedding of the group G into this projectivized endomorphism space. And then the definition is that the wonderful compactification of G is precisely the closure of the image of this embedding. So this compactification has many remarkable properties. The first is that it's independent of the choice of regular dominant weight that we made. It's a smooth projective variety with a two-sided action of the group G that extends left and right multiplication on the group itself. Obviously, G sits inside G bar as an open dense subset. And the boundary is a normal crossings divisor. And the action of the group on this boundary breaks it down into a collection of finitely many G cross G orbits. And these are indexed by subsets of the simple roots. So I should say maybe this divisor is a union of rank-many irreducible components, and then the closure of the orbit indexed by the subset I is just the partial intersection of the corresponding divisor components. In particular, this tells us that the orbit closures are smooth.
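In symbols, the reduction and the wonderful compactification just described read roughly as follows; the names $V_\lambda$ and $\chi$ are my choices, not the speaker's:
\[
\mathcal{Z} \;\cong\; \nu^{-1}(f, f)\,/\,(N \times N), \qquad f + \mathfrak{b} \;\cong\; N \times \mathcal{S},
\]
\[
\chi : G \hookrightarrow \mathbb{P}\bigl(\operatorname{End}(V_\lambda)\bigr), \qquad \overline{G} := \overline{\chi(G)},
\]
where $V_\lambda$ is an irreducible representation of the simply connected cover $\widetilde{G}$ with regular dominant highest weight $\lambda$.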
And maybe I will say one more thing about them. So these orbit closures have a very nice geometric description, which goes something like this. So for a subset I, I have a corresponding pair of parabolics, P I and P I minus, and a Levi subgroup, which is their intersection. And then the closure of the I-th orbit fibers over the corresponding product of partial flag varieties. And this fiber is another wonderful compactification of a semisimple group of strictly smaller rank, which is the group given by the adjoint form of this Levi. So this is a smaller wonderful compactification. And in particular, this means that there's a unique closed orbit of minimal dimension. And it's isomorphic to a product of two copies of the flag variety. So in particular, the unique closed orbit of minimal dimension corresponds to taking I to be the entire set of simple roots. And that produces a product G mod B cross G mod B minus. So I'll give one very simple example before we move on, in the case where G is PGL2. Then its simply connected cover is SL2. And here all non-zero dominant weights are regular, so I can choose the representation V to just be the standard representation. And then the map chi is the embedding of PGL2 into projectivized 2 by 2 matrices. And its image just consists of the matrices with non-zero determinant. So when we take the closure of this image, the wonderful compactification of PGL2 is the entire space of projectivized matrices, which is a copy of P3, and inside it the boundary sits as the 2 by 2 matrices with determinant 0. And via the Segre embedding, this is just a product of two copies of P1. And P1 is the flag variety of SL2. So this realizes this orbit classification in this very simple example. And I'll give a non-example that says this construction doesn't generalize: when G is PGLn for n greater than or equal to 3, the standard representation is given by a fundamental weight, so it's no longer regular. So in general, the wonderful compactification of PGLn is not just an n squared minus 1 dimensional projective space, it's a more complicated smooth projective variety. I'm open to questions. And I should say, since it's good to give references, that this compactification was introduced by De Concini and Procesi in the much more general setting of symmetric spaces, so the construction that I gave is a simplified version of their original construction. Yes, exactly. I'm not going to say anything about the, so there's a Poisson structure on the wonderful compactification, which is not related to what I'm doing, but I am about to say what the log-cotangent bundle is, and then there will be Poisson stuff. So I said that the goal was to compactify the centralizer fibers of the universal centralizer inside G bar, and the way I want to do this is by another Hamiltonian reduction, but from a kind of enlargement of the cotangent bundle of G. And the correct enlargement to consider for the wonderful compactification is something called the logarithmic cotangent bundle of G bar. So I'll denote this by T star G bar, log D. So this is the vector bundle associated to the locally free sheaf whose sections are logarithmic differential forms. So they're allowed to have poles along the boundary divisor. So if I have coordinates x1 through xn on the wonderful compactification so that the divisor is given by the vanishing of the product of the first k, then these sections are locally generated by dx1 over x1 through dxk over xk, and then dxk plus 1 through dxn.
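For orientation, the PGL2 example and the local description of the logarithmic forms can be written as follows (standard facts, stated in my own notation):
\[
\overline{\mathrm{PGL}_2} \;=\; \mathbb{P}\bigl(\operatorname{Mat}_{2\times 2}\bigr) \cong \mathbb{P}^3, \qquad
\partial\,\overline{\mathrm{PGL}_2} \;=\; \{\det = 0\} \;\cong\; \mathbb{P}^1 \times \mathbb{P}^1 ,
\]
\[
D = \{x_1 \cdots x_k = 0\} \subset \overline{G}, \qquad
\Omega^1_{\overline{G}}(\log D) \;=\; \Bigl\langle \tfrac{dx_1}{x_1}, \dots, \tfrac{dx_k}{x_k},\; dx_{k+1}, \dots, dx_n \Bigr\rangle .
\]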
And of course, when I restrict this log cotangent bundle to the copy of the group G that's living inside the wonderful compactification G bar, I just get the usual cotangent bundle of G. And the essential observation is that this log cotangent bundle has a canonical log-symplectic Poisson structure. So a log-symplectic structure is a type of generically non-degenerate Poisson structure. So it's going to have an open dense symplectic leaf, and in this case this is going to be exactly the cotangent bundle of G. And it's from this Poisson variety that we're going to perform the analog of Kostant-Whittaker reduction to compactify the universal centralizer Z. So let me say very quickly that there's a good explicit way of thinking about this log cotangent bundle, which is the following. So we can view this bundle as a sub-bundle of the trivial g cross g bundle on the wonderful compactification, in the following sense. So at a point x in G bar, I'll denote by O the G cross G orbit of x. And then I have an action of the stabilizer of x on the normal space to this orbit. And I have something called the isotropy Lie algebra at x, which is just the Lie algebra of the kernel of this action. And this is exactly the fiber of the logarithmic cotangent bundle at x. And it's a subalgebra of g cross g. And this is how we view this log cotangent bundle: it is a sub-bundle of this trivial bundle. So there's actually a sort of alternative construction of the wonderful compactification where we embed the group G into the Grassmannian of dim G-dimensional subspaces of g cross g. And then G bar is the closure of the image of this embedding. And I should say maybe we do this by mapping a group element to the corresponding translation of the diagonal subalgebra. And then this log cotangent bundle is nothing but the restriction to G bar of the tautological bundle on the Grassmannian. OK. So now I run the same Kostant-Whittaker reduction on the log cotangent bundle of G bar. So this means I have my two-sided action of the unipotent subgroup N. And this action is Hamiltonian with respect to this log-symplectic Poisson structure. And it has a compactified moment map, which I'll call nu bar. And again, I can understand it by identifying n star with g mod b, in which case this moment map factors through the moment map for the G cross G action, which is now just projection onto the fibers. And this compactified G cross G moment map, well, both of these moment maps are extending our moment maps for the cotangent bundle of the group G. And in fact, we can see, if you remember, the image of mu consisted of conjugate pairs in g cross g. But now mu bar is a proper map. And its image consists of pairs of elements in the Lie algebra that lie in the same orbit closure. And if c is a regular element, then the fiber above the corresponding diagonal point is exactly the closure of the corresponding centralizer inside the wonderful compactification. So now the theorem. Well, the theorem is that we consider, again, the diagonal coset corresponding to the principal nilpotent f. And this point is still fixed by N cross N. It's still a regular value of the compactified moment map nu bar. And there's an isomorphism between the quotient of the fiber above this point by the N cross N action and the variety of pairs (x, c) in the wonderful compactification cross the principal slice with the property that x lies in the closure of the centralizer of c.
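The theorem just stated has, roughly, the following shape (again in my notation):
\[
\overline{\nu}^{\,-1}(f, f)\,/\,(N \times N)
\;\cong\;
\bigl\{\, (x, c) \in \overline{G} \times \mathcal{S} \;\big|\; x \in \overline{G^{c}} \,\bigr\},
\]
where $G^{c}$ is the centralizer of $c$ and the closure is taken inside the wonderful compactification $\overline{G}$.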
And this thing on the right-hand side I'll denote by Z bar. And you can see that it's a partial compactification of the universal centralizer Z in the G direction. So in particular, the partial compactification is smooth and it has a natural log-symplectic structure that extends the symplectic structure on the universal centralizer Z, which is the open dense symplectic leaf. OK. So now I want to take a bit of time to describe what these compactified centralizers look like. So let's fix the following space. It's going to be an affine subspace of the Lie algebra that consists of the Borel subalgebra plus the negative simple root spaces. So here alpha 1 through alpha l are the simple roots. And then for every element x in g, we have an associated object called a Hessenberg variety, which consists of the cosets in the flag variety associated to group elements g with the property that g inverse applied to x is contained in H. So this is a definition. And maybe now I can state the theorem, which is: fix a regular element c in the Lie algebra; then there's a G^c-equivariant isomorphism between the closure of the centralizer of c in G bar and the Hessenberg variety associated to c in G mod B. So explicitly now, what do these centralizers look like? When I choose a regular semisimple element, then its centralizer is a maximal torus. So I expect its closure in G bar to be some kind of projective toric variety. These are described combinatorially by fans, and the fan corresponding to this one is exactly the fan of Weyl chambers. And then on the other hand, if I pick a regular nilpotent, then its centralizer is now a unipotent, abelian subgroup of the same dimension as the rank of G. And through this identification, the compactification of the centralizer is isomorphic to something called the Peterson variety. These projective toric varieties are smooth, but the Peterson variety is fairly singular, and in high rank it's not normal. And then in the intermediate case, so when x is an arbitrary regular element, I can write its Jordan decomposition, and then this is reflected in the structure of the compactified centralizer in the following way. So I take the reductive subgroup which is the centralizer of the semisimple part. I look at the corresponding adjoint group, in which the image of the nilpotent part becomes regular. And then the compactification of the centralizer of x surjects onto the corresponding Peterson variety of this semisimple group, where I denote Peterson varieties by subscripts. And the general fiber is a toric variety that corresponds to the compactification of the center of the Levi. So what this means is that I can view the partial compactification of the universal centralizer as a subvariety of the flag variety cross the principal slice, whose fibers are Hessenberg varieties. So this is a smooth family of Hessenberg varieties with a log-symplectic structure.
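For reference, the Hessenberg space and variety being used are, in the conventions I am reading off from the talk:
\[
H \;=\; \mathfrak{b} \,\oplus\, \bigoplus_{i=1}^{l} \mathfrak{g}_{-\alpha_i},
\qquad
\operatorname{Hess}(x, H) \;=\; \bigl\{\, gB \in G/B \;\big|\; \operatorname{Ad}(g^{-1})\, x \in H \,\bigr\},
\]
and for a regular element $c$ the theorem identifies $\overline{G^{c}} \subset \overline{G}$ with $\operatorname{Hess}(c, H)$, equivariantly for $G^{c}$.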
And in this way, I can put on a C star action that is going to let me decompose it into affine spaces. So remember that we fixed the triple E, H, F. And the regular semisimple element H gives me a map gamma from C star into the group G that just takes t to the exponential of tH. And now I can define a C star action on Z bar that has t acting on a pair consisting of a point of the flag variety and a slice element by the usual gamma action in the first coordinate and by a suitably scaled gamma action in the second coordinate. So if you think about what's happening in the principal slice when I act by gamma of t, this action is fixing F, because when H acts on F it acts with eigenvalue negative 2, but then this is canceled by this power of t. So this is an action that contracts the principal slice to the principal nilpotent F. And it contracts the partial compactification of the universal centralizer to the C star fixed points in the fiber above F, which is to say the C star fixed points in the corresponding Hessenberg variety. Right. So these C star fixed points are well known in the Peterson variety, and they correspond exactly to translations of the fixed positive Borel by a Weyl group element that is the longest word of some parabolic subgroup of the Weyl group. So these are indexed by subsets I of the simple roots. And then we get a decomposition of Z bar into attracting sets for the C star action, where the attracting set X I, when I let t go to zero, is the collection of pairs that flow to the fixed point indexed by the subset I. And such a thing has dimension 2l minus the cardinality of the subset. So what we get is a stratification of Z bar by these affine spaces, and a basis for the singular homology, where each X I lives in degree 4l minus twice the cardinality of I. So maybe I stop there.
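The numerology at the end, as I understand it (with $l$ the rank of $G$ and $I$ running over subsets of the simple roots):
\[
\dim_{\mathbb{C}} X_I \;=\; 2l - |I|,
\qquad
[\,X_I\,] \ \text{ lives in degree } \ 4l - 2\,|I| .
\]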
|
Let G be a semisimple algebraic group of adjoint type. The universal centralizer is the family of centralizers in G of regular elements in Lie(G), parametrized by their conjugacy classes. It has a natural symplectic structure, obtained by Hamiltonian reduction from the cotangent bundle T∗G. We consider a partial compactification of the universal centralizer, where each centralizer fiber is replaced by its closure inside the wonderful compactification of G. The symplectic structure extends to a log-symplectic Poisson structure on this partial compactification, whose fibers are isomorphic to regular Hessenberg varieties.
|
10.5446/53505 (DOI)
|
So I'll talk about, I'm giving you a bit of an overview talk, or at least I'll try to. So I'll mention a review of my work with Ben-Zvi and Brochier and with Brochier and Snyder. And then I'll spend a bunch of the talk talking about some work in progress with Ian Le, Gus Schrader and Sasha Shapiro. So Schrader and Shapiro are among us today. Okay, so let's let G be a reductive algebraic group over C. And for now let's consider S to be a compact and oriented surface, possibly with boundary. So it may or may not have a boundary. And so the basic object of study is the character stack. So it's a moduli space of G local systems on the surface. So that is, for me, just representations from pi 1 of S into our group G, divided out by the conjugation action of G. And so I'll also use the notation, so the character variety is its affinization. So that means the character variety is an affine variety, which I'll denote without the underline. And for that, what we do, well, when we build this thing as a stack, we write it as a quotient of a framed character variety by the G action. And so similarly, if I want to just study the affine quotient of this stack, then I take the framed character variety. I won't write it, but framed means that I trivialize at a single point, so that I choose a basis at a single point. And then that framed variety has an action of G. That's the one that I quotient by to get this stack quotient. And so if I just want to talk about functions and do linear algebra, then I just take the G invariants in the framed character variety. OK, so some basic, some context, I guess, is, so, oops, I'm already. Yeah, I mean, well, there's a global section, so yeah, so I'll mention this later, but there are global sections from here, let me not write it. This is the stack quotient and this is the affine quotient. So indeed, if I take the structure sheaf here and I take its global sections, then I'll get the same algebra. And this is just a way to present it. So this is canonically Poisson. And so when S is closed, this is symplectic. And this is the, sort of, Atiyah-Bott and Goldman sort of independently defined this Poisson bracket. And there's more structure, which is: if I have a three-manifold, this is a Lagrangian correspondence, so it's a Lagrangian correspondence between the two symplectic structures on the boundary. So here the boundary I'm denoting as S plus union S minus, for the incoming and outgoing boundary, so M3 is also oriented. OK, so this is a very basic structure, and what we're interested in today is quantizations. What I want to try to convince you is that, so quantizations like this have been studied in quantum topology for a long time, but I hope to convince you today that there's a lot of room for geometric representation theorists to get involved in studying these quantizations, and I'll try to pose some questions that are natural for us to think about. So there were three quantizations throughout the 90s and early 2000s that were proposed for these sorts of things. So as far as I know, one of the first were what I'm calling the AGS algebras, of Alekseev, Grosse, and Schomerus. And so these are certain generators-and-relations algebras, and what they really deform is this framed character variety when S has a boundary. OK, so in a few more words, maybe I'll write over here. So if S has a boundary, then the framed character variety of S is just G to the 2g plus r minus 1 copies, mod G.
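To fix notation, the character stack, its affinization, and the presentation just mentioned read roughly as follows (for genus $g$ and $r > 0$ boundary components):
\[
\underline{\operatorname{Ch}}_G(S) \;=\; \operatorname{Hom}\bigl(\pi_1(S), G\bigr)\,/\,G \quad (\text{stack quotient}),
\qquad
\operatorname{Ch}_G(S) \;=\; \operatorname{Hom}\bigl(\pi_1(S), G\bigr)\,/\!\!/\,G \quad (\text{affine quotient}),
\]
\[
\pi_1(S) \;\cong\; F_{2g + r - 1}
\quad\Longrightarrow\quad
\underline{\operatorname{Ch}}_G(S) \;\cong\; G^{\,2g + r - 1}\,/\,G \ \ (\text{conjugation}).
\]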
So if you think about that, what I'm saying is that the pi 1 of a punctured surface is just a free group, and so all we have to do is spell out these elements of G mod G. OK, and so what they really quantized was this space G. So these quantizations, there's some sort of q-algebra of quantum functions on G to the 2g plus r minus 1 that they defined using sort of well-understood ideas from quantum groups, and that's our first point of contact. Another one was proposed by Turaev, and these are skein algebras. So this is really in the case of SL2, and I'll talk a lot about this. These skein algebras are sort of diagrammatical algebras that are defined by considering sort of all possible tangles on a given three-manifold modulo certain skein relations, like you see in the Jones polynomial. And then finally, what I want to spend a lot of time talking about today are the quantum cluster algebras of Fock and Goncharov. So more recently, with David Ben-Zvi and Adrien Brochier, we gave a fourth construction in the language of factorization homology. And essentially over the last few years, we've been just trying to pick off these classical constructions and understand how they fit in our framework, factorization homology. So what I want to spend the rest of the time doing is recalling this construction in factorization homology and then explaining how it is that it recaptures each of these quantization procedures. And I just want to emphasize that before this approach with factorization homology, there were a few sort of ad hoc constructions relating various of these different constructions, but it was essentially very difficult to relate these. And we'll see why. The relation through factorization homology is not direct. It's somewhat subtle. Is there a statement that all of them are the same? No, but we will get there. They're all instances of factorization homology of different flavors, and I'll tell you those different flavors. And so using that, you can make some precise relationships. That'll be the focus of today's talk. Okay, so for factorization homology, we consider a certain two-category called Pr, which is C-linear locally presentable categories. Okay, so here locally presentable is a mild generalization of abelian. If you want to think about just abelian categories for the rest of the talk, that's fine. And so this is a two-category. So those are the objects. We have colimit-preserving functors, and we have natural isomorphisms. So that's just the place where we're going to do linear algebra. That's what presentable means. Yeah, presentable means cocomplete. So they're closed under all colimits. And then we're going to fix a particular presentable category to be Rep_q G. So this is the braided tensor category of integrable representations for U_q(g). So here integrable just means, when q is generic, for instance, integrable just means that they're locally finite dimensional. So they're direct sums of finite dimensional modules. And so the data of this as a braided tensor category, so the data of Rep_q G as a braided tensor category, is equivalent to giving a functor from a certain category of disks, which I'll now explain, to presentable categories. And this category of disks has objects which are just disjoint unions of D. So boldface D will denote just the two-dimensional disk, the two-dimensional open disk. The one-morphisms are embeddings, so oriented embeddings. And the two-morphisms are isotopies.
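Schematically, the disk category and the dictionary with braided tensor categories look like this (a paraphrase of what is on the board, not a verbatim quote):
\[
\operatorname{Disk}: \quad \operatorname{Ob} = \textstyle\bigsqcup_k \mathbb{D}, \qquad 1\text{-morphisms} = \text{oriented embeddings}, \qquad 2\text{-morphisms} = \text{isotopies},
\]
\[
\mathbb{D} \;\longmapsto\; \operatorname{Rep}_q G, \qquad
\bigl(\mathbb{D} \sqcup \mathbb{D} \hookrightarrow \mathbb{D}\bigr) \;\longmapsto\; \bigl(\otimes : \operatorname{Rep}_q G \boxtimes \operatorname{Rep}_q G \to \operatorname{Rep}_q G\bigr),
\]
with the isotopy exchanging the two small disks giving the braiding $\sigma_{X,Y} : X \otimes Y \xrightarrow{\ \sim\ } Y \otimes X$.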
So this has a symmetric monoidal structure just by disjoint union, and this has a symmetric monoidal structure on it, which is called the Deligne-Kelly tensor product. And this is just defined as you define tensor products of vector spaces, by how you map out of it. So it's bilinear. So maps out of the tensor product are bilinear maps out of each component. OK. So the first orienting observation is that the data of a braided tensor category is the same as the data of a functor from these disks to categories. So for instance, well, the disk itself, so D, that goes to the braided tensor category A. If I include two disks into a bigger disk, that goes to a functor from A tensor A to A. And because I have isotopies, I can, say, switch the order of these things, and that goes to the braiding, et cetera. OK, so this is some well-understood correspondence. And so factorization homology, as defined by Ayala and Francis, is the universal extension from the category of disks to the category of surfaces. So this category of surfaces is defined in the same way, except I don't just restrict to disjoint unions of disks. I allow arbitrary things. And there's a canonical extension, which was defined by Ayala and Francis following a suggestion of Lurie. And I'll denote it Z goes to Z_q(S). OK, so it's an invariant of surfaces. It produces you a category, an abelian category. And we showed with Brochier and Snyder that this actually extends to a 3D TFT. OK, so shortly after Ayala and Francis defined factorization homology, Claudia Scheimbauer proved that it defines a two-dimensional TFT. So it's already giving us invariants of surfaces, and those actually fit into the framework of TFT. And we showed with Noah and Adrien that this defines a three-dimensional TFT. So here, to three-manifolds, we assign vector spaces. And to surfaces, we assign this category Z_q(S). OK, and so on down the line. So to one-manifolds. I would say that this is an important piece here. Yeah, I'll remark on that. So this is a monoidal category. And this is a braided tensor category. So the part that this data here defines a 2D TFT, this was in Claudia Scheimbauer's thesis. And so to answer Sasha's question. So, right, so because we're talking about the category of disks and we're talking about surface invariants, her statement is completely general. So any E2 algebra in any setting defines a two-dimensional TFT. And yeah, and what we showed with Noah and Adrien is that if you have a rigid braided tensor category, so rigid just means that it's generated by dualizable objects, then it defines a three-dimensional TFT. And just let me answer his question, and then I'll go to yours. So Sasha said we should think of this as a 4D TFT. That is how we think of it. So what we define is, there's a certain four-category, which is the Morita category of braided tensor categories, following Haugseng, Johnson-Freyd and Scheimbauer. So they tell us that there's a four-category, and we take our braided tensor category and regard it as an object in that four-category, and we ask how dualizable it is. And what we show is that if you're rigid, then you're three-dualizable. We define three-manifold invariants. But in order to be four-dualizable, you need something stronger, something like modular or at least braided fusion to get to four-dualizability. So the point is the vector spaces make sense and converge, but the numerical invariants of four-manifolds don't make sense.
When they do, so if we restrict to really small categories, then we do get numbers. And this is what's called Crane-Yetter theory. No, I mean these ones have like finitely many simple objects. Rep_q G has infinitely many simples. They may or may not have boundary. Here, well, for a TFT, this is what I'm assigning to a closed surface, and as one expects. I'll talk about computations right now. Yeah, that's what we're going to spend the day doing, is computing some examples. Yeah. In the construction, do you need to move to infinity categories? Say again, sorry. Do you need to move to infinity categories for any of these things? Well, in the definition, like in the constructions of Ayala and Francis, they're working with infinity categories. So presentable, this is a (2,1)-category, which one can trivially regard as an (infinity,1)-category. But all the colimits that we ever compute are just computed in two-categorical terms. Yes, absolutely. Yeah. I mean, the only catch to that is the computations are always done in two-categorical terms. There's a four-category of braided tensor categories, and so we talk about the notion of bimodules, for instance; it's the sort of four-categorical notion, but the computations themselves always are just linear algebra. Yeah. The uniqueness statement is really for the infinity category, but do you not have the uniqueness in the (2,1)-category, or just... Maybe let's discuss that kind of question later. That's okay. Yeah. I don't have a specific question, but you mean this uniqueness? It's a canonical extension. It's a left Kan extension. This is a generating subcategory, and it satisfies a certain defining universal property. Yes. You simply regard the (2,1)-category as an (infinity,1)-category with discrete higher morphism spaces. Yeah. Yeah. So it's... Yeah. Okay. So, yes. So the miracle of the factorization homology construction is that it is computable. And actually, let me just say now, for surfaces, I'm going to use over and over a property called excision that was proved by Ayala and Francis. And excision says that if I want to compute these invariants for some surface, if I present my surface as glued out of two smaller surfaces over some interval, then I get an equivalence of categories between... I can compute the invariant of the first surface, I can compute the invariant of the second surface, and those are both module categories over the cylinder, and I take the relative tensor product. So this is an instance you were asking about, with boundaries. So I'm saying this is a module for that monoidal category, this is a module for that monoidal category, and I just compose them as modules, which is to say I take the relative tensor product. Okay. So this is what allows us to do computations, as you'll see as we go along. So using this property, we did some computations. Sorry, but I'm not following. So you're assigning a category to every surface with boundaries? No, you're assigning a category to every surface, full stop, with or without boundaries. We do without boundaries. With or without boundaries. So if you have a boundary, it also carries an action of a monoidal category associated with that. Oh, okay. Right, right, exactly. So if you like, we're assigning not just a category, but a module category over its boundaries. So that's the beautiful thing about factorization homology, is that everything is based in categories.
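Excision, in the schematic form in which it gets used below (P is the 1-manifold along which the two pieces are glued):
\[
S \;=\; S' \cup_{P \times \mathbb{R}} S''
\quad\Longrightarrow\quad
Z_q(S) \;\simeq\; Z_q(S') \;\otimes_{Z_q(P \times \mathbb{R})}\; Z_q(S''),
\]
where $Z_q(P \times \mathbb{R})$ is monoidal by stacking and the two factors are regarded as module categories over it.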
Okay, so Sasha is interested in the monoidal category, in what the monoidal structure is on what we assign to a one-manifold. And so this is the monoidal category of modules over a certain algebra. This is called the reflection equation algebra, associated to O_q(G). And the buzzword for today is that these are the q-Harish-Chandra bimodules. Now that's a good question. So the question is, are there assumptions on q? No. So that was an important point. So q can be generic, or q can be a root of unity. In the root of unity case, you need to say what you mean, and Pavel Safronov will say that very precisely. If q is generic, the behavior is of course different, but the theorems I'm stating on this board apply equally well. So, q-Harish-Chandra bimodules. So this algebra O_q(G) has a certain degeneration which recovers the universal enveloping algebra of g. It's a bit surprising, because it has another degeneration which recovers functions on the group G. But because it's non-commutative, it really behaves the most like the universal enveloping algebra, but maybe valued in the group rather than the Lie algebra. And so if we would study equivariant U(g)-modules which are not necessarily finite, then that would be the notion of Harish-Chandra bimodules. And likewise, this monoidal category carries a structure which resembles Harish-Chandra bimodules. I think Sam will talk a bit about this, if I understand right. So that's the first one. Isn't the same category sometimes written O_q(G/G)? You could write it that way. Yeah, that's a good point. So, right, thank you. So this is for S1. So the character variety for S1 would just be G mod G, G quotiented by its adjoint action, because pi 1 is just Z. So we have a single group element and we regard it up to conjugation. So note, if I, so Sasha asked, does this apply to any q? These statements apply literally to q equals 1, the same. So if I plug in q equals 1, then I just get O(G)-modules in Rep G, which is to say I get equivariant quasi-coherent sheaves on the stack G mod G. So it's exactly a quantization in that sense. Okay, to a punctured torus, we get another nice algebra. We get modules for another algebra, which is D_q(G), and these are some q-difference modules on G. And so if we send q to 1 in the way that we got U(g), then this becomes differential operators on G. So this algebra q-deforms the algebra of differential operators. And then the most interesting of all, so these are what you would call weakly equivariant modules. They're D_q(G)-modules and they have a compatible action of the quantum group, which is integrable; that's called weakly equivariant, and I'll say now what strongly equivariant ones are. And these are what I'm calling, what we're calling, q-character sheaves. Okay, so this notion of strongly equivariant means that, well, there's a moment map from O_q(G) to D_q(G), which recovers the inclusion of vector fields from U(g) to D(G) when we do this degeneration. But in topological terms, it just comes from the fact that if I have a punctured torus, then it has a boundary. And so I get an inclusion of the boundary circle as the boundary, and it turns out that this induces a homomorphism of algebras from O_q, which is assigned to the circle, to D_q. And so strongly equivariant means that there's a compatibility between... Now what happens is we get sort of two actions of this algebra O_q(G). So there's something called the Rosso isomorphism, which tells us that O_q(G) sits inside of U_q(g) as a subalgebra.
And so now strongly equivariant means that there are two actions of O_q(G) that I insist coincide. So on the one hand, I was looking at modules in Rep_q G. So I was looking at modules which are equivariant for U_q(g), so I've already fixed the structure there. And on the other hand, I have this homomorphism from O_q, so I get another one. And I ask that those two coincide, and that defines the notion of a strongly equivariant module. And what we prove is that the thing you assign to the two-torus is this category of strongly equivariant modules. It turns out it's a full subcategory of the punctured torus one. So I get two actions of this O_q(G). One is by taking the module, forgetting the Rep_q G structure, forgetting the equivariant structure, and just pulling back to O_q(G). And the other is by forgetting the D_q structure and just regarding it as a U_q(g)-module. And I ask that those coincide. And of course, that just mimics the notion of strongly equivariant D-modules that you're used to. Sorry, this embedding from O_q(G) to D_q(G), doesn't it extend to an embedding of U_q(g)? There is not an embedding. No. The embedding that I use here does not extend to an embedding of U_q(g). So this embedding here, it goes from O_q(G) to U_q(g). It's not an isomorphism, but it's very close to being an isomorphism. Yeah, so there's a certain element which, if I localize it, then I can extend it. But that element is not localized here. And that's actually quite important. Because, for instance, classically what you're doing is you're throwing away some locus, which is the degeneracy locus of your Poisson bracket. So it's a nice thing to do. But if you do that, then you lose important parts of the variety. Okay. So, when you forget this, you get the U_q(g) structure and then you restrict to the O_q(G), right? That's right. That's right. And you ask that those two different actions of O_q(G) be compatible. So it's not at all obvious that that's the right definition. That's part of the content of this theorem. But that turns out to be the right thing to do. Does this extend to the localized version of O_q(G)? No. Because the O_q(G) sits inside it, then... Let's discuss this after. Okay. I have D_q(G). I have a module over it. I can restrict it and I get a module for O_q(G). Okay. That's one action. On the same underlying vector space, I have an a priori integrable U_q(g) structure, and I can just restrict that through this homomorphism from O_q(G) to U_q(g), and I ask that those two actions of O_q(G) coincide. I didn't understand that. Yeah. Okay. Because this O_q(G) sits inside the U_q(g). It does not. It does not. So there is not a homomorphism from U_q(g). That's not true. So that's the U_q(g) structure. You said before that U_q(g) is a localization of O_q(G). So it does seem to make sense. Well, the first thing is that there's a slightly bigger algebra acting, because there's an action of D_q(G) and in addition there's an action of U_q(g). So altogether they form a bigger algebra. Yes. That is true. The support of this, yeah, the support of the strongly equivariant modules is automatically in this localization that you've suggested. Yeah, that's right. That's right. Actually, that is useful in some constructions. Thank you. Okay. So, right. So I think, so one thing that tells us is that it's interesting to study these. There's also an analog of this for higher genus. I'm just focusing on genus one. It's useful to study these categories by using our intuition about character sheaves. And so that's, I think, what Sam will discuss in the next talk.
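The sample computations from the theorem can be summarized, in notation close to the talk's, as:
\[
Z_q(\text{annulus}) \;\simeq\; \mathcal{O}_q(G)\text{-mod}_{\operatorname{Rep}_q G},
\qquad
Z_q(T^2 \smallsetminus D^2) \;\simeq\; \mathcal{D}_q(G)\text{-mod}_{\operatorname{Rep}_q G},
\]
\[
Z_q(T^2) \;\simeq\; \bigl\{\text{strongly equivariant } \mathcal{D}_q(G)\text{-modules}\bigr\} \;=\; \text{``}q\text{-character sheaves''},
\]
with the strong equivariance condition the one just spelled out.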
Did you say in what sense this is a quantization of the Poisson structure you described before? Yes, it is. For instance, so D_q(G) is, I should have said this. So D_q(G), so for the general punctured surface or closed surface? Either one. So for it, okay. So, right. So for a general punctured surface, the algebra that we get is isomorphic to the Alekseev-Grosse-Schomerus algebra. And this is known to degenerate precisely to the Fock-Rosly Poisson bracket on the character variety. So it's as strong as you could hope for. For closed surfaces, it's a reasonable question. You have a category and not an algebra, right? These examples might lead you to believe that it's always modules for some algebra. And that's not a priori true. For some groups, it will be true. But for some groups and for some surfaces, it won't. So Sam will talk about this. So yeah, the sense in which it's a quantization in the categorical sense, I think, maybe it's an interesting question. Oh, yes, of course. Sorry, that was very important. Yes. So, by definition, we extended from the disk category here. So by definition, Z_q of the disk is the category Rep_q G. Yeah, thank you for that question. And what we're saying is that Rep_q G is a quantization of Rep G, and Rep G would be QC of point mod G. So we're taking that as our basic intuition and we're extending that to surfaces. Yeah, thank you. OK, and so all these computations in that theorem, they all boil down to just computing excision in categories, and you have lots of tools for doing that. So that's the relation to the AGS algebras. OK, so these algebras are mostly how I think about these categories. But for applications, it's nice to note that they have a connection to skeins. So I'm going to fix G to be SL2 and I want to recall what skein modules are. So recall that the Jones polynomial is determined by what's called a skein relation. OK, and so the skein relation for the Jones polynomial says that if I have a crossing in some projection, I can resolve that in terms of the Jones polynomial of the different ways to resolve the crossing, and I get these factors q to the 1 half and q to the minus 1 half that appear here. OK, so Kauffman and Turaev suggested a generalization of the Jones polynomial, which is called the skein module. And so we have a three-manifold, let's say oriented and closed. And so the skein module of the three-manifold is just a vector space, despite this word module. And what we do is we take the span of all links in our three-manifold, up to isotopy of course, and we also impose the skein relations. So because we have an orientation, it makes sense, it allows us to make sense of these local knot projections. And so we say, if there's any ball in your three-manifold where the different knots look like these three configurations, then you impose that relation. So this defines you some vector space. So despite the easy definition, these are impossible to compute. They're essentially impossible to compute. So an open question is, you know, given a three-manifold, is it zero-dimensional? Is it one-dimensional? Is it infinite-dimensional? With just these relations, you don't have very many tools, except you need to really understand your particular three-manifold and do some sort of combinatorics.
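For SL2 the skein relations in question are the Kauffman bracket relations; in one common convention (the precise normalization may differ from the speaker's blackboard):
\[
\bigl[\,\text{crossing}\,\bigr] \;=\; q^{1/2}\,\bigl[\,\text{one resolution}\,\bigr] \;+\; q^{-1/2}\,\bigl[\,\text{other resolution}\,\bigr],
\qquad
\bigl[\,L \sqcup \text{unknot}\,\bigr] \;=\; -\,(q + q^{-1})\,[L],
\]
and $\operatorname{Sk}_q(M^3)$ is the span of (framed) links in $M^3$, modulo isotopy and these local relations.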
Yeah, that's in the paper. That's a good question. So for the three-sphere, this is C. And you knew that when you were an undergraduate and you took your first knot theory course, because the fact that this is C is what tells you that the Jones polynomial is well-defined, because I can take any link and I can use these relations and write it as a multiple of the unknot, or several unknots. And so then I get C. Yeah, thank you. Yeah, so there's of course a version for both. But let's say that I work over rational functions in q, or q is generic. Yeah, thank you. Yeah, the Jones polynomial, of course, always gives you a Laurent polynomial, but okay. And so, yeah, very little is known about these for any general class of three-manifolds. And so it's a challenge to understand those. And traditionally, people have studied these via skein algebras. So if S is a surface, I'll also denote by the same notation, Sk_q(S), the skein module of the surface cross an interval. But now this has an algebra structure, because I can stack links on top of each other in the interval direction. Okay, so this is an algebra. This is called the skein algebra. And so when you have a manifold with boundary, then you get an action of the skein algebra on the skein module, just by sort of putting the things into the boundary, pushing them into the boundary. And this looks a lot like a TFT, but it's not. It fails to be a TFT for a good reason, which is that you need to do a little bit better. And so... Yeah, I mean, in defining skein modules, do you really consider links that you can sort of end on the boundary? I'll get... that's what I'm going to say now. Yeah, so right here, this has nothing ending on the boundary. And so the three-manifolds also have nothing ending on the boundary. And so all I'm doing is inserting closed links. And what you're saying is that's a bad idea, and I'll now say I agree. So there's something called the skein category. So I'll write that as, okay, so this is a category. So this was essentially proposed by, let's say, Morrison and Walker. And let me also mention Johnson-Freyd. So Morrison and Walker, at least, were thinking about this following definition when they introduced blob homology, which preceded factorization homology by many years. And so what they said is that we should study a category, just like Sasha says. So objects are finite sets x inside the surface, and morphisms are skeins starting and ending at x and y. Okay, so instead of, so here's my surface cross the interval. And before, even though I was considering the surface cross an interval, I was never allowed to come near that boundary surface. But now we say, okay, I fix a finite set of points here and another finite set of points here, and a morphism is sort of anything I can draw that connects those as a tangle. You mean skeins in S cross an interval? Yes, skein... No, I mean just one more interval, going from x to y. Skeins in S cross... yes, sorry. Skeins in S cross I. So that's the notion of the skein category. Okay, notice that this is not the kind of thing we like to study in representation theory. It's not abelian. So I have objects like this. I have morphisms like this, but I certainly can't construct kernels or cokernels. That won't be another such object. Okay.
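In symbols, the skein algebra and the skein category just described are (my paraphrase):
\[
\operatorname{Sk}_q(S) \;:=\; \operatorname{Sk}_q\bigl(S \times [0,1]\bigr), \qquad \text{product} \;=\; \text{stacking in the } [0,1] \text{ direction},
\]
\[
\operatorname{SkCat}_q(S): \quad \operatorname{Ob} \;=\; \{\text{finite subsets } x \subset S\}, \qquad
\operatorname{Hom}(x, y) \;=\; \{\text{skeins in } S \times [0,1] \text{ from } x \times \{0\} \text{ to } y \times \{1\}\}.
\]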
So, theorem, nevertheless, of my PhD student, Juliet Cooke, sort of following ideas of Johnson-Freyd and Kevin Walker. So Theo has some sort of unpublished notes where he suggests that the following should be true. He essentially conjectures it. He states it without proof. And he sort of attributes the main ideas to Kevin Walker. The statement is the following. So let me say it this way. So if we map from disks not to presentable categories, but to idempotent complete, or Cauchy complete, categories, so these are sort of small K-linear categories, then again, a braided tensor category, a semisimple braided tensor category, determines a functor like this. And we can just as well define the factorization homology in the same way. And the claim is that this is exactly the skein category of S. Okay, so skeins arise not as these colimits in presentable categories, but in this world of small categories. Okay, but this is only true then for q generic. So this precise statement only applies for q generic. Yeah, exactly. So for instance, for Rep_q SL2, then, we basically, what we're saying is that, so I've simplified to SL2 just to reduce the notation, but what you would say is that you're not allowed to mark these points x by arbitrary objects of your category. Each x has to be marked with the defining representation of SL2. And so if I have three dots, then that has to be sent to V tensor V tensor V, where V is the defining representation. So for a general semisimple tensor category, you can mark each dot by a different simple. And this is enough. But in the SL2 case, you can get away with just the defining representation. Her theorem is true for any G, or any braided tensor category which is semisimple, where you just restrict to the simple objects. And then you regard it as a functor from disks to ICO. You recover the skein category. That's the content of her theorem. And so as I say, this was stated without proof in some notes of Johnson-Freyd. And, well, okay, Kevin Walker made similar statements about blob homology, but before factorization homology was a thing. And there were technical difficulties in actually proving that, but they've since been addressed. Yes, Arshad? ICO, so these are idempotent complete K-linear categories. So that just means, if I have a morphism, I can't necessarily take its kernel, but if that morphism is idempotent, then I can project, and therefore I get both its kernel and its cokernel. But that's only well-behaved in the semisimple case. And so a corollary of this theorem is that if I take these categories here for some surface and I look at their compact projective objects, then that subcategory is equivalent to the skein category. This is also for q generic. That's an important point, thank you. And in fact, there are these two functors between ICO and Pr. If I have an idempotent complete category, there's a free cocompletion that turns it into a presentable category, and I can take compact projectives. Those are adjoint functors, and that's essentially how you pass from this identity to this identity, just having an adjunction.
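The theorem and its corollary, for generic q, can be written schematically as (writing ICO as on the board for idempotent complete k-linear categories, and $(-)^{\mathrm{cp}}$ for compact projectives):
\[
\int_S \operatorname{Rep}_q G \;\simeq\; \operatorname{SkCat}_q(S) \quad \text{in ICO},
\qquad
Z_q(S)^{\mathrm{cp}} \;\simeq\; \operatorname{SkCat}_q(S),
\]
the second equivalence up to idempotent completion, via the free cocompletion / compact objects adjunction between ICO and Pr.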
Do you expect this corollary to be true, though, at roots of unity? Yeah, so let me talk about what is and is not true at roots of unity. So this corollary is definitely false. It's super false when q to the l equals 1. And the reason is that, basically, I mentioned that the way that you prove this is using an adjunction. And so basically I can express Homs in Z_q(S) in terms of Homs in Rep_q G. For instance, if I have some surface, with or without boundary, I can always choose a disk in there, and I can include that disk, and I get a functor from Rep_q G to Z_q(S). And it turns out that the objects that come from this, these skein objects, they just come from including a disk. And then if I want to ask, is that projective or not, I can just take whatever object I was trying to Hom with, forget it down to Rep_q G, and then I can just take Homs in Rep_q G. And so the point is that if you're not semisimple, then the corresponding skein objects that you would get, you always get a functor this way, and they're not projective. Simply because, if you're not semisimple, you weren't even projective in Rep_q G, so you have no chance to be projective there. So this corollary can't be true when q is a root of unity. However, some work in progress, again, so a new paper with Adrien and Noah Snyder, is to define certain tilting subcategories. So that is, I can nevertheless look at the subcategories which come from tilting modules, and those categories won't be the compact projectives, but their Homs will still be computed via skeins. So that is to say that there's a tilting subcategory. So we have Z_q(S), it has a certain tilting subcategory inside here, and Homs in the tilting subcategory are given by skeins. So sort of what it's saying is that this category, its compact projectives, its structure as a category, is not the same as tilting modules, but there's a subcategory of tilting modules which behave just like skein modules. And that sort of pattern goes back to the very definition of skein modules, sorry, the way tilting modules are defined. So for instance, for SL2, you just define the tilting modules to be: I take tensor products of V, and I take all of the idempotents, and I just project onto those. So at the disk, this is a true claim, and then I claim that we can export that to general surfaces. So that motivates some of the work we did with Jordan and Pavel, yeah? Oh, yeah, well, I wouldn't attribute... Yeah, in the quantum setting, I attribute this to, like, Henning Haahr Andersen and people in that school. Yeah, exactly, yeah. But yeah, indeed, on the level of the disk, this is well understood, and the game is to sort of export that to general surfaces, yeah. So yeah, there's a well understood theory of tilting modules for the disk, and we want to just export it, that's right. Yeah, it's always the same, it's just this, whether or not you have boundary. I mean, you had this kind of diagrammatic definition of what skeins were, and so... Oh, even if you have boundary, the same definition, you just cross with the interval. Yeah, it doesn't enter into the definition. There'll be extra structure because of it. Okay, so I have five minutes to finish the last half of my talk. So that's okay, that's my fault, I had too much that I tried to do. Let me just at least state the main results of the work. So I apologize, this is a bit rushed. I was a bit telegraphic because there will be two more talks that expand on each of these other themes, but I want to spend the last five minutes doing the part that won't be in either of the other two talks. So we talked about q-character sheaves, and we talked about skeins. And so I want to talk now about quantum cluster algebras. So, Fock and Goncharov. So what they do is they quantize a moduli space of local systems. To deal with some of the problems with the character stack, they fix additional data that sort of frees up the action. So they consider what I'll call a flagged surface.
So apologies to Gus and Sasha, because this doesn't match our draft. So this has punctures and it has boundary marked points. And these always lie in the boundary of S. So for Fock and Goncharov, you always have a boundary. And so for instance, here's the disk. It has two marked points on the boundary and it has one puncture. So these are the marked points and this is called the puncture. And I prefer to draw a topologically equivalent picture here, which is, we grow each of these points into little regions. And then I write G here, T here, and I write B on all of the one-dimensional defects. And then there's a flagged local system. So it is three pieces of data: a G local system on the bulk, the part labeled by G, a reduction to the Borel on S_B, and a T-framing. So just to give you some examples, for instance, if we would just do a disk with a bunch of marked points, well, then there is no interesting local system. So we just get a choice of flags, G mod N cross ... cross G mod N, one factor for each marked point. But we have to regard this modulo the diagonal G action, which changes the G framing, and the T to the n action, where we have n marked points. Okay. And I'm running essentially out of time. So let me just say that, right, so there's a category of flagged disks. So this is generated just by a disk labeled G, a disk labeled H or T, and a disk labeled B. Okay. And it's defined in a similar spirit to before. This sits inside a category of flagged surfaces, and the data of a functor from this thing to presentable categories is data that sends this to Rep_q G, this to Rep_q T, and this to Rep_q B. So that's already a non-obvious statement. I'm saying that this category that is generated by these disks, I can present it by these classical objects. And then we define the invariant of flagged surfaces to be the unique extension. And now this exists by Ayala, Francis, and Tanaka. So this is the theory of stratified factorization homology. And let me just state it in the SL2 case. At least for SLN, it's just more combinatorial stuff to state it. Which is, so, sorry, before I state that, I need to say one more thing, which is what Fock and Goncharov actually do: given S and given a triangulation, they construct an open set inside this space of local systems. And this open set, they show, is isomorphic to just a big torus. But it depends on a triangulation that they have to choose. And they show that the tori coming from any two triangulations are birational. So these open sets that you get are a bunch of tori. They're all birational to one another. And they have a globally defined Poisson bracket. So Fock and Goncharov quantize this Poisson bracket, and the point for them is it's very easy to do, because this is a log-canonical Poisson bracket. So it has an obvious quantization in the obvious way. And they've done already the hard work by finding these open sets. And so here is what we show with Le, Schrader and Shapiro, and I'll close with this. OK, so for every S and every triangulation, remember here for simplicity's sake G is SL2, we have a subcategory, a localizing subcategory. And this has a distinguished object in it, which is like the structure sheaf. It's a generator, a distinguished generator of that subcategory. And its endomorphism algebra is isomorphic to a quantum torus in variables equal to the number of edges of your triangulation. And this is the same as Fock and Goncharov's.
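The disk example and the theorem with Le, Schrader and Shapiro then take roughly the following form (the commutation matrix epsilon is Fock and Goncharov's; I am only indicating the shape):
\[
\underline{\operatorname{Ch}}^{\mathrm{fl}}_G(\text{disk with } n \text{ marked points})
\;\simeq\;
(G/N)^{\,n} \,/\, \bigl(G \times T^{\,n}\bigr),
\]
and for $G = \mathrm{SL}_2$ and a triangulation $\Delta$ of $S$, the distinguished generator $\mathcal{O}_\Delta$ of the localizing subcategory satisfies
\[
\operatorname{End}(\mathcal{O}_\Delta) \;\cong\; \mathbb{C}_q\bigl[x_e^{\pm 1} : e \in \operatorname{edges}(\Delta)\bigr],
\qquad x_e\, x_{e'} \;=\; q^{\,2\varepsilon_{e e'}}\, x_{e'}\, x_e ,
\]
a quantum torus on the edges of the triangulation.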
And we rather define a category once and for all. And we show that this sort of non-commutative stack has open charts whose coordinate algebras are the quantum tori that they've defined. And the point of the paper is to rediscover their combinatorics, which I never understood. So the amalgamation, mutation, et cetera, that you do in Fock-Goncharov coordinates, they all just follow when you read the factorization homology manual. I'll stop there. Thanks. Thank you.
|
Skein algebras are certain diagrammatically defined algebras spanned by tangles drawn on the cylinder of a surface, with multiplication given by stacking diagrams. Quantum cluster algebras are certain systems of mutually birational quantum tori whose defining relations are encoded in a quiver drawn on the surface. The category of quantum character sheaves is a q-deformation of the category of ad-equivariant D-modules on the group G, expressed through an algebra D_q(G) of "q-difference" operators on G. In this talk I will explain that these are in fact three sides of the same coin: namely, they each arise as different flavors of factorization homology, and hence fit in the framework of four-dimensional topological field theory.
|
10.5446/53506 (DOI)
|
OK. So my talk will begin with a sort of maybe a little bit introductory part. So let's begin with G, a reductive group, and V, a representation. And I'll also make use of a character of my group, chi, from G to C star. So associated to this data, we have two spaces, the Higgs branch and the Coulomb branch. So the Higgs branch is defined as follows. You take the cotangent bundle of V and consider the Hamiltonian reduction of this cotangent bundle at the GIT parameter chi by the group G. So you take the zero level of the moment map and then take the projective GIT quotient at chi by G. That's the Higgs branch. And there's also the Coulomb branch, whose definition was, I guess, originally given by physicists. And then a mathematical definition of the Coulomb branch was given just a few years ago by BFN, Braverman, Finkelberg and Nakajima, in case you missed it. So I will get to that definition shortly. But I just wanted to give a couple of examples. So there are two classes of examples which will be most important for me. The first example: when G is a torus. In this case, the Higgs branch is called a hypertoric variety. It has lots of beautiful combinatorial structure. And the Coulomb branch is also a hypertoric variety. And it's, in fact, what's called the Gale dual hypertoric variety. Maybe I should have actually said from the very beginning: the relationship between this Higgs branch and this Coulomb branch is often called symplectic duality. These two spaces are called symplectic dual. And in good cases, if a pair of spaces appears as the Higgs branch and the Coulomb branch of some theory, then the same two spaces appear again as a Coulomb branch and a Higgs branch, so the Higgs and the Coulomb will switch, in good cases. A sub-example of this example, just to show you how we're doing: we could take, for example, C star acting on C^n just by scaling, and chi just to be the identity character. And then the Higgs branch will be the cotangent bundle of projective space. And the Coulomb branch will be C^2 modulo Z mod n. In my talk, this Higgs branch will always be defined as this projective GIT quotient. So it will typically be smooth. And this Coulomb branch will always be affine. Of course, you could consider an affine version here, and you could consider a resolution here, but just for simplicity, I'll think of it like this. So that's one simple class of examples.
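In symbols, the two branches and the first sub-example just described are, roughly:
\[
\mathcal{M}_H(G, V) \;=\; T^*V \,/\!\!/\!\!/_{\chi}\; G \;=\; \mu^{-1}(0) \,/\!\!/_{\chi}\, G,
\qquad
\mathcal{M}_C(G, V) \;=\; \operatorname{Spec} A_0(G, V),
\]
\[
G = \mathbb{C}^\times \ \text{acting on}\ V = \mathbb{C}^n \ \text{by scaling:}\qquad
\mathcal{M}_H \;\cong\; T^*\mathbb{P}^{\,n-1},
\qquad
\mathcal{M}_C \;\cong\; \mathbb{C}^2/(\mathbb{Z}/n).
\]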
Associated to the quiver, and this quiver will be called Q, there is a group which I'll call G_Q: the group whose Dynkin diagram is the underlying graph of the quiver, just the circles part. So in this case, for example, this G_Q is SL_4. And here I could talk about the generalization of this to the case where this G_Q is a symmetrizable Kac-Moody group. Oh, finally, just in this example: if this happened to be our quiver, so this was our G and this was our V, then it happens in this case that both the Higgs branch and the Coulomb branch are the cotangent bundle of the variety of full flags in C^4. Although technically, I guess because of my convention of always thinking of the Coulomb branch as an affine thing, the Coulomb branch will just be the nilpotent cone in sl_4. OK, so there you have some examples. So I've defined the Higgs branch, and I haven't defined the Coulomb branch, but I gave a couple of examples of Coulomb branches. So now I'm going to define the Coulomb branch. The definition of the Coulomb branch is a little complicated, so let me just give a rough summary. You start with your G and V. You produce a certain complicated space called R_{G,V}. Then you consider the equivariant homology of this R_{G,V}, and this thing will be an algebra. And then you take Spec of this algebra. So it's a multi-step construction, and this will be the Coulomb branch. And R_{G,V} is a complicated space. More precisely, we'll need to start by thinking about the affine Grassmannian of the group G. The affine Grassmannian of the group G, by definition, is G of the Laurent series modulo G of the power series, and it has a moduli interpretation as pairs (P, phi), where P is a principal G-bundle on the disk and phi is a trivialization of P on the punctured disk. OK? So this space R_{G,V}, which we're going to end up taking homology of to make our Coulomb branch algebra, is going to be built on this affine Grassmannian as follows. We consider pairs (g, v), where g is an element of this affine Grassmannian and v is an element of the vector space V (so the vector space V enters now), but tensored with the power series. And we also have the condition that v also lies in this other space: this g lives in G of the Laurent series, so it acts on V of the Laurent series, and it takes this lattice, V of the power series, to some other lattice, and we want v to lie in both lattices. It has a moduli interpretation as triples (P, phi, s). So P and phi are as before, and s here is a section of the associated vector bundle V_P on the disk; sorry, I'll just write that as a section of the associated vector bundle V_P on D, such that if we take this section and compose it with phi, or maybe pre-compose it with phi inverse, whatever, this thing extends. So this thing will now be a section of the trivial bundle, and we want it to extend over D. A priori, because the trivialization is only defined on the punctured disk, this s, after applying phi inverse, will only be a section of the trivial bundle over the punctured disk, and I want it to extend over the whole disk. That's the same as this condition. So what do you mean by this, and where does the character chi enter? Oh, not yet; the character will come in later. I tried to fix it at the beginning because it's kind of useful for my purposes.
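In symbols, writing K = C((z)) for the Laurent series and O = C[[z]] for the power series, the spaces just described are, roughly:
\[
\mathrm{Gr}_{G} \;=\; G(\mathcal{K})/G(\mathcal{O}),
\qquad
\mathcal{R}_{G,V} \;=\; \bigl\{\, ([g],v) \in \mathrm{Gr}_{G}\times V(\mathcal{O}) \ :\ v \in g\cdot V(\mathcal{O}) \,\bigr\},
\]
so a point of R_{G,V} is a point of the affine Grassmannian together with a vector lying in both lattices V(O) and g V(O).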
And the space R_{G,V} has an action of G of the power series, coming from its usual action on the affine Grassmannian and also acting on this vector v. And then we form the equivariant homology, and that thing will be called the Coulomb branch algebra, A_{G,V}, or maybe more precisely A_0, and I'll use A_{G,V} for the deformed version, where we also take C star equivariance. Is it really pairs (g, v)? Equivalence classes of pairs of g and v, rather? Sorry? I mean, in your original definition of R_{G,V}: it's like this, but where does it depend on the choice of the representative of g? No, it doesn't, because if you change the representative by multiplication by G of the power series, it will leave this condition unchanged. So it doesn't depend on the representative; it's just like in the usual definition of the Steinberg variety, actually. Okay. So one word of warning: the homology here is not quite ordinary homology. It's defined using the homology of some renormalized dualizing sheaf. Probably the best way to think of it is this: R_{G,V} maps to the affine Grassmannian, of course, and this thing is infinite dimensional, and the fibers of this map are also infinite dimensional, so there are two kinds of infinite-dimensionality going on. But the way this homology works is that we consider cycles which are finite dimensional along the base and finite codimensional in the fibers. A typical element: we take some subvariety down here and take its preimage up here, or something. But those are not the usual homologies for a different reason: even in the finite-dimensional situation it would still not be ordinary homology, it would be Borel-Moore homology. Oh, okay, that's true. It's also Borel-Moore because it's not compact. Yeah, it's not compact, and the base is not smooth. Yeah, yeah, it's more that it's not smooth; homology and Borel-Moore homology coincide when the space is compact. Okay. So, there it is. And sorry, of course, the reference for this definition is the paper of Braverman, Finkelberg and Nakajima. Okay. So maybe I just state this theorem. Point one: if G is a torus, then the space you get this way, the Coulomb branch (I didn't say explicitly: the Coulomb branch M_C(G,V) is the Spec of this algebra A_0), so if G is a torus, then this M_C(G,V) is the dual hypertoric variety. That's the first point. The second point: if (G, V) comes from a quiver, then this M_C(G,V) is isomorphic to this W-bar^lambda_mu, the generalized affine Grassmannian slice. Maybe a word on the notation, although it appeared in Hiraku's talk too, but just in case: this lambda is directly related to the framing, and lambda minus mu is the sum of n_i alpha_i, where the n_i are the dimensions of the gauge vertices. And this is the generalized affine Grassmannian slice for the group whose Dynkin diagram underlies our quiver. Okay. So this is the background. Any questions on all that? Okay. So the subject of our work is the following question: can we define modules for these algebras using a kind of Springer theory? Do you mean modules for the Coulomb branch algebra or for its quantization? Both. But in my notation today, I'm actually using A_{G,V} for the quantization, and I'm using A_0 for the non-quantized version, sorry. Yeah.
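Schematically, the construction and the statements just made can be recorded as follows; I am suppressing the precise conventions for lambda in the quiver case.
\[
\mathcal{A}_{0}=H^{G(\mathcal{O})}_{*}(\mathcal{R}_{G,V}),\qquad
\mathcal{A}=H^{G(\mathcal{O})\rtimes\mathbb{C}^{\times}}_{*}(\mathcal{R}_{G,V}),\qquad
\mathcal{M}_C(G,V)=\operatorname{Spec}\mathcal{A}_{0},
\]
\[
\text{quiver case:}\qquad
\mathcal{M}_C(G,V)\;\cong\;\overline{\mathcal{W}}^{\lambda}_{\mu},
\qquad \lambda-\mu=\sum_{i} n_{i}\,\alpha_{i},
\]
with lambda determined by the framing and the n_i the dimensions of the gauge vertices.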
Now, maybe before proceeding, let me just note that we have some very familiar looking algebras which show up this way. So maybe I could continue this theorem: part three of the theorem, or even a continuation of part two. In this quiver situation, this algebra A_{G,V}, the quantized algebra, is a truncated, well, okay, maybe this is only exactly true if this quiver is of finite type, and maybe I should also restrict to the case when mu is dominant. Well, maybe thanks to Alex's recent paper you don't need this restriction. But anyway, the algebra here is isomorphic to Y^lambda_mu, a truncated shifted Yangian. And maybe you don't care that much about truncated shifted Yangians, but for example, if we started with the quiver which looks like this, then this A_{G,V} will be nothing but the universal enveloping algebra of sl_N modulo the ideal generated by the positive part of its center. Well, more generally, you can get other central quotients; there's a whole theory of the flavor group, which I didn't get into, which allows you to vary the parameters here, but let me not get into that. Okay. This last point here appears in the appendix to this BFN paper, which has many authors. I think that's right. Okay. No, I mean, in this quiver case there's this other group floating around, the group I was calling G_Q. So yeah, the torus of that group acts. No, no, the group won't necessarily act, but its torus will, for example. Okay. So I wanted to find modules for these Coulomb branch algebras using a kind of Springer theory. So let me just briefly recall the usual Springer theory, or something about it. In the usual theory, we consider the cotangent bundle of the flag variety, T*(G/B). That's pairs (g, x); I'll write it to look more like what I was writing before, so g in the quotient G/B, which is obviously the same as, well, not obviously, but it's the same as the space of Borels, and x in g applied to the standard Borel subalgebra. Sometimes you might write a pair consisting of a Borel and a nilpotent, or a nilpotent and a Borel subalgebra such that x is in the Borel subalgebra; here I just want to emphasize the similarity with what I was doing before. And then we have the projection onto the nilpotent cone. And then we consider the Steinberg variety, which is formed from triples (g_1, g_2, x), where x is in the nilpotent cone as well, and also x is in g_1 b and in g_2 b, and so on. And then in this usual Steinberg situation you consider a nilpotent matrix, a point in the nilpotent cone, and you consider pi inverse of x, the fiber of this projection. This is the Springer fiber. And it's nothing but the set of all Borels which contain your given x: the g such that x is in g b. So the analog of this kind of picture in our situation is as follows. Well, obviously the analog of this would be to consider a space which hasn't yet appeared, but which is usually called T_{G,V} or something. So we're back now in the BFN situation. This g will be a point in the affine Grassmannian, the v will be in V of the Laurent series, and then the analog of this condition would be just that v is in g times V of the power series.
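For comparison, here are the classical picture and its BFN analogue, written informally side by side:
\[
T^{*}(G/B)\;\cong\;\{(gB,x)\ :\ x\in \operatorname{Ad}_{g}(\mathfrak{b})\cap\mathcal{N}\}\ \xrightarrow{\ \pi\ }\ \mathcal{N},
\qquad
\pi^{-1}(x)\;\cong\;\{\,gB\ :\ x\in \operatorname{Ad}_{g}(\mathfrak{b})\,\},
\]
\[
\mathcal{T}_{G,V}\;=\;\{([g],v)\in\mathrm{Gr}_{G}\times V(\mathcal{K})\ :\ v\in g\cdot V(\mathcal{O})\}\ \longrightarrow\ V(\mathcal{K}).
\]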
So compared with R_{G,V}, I didn't impose that v lies in V of the power series, just that v lies in V of the Laurent series, just like here I don't impose that x lies in the standard Borel. And so with this definition of T_{G,V}, this T_{G,V} mapping to V of the Laurent series is analogous to T*(G/B) mapping to the nilpotent cone N. And you might ask: what about R_{G,V}, what is it analogous to? So R_{G,V} is analogous to something which you don't that often study in the usual Springer theory, but it's easy enough to study, which is just the set of pairs as before, where g is in the flag variety and x lies in both the given Borel and the standard Borel. And it's nothing but the union of the conormals to the Schubert cells; so you could write something like this. That's the analog of R_{G,V}. And part of the reason that, in this BFN story, we consider this R_{G,V} and not this Z, the fiber product (because you could similarly study the analog of Z, just take the fiber product of two copies of T_{G,V} over V of the Laurent series), is basically to avoid some infinite-dimensionality problems, in particular to avoid having to work with equivariant homology with respect to the group G of K, G of the Laurent series. So that's why it looks a little different from the usual Springer theory. But nonetheless, we can still study it. So we're going to be interested in fibers of this map, because that's analogous to this map here, where we have our usual Springer fibers. So given c in V of the Laurent series, we have the BFN Springer fiber: by definition, it's the set of points in the affine Grassmannian such that c lies in the lattice coming from g. So I call it F sub c. So it's relatively straightforward. OK. So let me state a theorem about it. But maybe just some examples first. One example you could imagine is taking c to be 0. In that case, F_c is the entire affine Grassmannian. This example we're not going to consider much; in fact, you'll see in a second that I'm going to do something which rules out this example. Another example which you might imagine is that this representation V could be just the adjoint representation. In this example, this F_c becomes just the usual affine Springer fiber. This is also not a case I'm going to be interested in. That's true, yeah; in this case c is in the Lie algebra over the Laurent series, and you get the usual affine Springer fiber. So those are some cases I'm not so interested in. Let me show you a typical example that I'm more interested in. Here's a more typical situation which I like. Another example, number three: we could take our V just to be Hom(C^k, C^n) and G to be GL_k. In other words, I just have this quiver. Then the choice of c doesn't actually matter very much. As long as, well, OK, let's take c to be actually in V, not in V of K, but just actually in V, and let's assume it's injective. If I do this, then F_c: well, a point in the affine Grassmannian of GL_k is just a lattice, an O-lattice in the Laurent series to the k. The condition here just translates to saying that after I apply this injective map, c of L lies inside the power series to the n; but that's actually just equivalent to L itself lying inside the power series to the k.
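So, written out, the BFN Springer fiber and the GL_k example just discussed are, roughly:
\[
F_{c}\;=\;\{\,[g]\in\mathrm{Gr}_{G}\ :\ c\in g\cdot V(\mathcal{O})\,\},\qquad c\in V(\mathcal{K}),
\]
\[
G=GL_{k},\ V=\operatorname{Hom}(\mathbb{C}^{k},\mathbb{C}^{n}),\ c\ \text{constant and injective}:\qquad
F_{c}\;\cong\;\{\,\text{full-rank } \mathcal{O}\text{-lattices } L\subseteq \mathcal{O}^{k}\,\}.
\]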
So in that way, you see that the c didn't actually make a very big difference to the fiber. In this case, this F_c is just the positive part of the affine Grassmannian. OK, we'll see some generalizations of this guy in a few minutes. No, not necessarily. OK, now I state my theorem. So, sorry, this is joint work with Justin Hilburn and Alex Weekes. OK, so the theorem: suppose that c is chi-stable, and remember we had this chi, the character of G; this is the first place it really seriously enters. Here chi-stable means stability over the field of Laurent series: I want the map from G over the field of Laurent series to V cross A^1 over the field of Laurent series, given by g goes to (g times c, chi of g), to be proper. That's stability. So c is chi-stable. You said c is an element of a quotient of V? No, c is just in V of the Laurent series; there's no need to invoke any quotient of V, and I don't think I will change that. And suppose c has trivial stabilizer. The second condition we can relax a little bit, but just for simplicity let me state it like this. And notice this rules out those two examples from before. Then under these hypotheses, we can prove some nice properties about the Springer fiber and the resulting modules. One: each component of F_c is a finite-dimensional projective variety. So in general, this group G will typically be something like a product of GLs, not a semisimple group, so the affine Grassmannian will have maybe many connected components, and therefore F_c will have many components. I mean, F_c might even have more components than that, but by a component of F_c I mean the intersection of a connected component of the affine Grassmannian with F_c. So there's no funny stuff going on here. Two: the homology of F_c, and this is just ordinary homology (oh, sorry, F_c is compact, so Borel-Moore and ordinary homology agree), is a module for this algebra. And if we put in C star equivariance, it'll be a module for the quantized algebra; if we don't, then for the ordinary algebra. And part three: this module lies in category O for A_{G,V}. The choice of our character chi allows us to define a notion of category O for A_{G,V}, because the algebra A_{G,V}, let me just say it in words, will be graded by the connected components of the Grassmannian of G, and the choice of chi gives us a map from that set of connected components to the integers. So that gives us an integer grading on the algebra and allows us to define a notion of category O. Yeah, in the quantized situation; when I write A_0, I mean non-quantized, no loop rotation. Yeah. No, there can still be; I mean, for example, if you took this situation but took the group SL_n, I think it's still stable. No, but homogeneous, you mean, to get the action of the loop rotation? I forgot the details of this; it's a good question. I don't believe that it's necessary. The definition is slightly indirect: because of the definition of R, the definition of this module structure has to be somewhat indirect, and you have to justify somehow why loop rotation acts on the Springer fiber. Yeah, I mean, even to define this thing, just to define the action itself. Yeah, maybe that's a good question. If c is constant, there's no problem. Yeah, maybe that's a good point. Okay, I'll think about that.
Okay, a priori, let's say we impose that c is constant if we want to get an action of loop rotation. Okay. Anyway, maybe some examples. So let me give some more examples. Suppose that my group is a torus, and this torus acts on V, which is C^n, by some characters a_1 up to a_n of this torus, and we choose some point c that we're going to use; I'll choose something like this, and we have to make sure we have enough non-zero entries here to make it stable. Which is not constant. But it's kind of equivariant under this C star, so I think it will work. But which C star? Okay, let me not, wait, which C star? In this example, this Springer fiber: for a torus, the affine Grassmannian is just a bunch of points, so we're just left with a set of points, well, not necessarily finite, but a discrete set. So these are the points z^mu, where mu ranges over the coweights of the torus. And then the condition that we're in F_c is nothing but the condition that c is in z^mu V of the power series. And this is equivalent to the condition that c_i is greater than or equal to the pairing between mu and a_i, for i equals 1 up to p. So if you think about this, this condition is defining some polytope, possibly unbounded, and F_c is just the set of lattice points in the polytope. And the algebra in this case, we have a torus, is the hypertoric enveloping algebra, whose modules were extensively studied by Braden, Licata, Proudfoot, and Webster. And we can show that for various choices of c, you can produce all, or many, of the interesting modules for this hypertoric enveloping algebra by this construction. Yeah, it's probably something like that. The problem is that I've probably messed this up because I've simplified the statement of the theorem: in general there's this flavor group floating around, and the actual statement of the theorem has the flavor group in it, so this C star action is sort of buried inside the flavor group. Anyway, that's why. Yeah. Yeah, I mean, in general, we first prove that this is isomorphic to the homology of some other space which does have an action of the group, and then take the equivariant homology of that space. Yeah, it depends on that, but not on the quantum parameter necessarily. If c is just constant, then not; but if c is not. Let me see: you just require the loop rotation to sit inside G of O times C star. Yeah, that would be it, yeah. But I think we even have a more general version where you can allow the flavor symmetry. Yeah. So we'll see. Okay. So another example of this affine Springer fiber, which is really our motivating example, is when (G, V) comes from a quiver. In this case, let me take c just to be a constant element, so not over the Laurent series, just in V. So this c corresponds to a framed representation of the quiver. Framed just means I have this auxiliary vector space around, and I also have a map from the auxiliary vector space to my representation. So for example, maybe I have a quiver like this, and then I have some framing here. So c corresponds to a framed representation. And then let's examine what F_c looks like in this case.
Let me write M for the underlying vector space of my quiver, but just the circled vertices. So my gauge group is G, the product of the GL_{n_i}, and F_c will be a subvariety of the affine Grassmannian of this group G; therefore a point of it can be considered as a collection of lattices, a lattice L_i in K^{n_i} for each i. So, OK, a priori it's a collection of lattices L_i. And using the fact that this representation is stable (oh, sorry, I choose chi to be given by the product of the determinants), and using Hiraku's previous work, we can prove that this L actually doesn't just lie in M tensor the Laurent series, but actually lies in M tensor the power series; I'll just write it as M over the power series. And then we have the additional condition that L is a submodule, a CQ tensor power series submodule. So in particular, it doesn't depend on the framing at all, on that part of the choice of this c. So this generalizes the example I mentioned a few minutes ago, where we just had one gauge vertex; we got rid of all of this and just had one thing, and then we had the arrow into the framing, but it didn't really matter, and we just had this positive part of the affine Grassmannian showing up, and it didn't depend on the choice of that injective map. So similarly, here it will not depend on the choice of the framing at all, and it's just the space of submodules of this module tensored with the power series. And one thing I like about this, sorry, wait: it's also for this particular choice of chi. For this particular choice of chi. It will depend on chi, right? Yeah. So, sorry, what's Q? Q is the quiver; CQ means the path algebra of the quiver. Sorry, M is this vector space. I fixed the point c in this V, and the choice of the point in V gives rise to a module structure, a quiver module: this M becomes a module for the quiver path algebra, and therefore M tensored with the power series becomes a module for the quiver path algebra tensored with the power series. So I'm just taking submodules of that module; it's just the space of all submodules of that module. So c is two pieces of data: one is the framing data, and the other piece of data is the module structure. And I'm saying the framing data doesn't matter; only the module structure matters. So this is, well, not a hard theorem to prove, but it's not an obvious statement. Yeah. And one thing that I like about this is that it gives us a way of taking this quiver representation and getting a module, a module in this case for our A_{G,V}, or for our A_0. And we know that Spec of this is the affine Grassmannian slice, and this is a truncated shifted Yangian. So in this way we get, in fact, a coherent sheaf on this affine Grassmannian slice. And in good cases, sometimes this coherent sheaf is actually the structure sheaf of some MV cycle Z. In general not, but sometimes it will be. So there's this idea of the homology of a space like this being the functions on some MV cycle; in particular, when this happens, then the homology of this F_c is the coordinate ring of some open subvariety of some MV cycle.
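In symbols, and with M denoting the direct sum of the C^{n_i} equipped with the CQ-module structure determined by c, the description just given and the expectation about MV cycles read roughly as:
\[
F_{c}\;\cong\;\bigl\{\, \mathbb{C}Q\otimes\mathcal{O}\text{-submodules } L\subseteq M\otimes\mathcal{O}\ \text{which are full-rank } \mathcal{O}\text{-lattices in each } \mathcal{K}^{n_{i}} \,\bigr\},
\]
\[
H_{*}(F_{c})\;\cong\;\mathbb{C}[Z^{\circ}]\ \text{ for some open subvariety } Z^{\circ}\subseteq Z \text{ of an MV cycle, in good cases.}
\]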
And this expectation, the idea that this should occur, comes from related work that I'm doing with Pierre Baumann and Allen Knutson. Based on considerations having nothing to do with Coulomb branches or anything, we independently reached the idea that for some quiver representations we should have an equality like this. In fact, we expect that this should occur for certain nice preprojective algebra modules, not just for quiver representations. OK. When do you actually get an MV cycle? Yes. No. The short answer to the question is that we expect this to be true if and only if the MV basis vector corresponds to the dual semicanonical basis vector. In fact, we found an example where these two basis vectors do not agree, basically by somehow checking that this equality doesn't hold. So we don't always expect it to be true, but we expect it to be true for a large class; for example, in SL_2, SL_3, and SL_4 it will always be true. OK. So in the remaining time, I'd like to explain one more thing, which is a relation between these BFN Springer fibers and quasi-map spaces. So let me recall something about quasi-map spaces. Given the G and V that we started with (how much time do I have left? Until quarter after? Yeah, OK, should be OK), we can consider maps from P^1 to the stack quotient V/G. So that's pairs (P, s), where P is a principal G-bundle on P^1 and s is a G-equivariant map from P to V. And we can consider what's called, well, what I'll call a quasi-map space, although in this guise it just looks like a based version of this mapping space. So I'll write it just like this: pairs (P, s) such that if I take s and evaluate it at infinity, by which I mean pick any point in the fiber (this P is a principal bundle over P^1, so I can look at the fiber at infinity), then this point is sent to c. But effectively I just mean that infinity is sent to the G-orbit of c. Sorry, c here is a constant element, and maybe I can assume that this c is also chi-stable. So just an example of this: if I take my simple example with Hom(C^k, C^n), and as usual G is just GL_k, then, OK, c will just be the standard inclusion of C^k into C^n (k is less than n; the standard inclusion, although I could take any injective map, really). And this quasi-map space will be nothing but pairs consisting of a locally free sheaf, so here F is a rank k locally free sheaf on P^1, which sits inside the trivial vector bundle of rank n; this map here is not necessarily an inclusion of vector bundles, just an inclusion of coherent sheaves. But we also have the condition, related to this based condition, that at infinity this map is an inclusion of vector bundles and in fact corresponds to the standard inclusion of C^k into C^n. And more generally, similar to this one, if we consider the quiver related to the flag variety of type A as before, so we take V to be the one coming from the quiver with vertices 1 up to n minus 1 and then a framing of n, then this quasi-map space has been well studied: it's collections of locally free sheaves F_1 up to F_{n-1}, where F_i is a rank i locally free sheaf, and they're all embedded in each other and ultimately embedded into the trivial vector bundle. And this space is called the Laumon space.
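In symbols (QM_c below is just shorthand for the based quasi-map space described above):
\[
\mathcal{QM}_{c}\;=\;\bigl\{\,(\mathcal{P},s)\ :\ \mathcal{P}\ \text{a principal } G\text{-bundle on }\mathbb{P}^{1},\ \ s\in\Gamma\bigl(\mathbb{P}^{1},\mathcal{P}\times_{G}V\bigr),\ \ s(\infty)\in G\cdot c \,\bigr\},
\]
and in the Hom(C^k, C^n) example this becomes the space of rank k subsheaves of the trivial rank n bundle on P^1 whose fiber at infinity is the standard C^k inside C^n.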
Now recall that F_c was the set of elements of the affine Grassmannian such that c is in g times V of the power series, and we can write such an element as a triple (P, phi, s), where P is a principal G-bundle on the disk, phi is a trivialization on the punctured disk, and s is a section, or we can think of it equivalently as a map from P to V, such that on the punctured disk it equals c. And in the usual way, if we have a principal G-bundle on the disk trivialized away from the origin, we can extend it to a principal G-bundle on P^1; and then because this section is just equal to c on the punctured disk, it will extend to a section of that principal G-bundle on all of P^1. So in that way, we get an embedding of this BFN Springer fiber into the quasi-map space. And in fact, these two spaces actually have isomorphic homologies: we have an isomorphism between the homology of this quasi-map space and the homology of our BFN Springer fiber, coming from this embedding. And the idea of the proof of this: what are the assumptions on c here? Just that it's constant and chi-stable. And the assumption you made before? Trivial stabilizer, too. Well, maybe you can get away with a little bit less; let's say that for now. And let me just explain a little bit of the idea of the proof. So I explained that there's an inclusion of F_c into this quasi-map space. And this quasi-map space actually maps down to a different quasi-map space, which is actually an affine scheme: it's the space of quasi-maps from P^1 into the quotient of V by the commutator subgroup G prime (here I use the affine GIT quotient), and then I take the stack quotient by G mod G prime, which is a torus. So this here is a torus, this here is an affine GIT quotient, and this whole thing here is an affine scheme, which is contractible. In this way, we prove that the homology here is the same as the homology here. So this generalizes... You mean that you want to say that this F_c is the fiber over 0? Yeah, yeah. This thing contains 0, and this F_c is the fiber over 0. And in this example of the Laumon space: when this is a Laumon space, L^nu or something, then this space down here will be a Zastava space, Z^nu. So it's generalizing the familiar picture of the Laumon space mapping to the Zastava space to this more general setting. And I guess this central fiber of this map from the Laumon space to the Zastava space was studied in an old paper by Kuznetsov. So this BFN Springer fiber is a generalization of that. OK, I'll stop there. Maybe one last thing; sorry, one important thing I just forgot is that in the homology of the Laumon space there's an action of sl_n, or of a Yangian, defined by Feigin, Finkelberg, Frenkel, and Rybnikov. So that's acting here. On the other hand, because of our BFN Springer fiber, we have an action of the same sl_n here. And it's not immediately obvious that those two actions coincide, but we expect they do. That's all. No need for more talking. Thank you. Bye. Bye.
|
Given a representation of a reductive group, Braverman-Finkelberg-Nakajima have defined a remarkable Poisson variety called the Coulomb branch. Their construction of this space was motivated by considerations from supersymmetric gauge theories and symplectic duality. The coordinate ring of this Coulomb branch is defined as a kind of cohomological Hall algebra; thus it makes sense to develop a type of "Springer theory" to define modules over this algebra. In this talk, we will explain this BFN Springer theory and give many examples. In the toric case, we will see a beautiful combinatorics of polytopes. In the quiver case, we will see connections to the representations of quivers over power series rings. In the general case, we will explore the relations between this Springer theory and quasi-map spaces.
|
10.5446/53507 (DOI)
|
Thanks. So I want to talk about some joint work with Iordan Ganev and David Jordan about what happens to this 4D theory that Sam talked about and David talked about at a root of unity. And let me be a little bit more precise about what I mean by that. So David mentioned the skein algebras. So let's say Sigma is a closed surface. David mentioned the skein algebras: these are certain associative algebras which depend on a parameter q. And what happens is that there's something interesting happening at a root of unity, which is the following. First of all, let's say q is 1. If q is 1, the skein algebra is just functions on the classical character variety (and if I literally mean skein algebras, then probably G is SL_2 here). And what happens at a nice root of unity is that this algebra acquires a center which is exactly the classical skein algebra. So in other words, this quantum skein algebra at a root of unity is a sheaf of algebras over the classical character variety. And then there is a conjecture, something called the unicity conjecture of Bonahon and Wong, that this algebra over its center is generically Azumaya. When I say generically Azumaya, this means that generically it's a vector bundle over the classical character variety and each fiber is a matrix algebra. OK, and again, this was formulated just for SL_2. This conjecture was proven two years ago by Frohman, Kania-Bartoszynska, and Lê. But the methods are not very explicit; for instance, it's generically Azumaya, but we don't know exactly where it is Azumaya, and it's completely unclear how to generalize it to other groups. So in my talk I want to explain the following things. First of all, I want to explain how to generalize the whole story to other groups. Second, the computation of the center is done by hand in the SL_2 case, and it's impossible to do that for other groups, so I want to explain why the center of the skein algebra at a root of unity is the classical skein algebra. And finally, I want to explicitly describe the Azumaya locus. OK. So, geometric fibers? Yeah, this is geometric fibers, sorry. OK. Yeah. So let me first talk a little bit about the classical geometry of character varieties, so I will explain exactly how it looks and where this Azumaya locus will be, and then I'll talk about the quantization and how this 4D theory behaves at a root of unity. OK, so a little bit about character varieties. Recall that the character variety was defined (David mentioned this) as the affine variety obtained by looking at the representation variety of the fundamental group and then taking the affine GIT quotient by the group G. OK, so a few things about this representation variety. Let me call it a theorem: if the group is GL_n or SL_n, this representation variety is irreducible. Another thing is that there is a nice locus inside this representation variety which is called the locus of good representations. It's the following; this is an open subset. Sorry. What I mean by good representations: I'll say a representation is good if, first of all, its G-orbit is closed (there's a G-action by conjugation on these representations), and second, the stabilizer of the G-action on this representation is the minimal possible, which is the center of the group. In the SL_n, GL_n case, yes; I'm not sure about the general case. So you have the good locus inside the representation variety, and you can look at its image inside the character variety. And here are a few things about this good locus.
So this is a smooth affine variety, and the quotient map from the representation variety to the good locus is a G-bundle. So it's nice; it's as nice as it can be. So this is just the basic geometry. Next I want to talk about Poisson structures. David already mentioned that there's a natural Poisson structure on the character variety. This is due to Atiyah and Bott in the differential-geometric setting; the algebraic setting is due to Goldman and a lot of other people. And this Poisson structure is a symplectic structure on the locus of good representations. So the locus of good representations is exactly going to be my Azumaya locus, as I'll explain. Let me say a few more words about this Poisson structure. Here I'm mostly talking about closed surfaces; for instance, this result is about closed surfaces. So let's say you have some closed surface Sigma, and it's written as a disk glued onto some open surface. Let me call this open surface Sigma naught, and this is just a disk. Then the fundamental group of Sigma naught is a free group, so you can write local systems on Sigma in the following way. There's going to be some moment map (I'll explain what this is), mod G. So here the moment map goes from the representation variety for this open surface, which is just G to the 2g, to the group, and it's just the product of commutators. And then you take the preimage of the unit element and then take the affine quotient by G. So I'll just write this as G to the 2g, Hamiltonian reduction by G. So this looks like Hamiltonian reduction; in fact it doesn't just look like Hamiltonian reduction, it is Hamiltonian reduction, but in a certain group-valued setting: you have a moment map valued in the group rather than in the Lie algebra, but there's a well-developed theory of group-valued moment maps. And so G to the 2g has a natural Poisson structure; this goes back to Semenov-Tian-Shansky and to Fock and Rosly. And then the Poisson structure you get on the character variety is just the one you obtain from Poisson reduction. It's a product with some cross relations: you should think about this as being G squared times G squared and so on, so in each G squared there's some complicated Poisson structure, and then each of them is a G-Hamiltonian space, and then you take their fusion. And so there are some cross relations because you take the fusion. All right, so next I want to talk about quantization. So let me begin by just explaining what I mean by representations of the quantum group at a root of unity and what exactly I mean by this 4D TFT at a root of unity. So let's say G is some connected reductive group, plus some extra data that will be implicit. Then you can talk about representations of the quantum group. This is going to be a category: these will be vector spaces equipped with extra structure, namely vector spaces equipped with a grading by the weight lattice of the group and the operators of the quantum group. By this I mean you have these E_i^(n) operators for each simple root i and each natural number n, and you should think about these as divided powers: you can think of E_i^(n) as being E_i to the n divided by the q-factorial of n, and the same for the F_i. And they satisfy standard relations, like the quantum Serre relations and the commutation relations between the E's and the F's. So this has a natural braided monoidal structure; let me even say ribbon structure. Okay. So this is the category that the 4D TFT attaches to the point.
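For concreteness, the divided-power operators just mentioned can be thought of, roughly, as
\[
E_{i}^{(n)}\;=\;\frac{E_{i}^{\,n}}{[n]_{q_{i}}!},\qquad
F_{i}^{(n)}\;=\;\frac{F_{i}^{\,n}}{[n]_{q_{i}}!},
\]
which make sense at a root of unity even though the naive right-hand sides involve division by quantities that vanish there.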
Now let me remind you of the category that it attaches to the circle. So, to the circle you attach the monoidal category of quantum Harish-Chandra bimodules, which is modules over some algebra in Rep_q G; this is the reflection equation algebra O_q(G). An important point about this reflection equation algebra is that it has a certain isomorphism: for every representation V, there is an isomorphism tau between O_q(G) tensor V and V tensor O_q(G). This isomorphism is not just the braiding in general, and I'm not going to explain exactly what it is. Okay. So using this formalism, I can now define, well, I want to define quantizations of these character varieties, and the character varieties were obtained by Hamiltonian reduction, so I want to explain what quantum Hamiltonian reduction is in this context. To talk about quantum Hamiltonian reduction, I need to talk about quantum moment maps, so let me just define those. Let's say A is some algebra in Rep_q G. A quantum moment map is going to be a map from O_q(G) into the algebra, an algebra map from the reflection equation algebra into A, such that the following diagram commutes. Basically, you can look at the left O_q(G)-action on A and the right O_q(G)-action on A, and you want them to be the same. Diagrammatically it means the following: you can put O_q(G) on the right using this tau isomorphism, so you have this diagram which expresses the left action of O_q(G) on A and the right action of O_q(G) on A, and I want them to commute via this tau isomorphism. Let me maybe just remark that you can replace Rep_q G by Rep G and O_q(G) by U(g), and you recover the usual notion of quantum moment map. So the usual quantum moment maps go from U(g) into your algebra, and they are equivariant? Yeah; I'll get to that. I haven't quantized the character varieties yet; it's going to be a theorem that they can be obtained by quantization. So, first of all, if you have a quantum moment map, you can do quantum Hamiltonian reduction, which means the following; I'll just write it as the quantum Hamiltonian reduction of A by U_q(g). By definition, like in the usual setting, you divide A by the moment map ideal (I'll just write this as a relative tensor product) and then take U_q(g)-invariants. Then, using the quantum moment map equation, you can see that this is actually an algebra. Okay. So, where do quantum moment maps arise, and how are they related to quantum character varieties? Here's the theorem of Ben-Zvi, Brochier, and Jordan. It has two parts. First, suppose that I have some module category over this category of quantum Harish-Chandra bimodules, and it has some distinguished object. So you have an action of quantum Harish-Chandra bimodules; in particular, this gives you an action of Rep_q G, so this is a module category over Rep_q G. In particular, you can take the internal endomorphisms of this distinguished object over Rep_q G. This is going to be some algebra in Rep_q G. Then the claim is that this carries a quantum moment map. That was stated in our paper and proved correctly, yeah. Also, there's a kind of converse: if this distinguished object is a generator of M, so that M is modules over this algebra, then this quantum moment map is actually all you need to get the structure of a module category over quantum Harish-Chandra bimodules. And second, how this is related to quantum character varieties.
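To record the two reductions in symbols before part two: the classical picture from earlier and the quantum Hamiltonian reduction just defined are, roughly, as follows; in the second line the one-dimensional O_q(G)-module in the relative tensor product is the one playing the role of the identity element of G, and I am suppressing exactly which character that is.
\[
\operatorname{Ch}_{G}(\Sigma)\;=\;\mu^{-1}(e)\,/\!\!/\,G,\qquad
\mu\colon G^{2g}\to G,\quad (a_{1},b_{1},\dots,a_{g},b_{g})\mapsto\prod_{i=1}^{g}[a_{i},b_{i}],
\]
\[
A\,/\!\!/_{\mu_{q}}\,U_{q}(\mathfrak{g})\;=\;\bigl(A\otimes_{\mathcal{O}_{q}(G)}\mathbb{C}\bigr)^{U_{q}(\mathfrak{g})}.
\]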
So recall that the category attached to the surface can be written as a relative tensor product of the category attached to the open surface, tensored with Rep_q G, which is the category attached to the disk, over the category attached to the circle. So I can write it as a relative tensor product. And then the endomorphisms of the distinguished object in this closed-surface category, which is the skein algebra, are given by quantum Hamiltonian reduction. So let me just write A // U_q(g), where A is the endomorphisms of the distinguished object in the open-surface category. Okay. So in particular, yeah, you can write the skein algebra at a root of unity as a quantum Hamiltonian reduction of this explicit algebra A. The first equality there was the theorem of Cooke. Yes, or if you like, that's what I want to define; that's going to be my definition. You know, one has to be careful: Cooke's theorem is for q generic. Okay. So this is the claim that I made. Yeah, that in general. Yeah. Okay. Yeah. So the upshot is that these skein algebras can be written as quantizations of the open, framed character varieties, which are just functions on G to the 2g, Hamiltonian reduced by U_q(g). Okay. So far, this works for any q; q doesn't have to be a root of unity, it can be generic. All right. I mean, did you define this thing, did you say it exists, in part one or in part two? In part two: factorization homology gives you, well, these categories are canonically pointed, there's always a distinguished object, and on the classical level this is just the structure sheaf. Okay. Yes. So, as I said, this is so far a discussion that works for any q. Now let me concentrate on q being a root of unity. So you are just using this as the definition at a root of unity? Yes. Yeah. In the case of a root of unity, it's not known to coincide with the original thing, but the expectation is that it does; the theorem is just not there. Yeah. No, I claimed that, sorry, I said that the categories are not known to be equivalent. I also claimed that these algebras are more stable: they will match once the categories match, and that works. Okay. So now let me concentrate on the case where q is a root of unity. You have to put some assumptions on q. I will not say precisely what they are, but q has to play nicely with G. Okay. So if q is a root of unity, there's something extremely special that happens for the category of representations of the quantum group. So let me recall that if C is a braided monoidal category, its Müger center is the full subcategory of objects in C whose double braiding is trivial. Okay. So you look at objects whose double braiding is trivial. In particular, this Müger center is not just braided monoidal but symmetric monoidal, by definition. You mean such that this is true for any Y? Yeah. So this is very similar to saying that you have an associative algebra and you look at elements which commute with every element. So this is a higher version of a center. Okay. So in particular, the Müger center of C is symmetric monoidal. Is it related to the Drinfeld center? No, that is a lower version of a center: if you have a monoidal category, you talk about the Drinfeld center; if you have a braided monoidal category, you have the Müger center.
So the Drinfeld center will be braided monoidal; this one is higher, and it is symmetric monoidal. Okay. And here's what happens at a root of unity: the Müger center (and again, I'm assuming that the root of unity is nice) is actually equivalent to representations of the classical group. So you can ask what happens if the root of unity is not nice. If the root of unity is not nice, what happens is that there's some kind of Langlands duality going on: you don't get representations of the group, but you get representations of the Langlands dual group, there might be something funny happening with the weight lattice, or sometimes you get representations of a supergroup if you have an even root of unity. Say it again? You're asking whether Rep G acts on Rep_q G? Sort of, yeah. So this is exactly it: the action of Rep G on Rep_q G is given by this quantum Frobenius map. The idea is that here we have some weight lattice; over there, the weight lattice is scaled by l, the order of the root of unity, and these generators E and F, in some sense you take their roots, and they give you the action of the classical category. Okay. I don't know the history of this; the paper I learned it from is Chris Negron's paper, which is extremely recent, but it might be older, depending on which root of unity you're looking at. No, by a nice root of unity I mean it's odd; if the group contains a factor of type G_2, then it's not divisible by three, and things like that; it doesn't divide the determinant of the Cartan matrix. So. Okay. Yes. So this is the first thing to say. The way you can think about this geometrically is the following. You can think of it as saying that you have a family of braided monoidal categories over BG, a sheaf of braided monoidal categories over the classifying stack. And if you have a sheaf of braided monoidal categories over something, it's natural to ask about its fibers, so you can ask what its fiber over the base point is. Well, there's a theorem of Arkhipov and Gaitsgory that if you look at the fiber of this sheaf of braided monoidal categories, it's very explicit: it's given by representations of what's known as the small quantum group. So this is called the small quantum group. Yeah. So, if you know what the quantum Frobenius map is, it's the kernel of this. No, I'd rather say that Negron's result implies that this is factorizable. So this small quantum group is a finite-dimensional Hopf algebra, a finite-dimensional Hopf algebra which is factorizable, which is going to be important. So let me just draw a more precise picture of what's going on at the root of unity. Schematically speaking, you can say that this 4D TFT at the root of unity is a family of 4D TFTs parameterized by the same TFT where the value of q is 1. So here we have Rep_q G, which fibers over BG; here we have the 4D TFT, which fibers over the classical TFT. And the difference between the quantum and the classical is not big: it's given by something factorizable, so we're going to find an invertible 4D TFT, the one associated to the small quantum group. And this is an informal explanation of why we're getting the Azumaya property on the surface. But let me get there. Okay. So, invertible means the following. This is a 4D TFT; let's look at, maybe, three-manifolds. So, first of all, it's fully dualizable, so it's defined on everything.
On four-manifolds, all the numbers are nonzero; on three-manifolds, all the vector spaces are lines; and on two-manifolds, all the categories are invertible in the appropriate sense. Okay. Right. Okay, so what's the story now? The upshot, if you just compute this on the surface, is that this quantum skein algebra is an associative algebra, and it contains a central subalgebra, which is the functions on the classical character variety. The classical character variety is a Poisson variety, so this is a Poisson algebra; so it's a central Poisson subalgebra. Okay, and then you can ask: you have an algebra and a central Poisson subalgebra; what kind of compatibility can you expect between the Poisson bracket on the center and the associative algebra structure on the whole algebra? And this is what's known as a Poisson order. So it's an associative algebra A with a central subalgebra Z inside A, such that A is finitely generated as a Z-module, and in addition you have an extension of the Poisson bracket to the whole algebra, by which I mean a linear map from Z to derivations of A which preserves Z and restricts to the Poisson bracket on Z (so Z acts on itself by the Poisson bracket). You might require it to be a homomorphism of Lie algebras; I don't. Okay, so let me explain the source of Poisson orders and why they're useful. The source of Poisson orders is the following theorem, which goes back to Hayashi and to Brown and Gordon, who introduced the notion of Poisson order. It's the following. Suppose you have A_t, a flat family of associative algebras, and Z inside A_q is a central subalgebra at some specific value q of the parameter t. Yeah, you have to assume the finite generation of A_q over Z, but this will be implicit. Then there exists a map from Z to derivations of A_q, and if it preserves the central subalgebra, this gives a Poisson order. If you take Z to be the whole center, then it automatically preserves it. You have to be a bit careful: this map exists, but it's not unique; you can change it by inner derivations, and I think this will spoil the Lie algebra property. If Z is the whole center, I think it is going to go into the... no. We can talk about this afterwards. So this is the source of Poisson orders, and here is why they're useful; it's the following result of Brown and Gordon. Suppose that (A, Z) is a Poisson order, and let's say you have a symplectic leaf inside the spectrum of Z; let me call it Z_0, so this Z_0 is an open symplectic leaf. Then the claim is that A over this open symplectic leaf is actually a vector bundle, and any two fibers are isomorphic as algebras. So the proof is in the analytic topology; the statement doesn't talk about topology. Yes, there's a flat connection along the leaf, only along the leaf. No. So D gives you the connection; I mean, maybe it's not flat, but let's say it's a connection, and using this connection you can do parallel transport. This is a fairly short paper if you want to look at it. So the upshot is that to prove that something is Azumaya over the corresponding open symplectic leaf, it's enough to prove it generically, and then this machinery automatically extends the Azumaya property over the whole symplectic leaf. So, I'm not quite done yet. The problem is that this proposition doesn't directly apply in our case. The problem is that we don't know that the skein algebras are flat.
So that's one thing we don't know; and the second thing we don't know is what precisely the center is, so it's not easy to check this condition. And the idea is that instead of doing something after the Hamiltonian reduction, I'm going to do something before the Hamiltonian reduction, and I'm going to do Poisson reduction of the Poisson structure. So the goal will be to prove that the thing before reduction is actually a Poisson order, and then prove some result about Poisson reduction of Poisson orders. Okay, I don't have a lot of time left, so let me just go through this schematically and explain the main tools and steps that we use. Okay, so it's easy to prove that before the reduction you have a Poisson order. Okay, so before the reduction, let's try to prove that this is actually Azumaya. So I need to know what the open symplectic leaf inside G to the 2g is, and I need to know that at one point the fiber is a matrix algebra. So the second result is that we prove a general fact about the symplectic leaves of Poisson G-varieties: the open symplectic leaf is given by the preimage of the big cell, where the big cell is the product of the lower-triangular and the upper-triangular subgroups. Okay, so now we have a description of the open symplectic leaf, so now it's enough to check the Azumaya property at some point in the open symplectic leaf, and a natural point at which to do this is just the identity element in G to the 2g. Yeah: at the base point, the category you get, this Z_q, is basically the factorization homology over the surface of representations of the small quantum group, and as I said, this category is factorizable, so this formally implies that it is invertible, and you can conclude that the corresponding fiber of the skein algebra is a matrix algebra. All right. So, so far, I know something before the reduction: I have a generically Azumaya algebra, and I know exactly what the Azumaya locus is; it's the preimage of the big cell. So what do we do with the reduction? Well, the idea, and I like writing this in terms of reduction, is that you can do this Hamiltonian reduction in stages. And maybe let me mention that this idea, to do quantum Hamiltonian reduction in stages, and many other parts, are due to some people in the audience; I'm thinking of Bezrukavnikov and Ginzburg, and of Varagnolo and Vasserot. They used a similar technique, at least at this stage. Okay, and what I mean by doing it in stages is that you can first take this quantum algebra and restrict it to the classical moment map ideal, then do Hamiltonian reduction with respect to the small quantum group, and then take G-invariants. Okay. So there are just a few more steps left. Okay, so next we prove that the Poisson order structure actually descends. What this means is that the quantum character variety is a Poisson order over the classical character variety. So to prove that it's Azumaya, again, it's enough to do it generically. Upstairs, we know exactly the Azumaya locus, so what you need to do is understand what happens after you do this Hamiltonian reduction with respect to the small quantum group. And another fact we prove, a lemma, is that the Hamiltonian reduction of an Azumaya algebra is generically Azumaya. Okay, so you get that this Hamiltonian reduction is actually generically Azumaya over the classical character variety. So the conclusion is that this quantum skein algebra, or the quantum character variety, is going to be Azumaya over an explicit locus, which is the locus of good representations.
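So, schematically, the Azumaya loci found above and below the reduction are:
\[
\text{before reduction:}\quad \text{Azumaya over the preimage of the big cell } \mu^{-1}(B_{-}B)\subset G^{2g}\ \text{(the open symplectic leaf)},
\]
\[
\text{after reduction:}\quad \operatorname{Sk}_{q}(\Sigma)\ \text{Azumaya over } \operatorname{Ch}_{G}(\Sigma)^{\mathrm{good}},
\]
with the reduction performed in stages: restrict to the classical moment map ideal, reduce by the small quantum group, then take G-invariants.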
Let me mention that for G being SL_n, this is exactly the smooth locus, so this is as good as you can get. Okay, and I'm going to stop here.
|
Character varieties of closed surfaces have a natural Poisson structure whose quantization may be constructed in terms of the corresponding quantum group. When the quantum parameter is a root of unity, this quantization carries a central subalgebra isomorphic to the algebra of functions on the classical character variety. In this talk I will describe a procedure which allows one to obtain Azumaya algebras via quantum Hamiltonian reduction. As an application, I will show that quantizations of character varieties at roots of unity are Azumaya over the corresponding classical character varieties. This is a report on joint work with Iordan Ganev and David Jordan.
|
10.5446/53509 (DOI)
|
So my talk is going to be kind of dual in some sense to Joel's talk, so it's also going to be about the question of how to understand the representation theory of Coulomb branches. But it's going to be dual in the sense of it's going to be looking for more things like projectives rather than things like symbols. So let me sort of introduce a general algebraic setup, which I think it's good to think about Coulomb branches in terms of. So let A be a k algebra, of course here k is a field, and let R inside it be a commutative subalgebra. We can also have a fun discussion about what non-commutative algebras you could put here instead, but commutative is a good start. So whenever you have a setup like this, there's a natural set of modules, category of modules you can study, which is quite interesting. So A module is Gelfand Settlin, and I mean this is in the grand tradition of naming things after people who had extremely little to do with them or at least had some very distant connection in the past. So some people might prefer to call these harsh chandr, which I think would follow the same tradition or weight modules, which I think would be confusing. So I'm just going to use this terminology, and I hope people will forgive me. And of course, that's a lot to write, so I'm going to write Gt. If it's locally finite for B, it is R locally finite. So that means that for all elements of this module, if I look at R times it, and then I look at the dimension of that as a k vector space, it's finite. So of course, this includes all finite dimensional modules, but will also include things like this category O that Joel mentioned. All right, so how do we study modules like this? The one reason I like this definition is that there's an obvious way to break this module up into pieces. If I have a locally finite module over a commutative algebra, then it naturally breaks up into a sum of things that are killed by powers of maximal ideals. So given lambda in the set of maximal ideals of R, I can consider sort of the quote unquote weight space, which is all the things in V that are killed by some power of the maximal ideal. And for whatever reason, I like to think of lambda as something floating off in space, and then it has some particular maximal ideal attached to it. You could sort of say, like, oh no, this maximal ideal, that's the point in max spec. All right, and a very easy lemma. V is R-Gelfan-Setlin, if and only if V is the direct sum of these subspaces. Do I want some finiteness condition on R? Oh, oh god, I always get stuck on these stupid things. Oh, right, I want R to be finite type. And if R is finite type, then this quotient is a finite extension of k. And so it's, yeah, anyways. All right, you're up. Yeah, finitely generated as a k module, as a k algebra. Yeah. All right, so what are examples that might be interesting to think about? So you might choose A to be U of G and R to be U of H. In that case, an R-Gelfan-Setlin module, it's tempting to say weight modules, but that's not quite right. I'm taking a power of this maximal ideal. So it's generalized weight modules. All right, another very interesting case to consider is A being U of GLN and then R being the Gelfan-Setlin subalgebra, which is what this whole setup is named after. So that's the subalgebra generated by the center of the universal enveloping algebra of GLK for k equals 1 up through. All right, and one final example to consider is, well, A is the Coulomb wrench for some group and some representation. 
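As an aside, here is a minimal LaTeX transcription of the definition and lemma just stated; the symbols V, v, m_lambda, W_lambda are chosen here for readability and are not necessarily the ones on the board.
\[ V \in A\text{-Mod is Gelfand--Tsetlin (GT)} \ :\!\iff\ \dim_{k}\,(R\cdot v) < \infty \ \text{ for all } v \in V, \]
\[ W_{\lambda}(V) \;:=\; \{\, v \in V \;:\; \mathfrak m_{\lambda}^{\,N}\, v = 0 \ \text{for some } N \,\}, \qquad \lambda \in \operatorname{MaxSpec} R, \]
\[ \text{Lemma: if } R \text{ is a finitely generated } k\text{-algebra, then } V \text{ is GT} \iff V \;=\; \bigoplus_{\lambda \in \operatorname{MaxSpec} R} W_{\lambda}(V). \]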
And unfortunately, Joel and I didn't coordinate notation before our talk. I'm using the notation of BFN's paper that N is a module over this group G, and I'm using V for something else. So I apologize about that, because Joel, of course, used the notation from my paper. Anyways. So there's a natural commutative subalgebra to consider in here. OK, yeah, well, this is a pretty shitty setup, so I apologize. So A is the Coulomb branch, and I can take R to be the commutative subalgebra given by the equivariate parameters for the group. So R is the equivariate homology of a point for the group, which includes inside the Coulomb branch, of course, because it's the equivariate homology of something else. All right. So a basic question, what are the simple GT modules for a given A and R? So this one of what are the different weight modules for a semi-simple lead group is one that's elicited a lot of study. And in part, people started thinking about this example because this one was too hard. And they said, well, let's maybe add this extra condition, the whole Galfan set when subalgebra acts locally finitely. And that was a problem that also attracted a lot of attention over recent years. But it's one where people got a little bit stuck. And let me try to explain where they got a little bit stuck and how I think we can get unstuck now. So there's a general approach to understanding this category. And I think part of the reason this is such a nice category to consider is we have such a nice approach. And this approach, so definition, we say A R is R is Chandra. If for all elements of A, the bimodule RAR is finitely generated. Well, it's obviously finitely generated as a bimodule. But this is that it's finitely generated as a left or right module. So this is sort of a size criterion that A is non-commutative, but the way it not commutes past things in R is not too bad. There's sort of only finite amount of extra information when you multiply in the left as well as on the right. Look, that's what it says in the paper. This is the definition from the paper. What am I supposed to do? And given this theorem, droste futone and osienko, is that for a Harchandra pair, there's sort of a nice way of describing these guys. And let me just say what it is in words. You look at these, you think of them as functors, and then you say, what are the natural transformations between these functors? So somehow, Galphons-Setlin module is the same thing as saying what all these weight spaces are, and then an action of all the natural transformation between the functors of having weight spaces. So the category of A modules, which are Galphons-Setlin, is equivalent to the modules over some other category, and those should be discrete, so this other category is going to have its apology. And the other category is you just think about, well, what kind of elements of my algebra would give me natural transformations between these weight functors? Yes. Yes. So the sort of important consequence this Harchandra hypothesis has is that if you are generated by a finite dimensional R invariant subspace, then you are a Galphons-Setlin module. So that's sort of the important thing you need to make this theorem work. So here C is a category where the objects are just this max spec of R, and the hams are given by an inverse limit. So the problem from lambda to mu is I take the inverse limit of A mod, let me get this right, M mu to the k plus M lambda to the kA. 
So I'm multiplying by one of these, oh no, I did the wrong one, multiplying by one of these maximal ideals on the left and by the other one on the right. So this quotient, this is the natural way to get a natural transformation between a kind of truncated version of this weight functor, where you require a particular power to act by zero, and as I let k go to infinity, I get something that's defined on the whole weight space. So as an inverse limit, this has a topology, and I only want representations where that action is continuous when I give the representation the discrete topology. I'll just write discrete, and that's what I'll be doing. All right, so this is a very nice theorem, but it's only as nice as your understanding of what these homomorphism spaces are. So let me tell you there's sort of a slight generalization of this theorem, which is that if I want the simple Galphons-Setlin modules such that a particular one of these weight spaces is not zero, then those are actually in bijection with simple discrete modules over what I'm going to call a hat sub lambda, that's from lambda to lambda. All of the examples I said in the beginning are harsh on the. So for example, if you wanted to understand all weight modules, like in the usual sense over Li algebra, simple ones, where a given weight space was not zero, this tells you there is some algebra out there, but those are the same as simple modules over. But that algebra is horrible and infinite dimensional. It's very large. In particular, its finite dimensional modules are all the different weight spaces of all the different finite dimensional guys, plus maybe some more. So it's going to be enormous. In particular, it should have infinitely many simple representations of all kinds of different dimensions, going to be a very complicated algebra. It's not clear you gain anything from thinking this way. But for Coulomb branches, there's a very nice answer to this question. So I should just pause for a moment and ask, are there any questions? So for the carton inside of U of G, yeah, I don't know anything interesting about this algebra that you couldn't figure out some other way. And I mean, I would say somehow what's going wrong there is that U of H is too small. In particular, it's not a maximal commutative subalgebra. Whereas the Gelfand-Setlin subalgebra is, and this guy is also a maximal commutative subalgebra. Maybe we should find out that two is indeed a special case of three. Sorry? Yes. Two is indeed a special case of three. But you learn very interesting things. If you only care about two, it's worth thinking about three. So I'm going to tell you a theorem about Coulomb branches that tells you something interesting and new about U of GLN. Sir, a random string from last lesson, is it just a bijection of sets? No, it's, I mean, so it is a bijection of sets, but it's a particular one. The lambda weight space of this symbol is a simple module over this guy, and that gives you the bijection. So I mean, you can strengthen this if you looked at all modules over a lambda hat. You would get the category of Gelfand-Setlin modules modulo those that have trivial lambda weight space. So you do learn something about extensions, but when I only talk about simple modules, yeah, the only structure there is a set. But it's a bijection which is very natural. So you just look at this guy acting on the lambda weight space. Here? Discrete. All right, cool. 
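As an aside, here is a hedged LaTeX restatement of the Harish-Chandra condition and of the Drozd–Futorny–Ovsienko-style equivalence being used; the exact left/right placement of the two maximal ideals in the Hom-formula follows the speaker's on-the-spot correction and should be checked against the original paper.
\[ (A,R)\ \text{is a Harish--Chandra pair} \ :\!\iff\ R\,a\,R \ \text{is finitely generated as a left and as a right } R\text{-module, for every } a \in A, \]
\[ \operatorname{Ob}(\mathcal C) \;=\; \operatorname{MaxSpec} R, \qquad \operatorname{Hom}_{\mathcal C}(\lambda,\mu) \;=\; \varprojlim_{k}\; A \Big/ \bigl(\mathfrak m_{\mu}^{\,k} A \,+\, A\, \mathfrak m_{\lambda}^{\,k}\bigr), \]
\[ A\text{-Mod}_{\mathrm{GT}} \;\simeq\; \mathcal C\text{-Mod}_{\mathrm{discrete}}, \qquad \bigl\{\text{simple GT modules with } W_{\lambda} \neq 0\bigr\} \;\longleftrightarrow\; \bigl\{\text{simple discrete } \widehat A_{\lambda}\text{-modules}\bigr\}, \quad \widehat A_{\lambda} := \operatorname{Hom}_{\mathcal C}(\lambda,\lambda). \]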
Sorry, this equivalence is you look at all the weight spaces, and these are the natural transformations between the functors of taking weight spaces. So I didn't want to get down to the details of it. You can go read their paper, but I mean, this guy naturally acts on the weight space, and it's that action. I'm not sure if you answered the question. The question is, are the C modules, are they continuous? They're discrete. There's a word discrete here. So yes, they're continuous, and they have the discrete topology. Right? I mean, this is just some way of saying that some power of this ideal is zero, but I don't want to tell you which one. So I don't want to fix any particular one, but I want some power of it to be zero, and that's the same as saying that the inverse limit acts on it with the discrete topology. All right, anything else I screwed up? All right, very good. OK, right. So for Coulomb branches, there's a very nice general answer. So one thing to note for Coulomb branches is this max spec of r. I can really think of as semi-simple elements of g times c. So this is the Lie algebra of the group g times c star, or this is the rotation c star. And this is modulo conjugation. So sorry, this is for the Coulomb branch. So I'm going to specialize from now on and only talk about Coulomb branches. And all other interesting examples I know are actually subcases of this one. So here I'm just writing out for you what is max spec of the equivariant co-amology of a point for g, right? That's the torus of the group, mod the vial group, which is actually the same as semi-simple elements mod conjugation, and I forgot here that I should have put a c star, because I also want the loop rotation. So when I think of the Coulomb branch as equivariant co-amology, I have this equivariant parameter h for c star. But when I get an actual non-commutative algebra like the universal enveloping algebra, I specialize h to be 1. So that means my, that's the same as saying that my semi-simple element is of the form x, 1. So when I exponentiate it, I get the usual rotation action of c star, and then I've somehow adjusted that by some co-character in the group. All right, and out of this data, I'm going to get a description of this algebra for the Coulomb branch. Yeah, yeah, the adjoining action. How do you get the element of the speck of r from that? Element of the speck of r. Yeah, so I'm looking at, you know, lambda plays a role here and it plays a role here. This is a theorem for, yeah, it's the whole category. So for every maximal ideal, I assign a r module to it. And the discrete thing is just that that r module factors through some power of the maximal ideal, but I don't want to tell you which one it is. All right, so defining this requires a little bit of notation. So remember, I have this representation n that I fixed from the start. This is part of the data of how I construct the Coulomb branch. So having chosen an element of the Lie algebra, I have a subspace in here where this is the subspace where x has integral weights. And I have a further subspace in there. So I'm going to call n lambda minus, and this is where x has non-positive weights. No, I was doing that right. So lambda is the result of the same thing? Well, not quite, but yes. So x is lambda thought of as an element of the group. But you know, x is only well-defined up to conjugation, but if I replace x by its image under the adjoint, then I'm just acting by that element of the group on n lambda. 
All right, and similarly, if I do this in the adjoint representation, I'll get a subgroup which is exponentiate the subalgebra that has z weights under the adjoint action. And then inside here, well, if I was going to be really consistent, I would call this g lambda minus, but I'm actually going to call it p lambda. So this is a parabolic that, again, comes from looking at the adjoint action of this x and looking at where it has non-positive weights. All right. So let me then make one more definition, let y lambda be the sort of the associated vector bundle to g lambda crossed over p lambda with n lambda minus. So let me just remind you what this is. This is I take a coset of p lambda. I take an element of n. So the condition I need is that n is in g acting on n lambda minus. All right. What was the point you were making with the max vector a star g plus b? What was that we were saying about the max vector b? So those are the weights that the commutative subalgebra inside the Coulomb branch could act by. So when I want to analyze a representation of the Coulomb branch, I want to look at this commutative subalgebra acting on it. And if it's locally finite, then that's going to break up into some sum over maximal ideals. And the different options for the maximal ideals are exactly these conjugacy classes. And if I've specialized h equals 1, then these conjugacy classes where the part along, so the component along the loop direction tells you, oh, when I mod out by that maximal ideal, what scalar does h get specialized to? And I want to look at the case where h is equal to 1. All right. So theorem. So I should give some credit for this to Hiraku because he sort of explained this way of stating it to me. You will search in vain for the paper of his that proves this, but it is in my paper. This algebra A hat lambda is the Borel-Mor homology, equivalent with respect to g lambda of the fiber product y lambda over m lambda of y lambda. And of course, more generally, if I take comm from lambda to mu, that's what's tempting to think that it's sort of always this guy where I just take this product. But you'll note there's some weird asymmetry. This g lambda and n lambda, well, shouldn't I take g mu and n mu? But this is only true if lambda and mu are conjugate under the affine-vile group. All right. So that is to say I can choose them in a maximal torus such that they differ by an honest co-character in that maximal torus. I just get 0 if lambda and mu are not conjugate under the affine-vile group. And if they're conjugate under the affine-vile group, well, these weights change, but they change by an integral amount. So actually, this g lambda is equal to g mu and this n lambda is equal to n mu. All right. And of course, there's something missing here. I need a hat here. And that just means I complete with respect to grading. OK. So all questions about modules over the Coulomb branch that have this Gelfand-Setland property are actually questions about this finite dimensional convolution algebra. Yes? Does that have something to do with the multiple terms of the formula? Not obviously. So I mean, an exercise to the audience is figure out how Joel's talk was secretly using this. So I suspect this complaint about the fact that there was non-equivariant homology instead of equivariant homology is that, in fact, of course, there's an obvious quotient of this algebra. Right? 
If I remove the equivalence, then I will get a homomorphism from this guy to just the usual Borrel-Mor homology of this fiber product. And it's easy to work out. Every simple factors through this map. So I wasn't able to confirm this since I haven't had much time since I saw Joel's talk. But I suspect very strongly what was happening is you guys were defining a module over this and then sort of pulling back and going through this equivalent without realizing that was what you were doing. Sorry? So you are completing after some factor like that. Well, this is by the fact that it's a discrete module. So if I have a module over this completion, which has the discrete topology, then that means some power of the obvious maximal ideal in the completed equivalence homology of this g lambda has to act trivially. And if you're simple, that means that it has to be the maximal ideal itself. So I mean, there's secretly some shift here. So that the obvious maximal ideal over the homology goes to the maximal ideal that corresponds to lambda here. What is g lambda? G lambda. So it's the subgroup whose Lie algebra is the space where x has integral weights under the adjoint action. If you have a co-character, you can think of that as a point in spec of the equivariate co-homology for a point. So the sort of important thing here is, for example, let's see, what can I safely delete? Not much. This is getting pretty tight. So yeah, that's the best way to say this. So for example, if lambda is generic, then this is trivial. So if you have sort of a totally generic central point in spec, then this guy becomes trivial and this statement just says that you get a copy of c. And one interesting corollary. No, no, no, not if lambda is zero. If lambda is generic, then g lambda is trivial. If I look at every root eating lambda and I don't get an integer, then this means that this guy is just, sorry, g lambda isn't trivial. It's the torus. And so I just get the equivariate co-homology of the torus and I complete with respect to the obvious maximal idea. I mean, x is, yeah, x lambda are the same thing. So no, lambda is x1. But these live in the same space thought of in a slightly different way. This guy lives in spec of h star g of a point. And this guy lives in g modulo the adjoint action, or I should say g semisimple mod the adjoint action. And it's left as an exercise to the viewer to remember how these things are isomorphic. Yes, no, no, it's not the centralizer of lambda. It's the stuff that lambdas integral with respect. Oh, maybe you're right. Wait. If you take the exponential of that element of the Lie algebra, then yes. But if you think about it as a co-character, then no. So exactly why I was, oh, this is already trashed. All right, so somehow, I mean, if you don't understand Coulomb branches, you're not going to magically understand them from me saying this. The important point here is that the weights that you can get in a Coulomb branch essentially correspond to elements of the torus mod the vile group. And if you want to understand the representations where that maximal ideal appears, there's a sort of simple topological way to do this. And again, since I'm running low on time, I leave as an exercise to the audience to think through, for example, what thinking about perversives on n lambda tells you about the representations of this algebra. There's a very natural interpretation of this, familiar to anyone who's read Ginsburg and Chris's book. 
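For reference, here is a LaTeX rendering of the data and theorem stated over the last few paragraphs, in notation reconstructed from the transcript (in particular, the base of the fiber product is taken to be N_lambda, which is how I read the transcribed "m lambda"):
\[ \operatorname{MaxSpec} R \;\cong\; \{\,\text{semisimple } (x,c) \in \mathfrak g \oplus \mathbb C\,\}\big/\mathrm{conjugation}, \qquad \text{specialize } c = 1, \ \ \lambda = (x,1), \]
\[ N_{\lambda} := \{ n \in N : x \text{ acts with integral weights}\}, \quad N_{\lambda}^{-} \subset N_{\lambda} \ \text{the non-positive-weight part}, \quad G_{\lambda},\, P_{\lambda} \subset G \ \text{defined the same way in the adjoint representation}, \]
\[ Y_{\lambda} := G_{\lambda} \times_{P_{\lambda}} N_{\lambda}^{-}, \qquad \widehat A_{\lambda} \;\cong\; H^{BM,\,G_{\lambda}}_{*}\bigl( Y_{\lambda} \times_{N_{\lambda}} Y_{\lambda} \bigr)^{\wedge}, \]
\[ \operatorname{Hom}(\lambda,\mu) \;\cong\; H^{BM,\,G_{\lambda}}_{*}\bigl( Y_{\lambda} \times_{N_{\lambda}} Y_{\mu} \bigr)^{\wedge} \ \text{ if } \lambda \sim_{\widehat W} \mu, \qquad \operatorname{Hom}(\lambda,\mu) = 0 \ \text{ otherwise}, \]
where the hat denotes completion with respect to the grading and \widehat W is the affine Weyl group. The quotient just mentioned is then the map from \widehat A_{\lambda} to the ordinary (non-equivariant) Borel–Moore homology of the same fiber product, through which every simple module factors.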
All right, so a very important special case will be interesting to people who just like representation theory and have never heard of Coulomb branches before, which is u of gln itself. So in terms of these pictures for quiver gauge theories, what this means is we take 1, 2, 3 in circles and then n as a square. So that means my g is gl1 times up through gln minus 1. I am stopping at n minus 1. I don't mean to go to n. The gln is flavor. And my representation n is 1 by 2 matrices times 2 by 3 matrices dot, dot, dot up through m by n minus 1 by n matrices, where this group acts in the usual way by pre-n post composition. All right, so the Coulomb branch here, and again, exactly the answer depends a little bit on your conventions about flavor. So I believe I'm following what Joel did by fixing the flavor to be scalars. So this is going to be u of gln mod a maximal ideal of the center. So there's sort of a way to do this where you incorporate the equivariant cohomology of gln as well, and that becomes the center of C. Sorry, the center of u of gln. Let me, since I'm talking about individual weights, each individual weight, the center acts through some character, so I might as well fix that already. And if you think through what's written on this board, let me not quite erase it yet. The space is y lambda. I get out of this. So I can think of a point in here as a quiver representation for this quiver. And I'm almost getting the quiver flag variety, but I'm not acting by gln on here, so I get something a little bit funny. So if I had included the gln there, I could just say, oh, by paper of Eric and Michaela's, exactly what I'm getting is the type an kLR algebra. But that's not quite right. So what's the a lambda I get, or actually, let me be a little more careful about this. So there exist integral weights such that when I sum together, so this collection x, this is some set of integral weights. When I sum over pairs in here, I take this bimodule a lambda mu. I get. Yeah, oh, sorry, yes. So this is almost the type a kLR algebra. So what's different? Well, because I have this square here, my strands with label n aren't allowed to move. So let me just tell you the type an kLR algebra, I'm definitely running out of time, that we want here. And this is with one strand label one, two strands label two, etc. So this is an algebra where the elements are string diagrams like this with dots on them and labels from this set of nodes. Maybe I have a one here and a three here and a seven here and a two here, but with property that strands with label n don't cross. So in terms of quiver flag varieties, somehow putting this box here means I should just choose the standard flag on this guy and only look at flags of this quiver representation that agree with the standard flag on this guy. And because I can never change that flag, here I have a strand label n, here I have a strand with label n, they're not allowed to cross. Other guys are allowed to cross them, but they can't cross each other. And to emphasize the difference here, I'm going to draw these strands with label n as red. It's exactly that on the nose. So the interesting question is, do you want dots on here? And the answer is it depends whether you keep the center or not. All right. So these things, they're all not just for the quiver and quiver. Yes, Joel is pointing out that this is an interesting special case of a much more general theorem by Joel Peter Tingley, myself, Alex Weeks, and Oded Yacoby. This works for all quivers. 
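As an aside, a hedged LaTeX transcription of the quiver gauge theory data for U(gl_n) just written on the board, with the flavour convention the speaker says he is following:
\[ G \;=\; GL_1 \times GL_2 \times \cdots \times GL_{n-1}, \qquad N \;=\; \bigoplus_{i=1}^{n-1} \operatorname{Hom}\bigl(\mathbb C^{\,i}, \mathbb C^{\,i+1}\bigr) \quad (GL_n \ \text{acting as flavour}), \]
\[ \mathcal A(G,N) \;\cong\; U(\mathfrak{gl}_n) \big/ \bigl(\text{maximal ideal of the center}\bigr), \]
and, as I read the board description, the sum of the bimodules \widehat A_{\lambda\mu} over a suitable finite set of integral weights is a completed type-A KLR-style diagram algebra with one strand of label 1, two strands of label 2, and so on, in which the strands labelled n (drawn red) are not allowed to cross one another.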
Another exercise to the audience is to, well, OK, if you want, I can change it to my name. This works for all, but I certainly should say symmetric. So a very interesting question is, what happens when you mash together this talk and Herakus and try to figure out, well, how does this apply to these slightly generalized versions of Coulomb branches? And what one might hope is that you would get KLR algebras for non-simply lace type. And I will tell you that that is surely a vain hope. And what you will actually get is KLR algebras of this bigger diagram that you got your interesting Dink and diagram by folding up, but not folding in the dual way, which I think I've now decided should be called furling. So for a quiver gauge theory, yes, you always get some version of a KLR algebra, suitably generalized. And essentially, these guys corresponding to tensor products, but without the sort of cyclotomic quotient relation. All right, running very low on time. So I think I'm going to skip the actual correspondence here. So the way these match up is that when you have a weight, you diagonalize it. You look at the weights that occur along the diagonal. You put those on the real number line, and then you label them with which node they came from. So I'll just say that. That's the correspondence you guys can work it out. One interesting corollary simple Galvan-Settlin modules for U of GLN are more generally for these quiver guys, which includes orthogonal Galvan-Settlin algebras, et cetera, correspond to a dual canonical basis. And the obvious question is dual canonical basis in which space? And let me just say, in this case, you take the negative part of U of SLN. So the negative Unipotent Radical take its universal enveloping algebra, and then you tensor with N copies of the standard representation and look at a particular weight space in there. And for the other orthogonal Galvan-Settlin algebras, you'll get the other weight spaces in here, et cetera. And the generalization to other quivers involves, Joel talked about this representation. He was using lambda for a highest weight here. So here you tensor together the fundamental representations corresponding to lambda. So the answer is no. No, I mean, I can explain to you what's going on. But somehow the point is that, yeah, so there is a sort of spherical thing that happens, which is, yes, for, so which algebra you get depends on this choice of central character. And somehow the central character is telling you where these red lines are. And this doesn't actually happen for U of GLN because all the regular blocks are the same. But for some other quiver, you know, other representations where you have kind of more general highest weight, yes, there are some situations where essentially these red lines get close enough together that there are sort of some item potents that don't occur. And in this quiver case, we call these parity KLR algebras. But in general, yeah, it can sort of happen that you can't, you get sort of a KLR algebra, but maybe you can't get all the item potents. There's some sort of obstruction. Yes. Oh, well, I mean, if you have the parabolic ones, you get something Merida equivalent, right? So I can choose, yes, there's some, some work, I'm running out of time. So we can discuss this later. Believe me, I have thought about this issue. I'm not completely screwing this up. I'm just being a little, you know, skimming over a few details. Is that what you're asking about? I don't, wait. 
But by spherical, like, you're thinking that there should be some item potent. But you mean as opposed to the complete flag variety. Yes, but that's not the issue. We can discuss it later. So there is something you have to be a little bit careful about here. But I mean, for most weights of even the spherical guy, you get this. Really that, so right, the issue you're digging into here is I had a parabolic P lambda and I'll only get the KLR algebra. It's kind of on the nose if that parabolic is a borrel. And of course, usually it is, right? Like most, most co-characters have a borrel attached to them, not a more complicated parabolic. Really that more, somehow that more complicated parabolic isn't the issue. All right, because of course if you replace it with a borrel, you get a merida equivalent algebra. So like, obviously that's not the issue. All right, but I'm totally running out of time, so let's not get totally sidetracked with this. Let me just say another important example is if you take G to be GLN and N to be the adjoint representation plus L copies of CN, then you will get the rational Terednik algebra of GL1N. And there you should be concerned about spherical things. Again it doesn't make that big of a difference, but let me sort of, right, let me just add spherical here. And under this isomorphism, where does the equivariant co-amology of GLN for a point go? It goes to the algebra of symmetric polynomials in dunkel-optam operators. So now a Gelfand-Setlin module for the spherical rational Terednik algebra is one where these dunkel-optams act locally, finitely. You could call that a dunkel-optam module if you wanted to. And it's a known theorem due in part to many people in this room that the simple module in category O over these rational Terednik algebras correspond to dual canonical basis vectors for a twisted fox base. And you can actually extend this theorem to dunkel-optams. So simple dunkel-optam modules are going to correspond to a dual canonical basis in the twisted fox base tensored with u minus of GLN hat. And since I'm low on time, let me try not try to explain anything about this, but let me just say these algebras you get a lambda or more generally the sum come from what are called weighted KLRs. Yeah, twisted fox base. I mean, this is your conjecture, so you better know what it means. Yes. So the correct statement depends on what your maximal ideal looks like, but it's always whatever sort of you want to fix a maximal ideal and look at its orbit under the affine file group. And whatever you got in category O, you take that and you tensor it with u minus of GLN hat. So you always get sort of the same amount extra. It's always kind of category O plus this guy that's fixed and doesn't depend on the parameters other than kappa. Right, I should say like GLE hat where E is the denominator of kappa. And in particular, if E is not rational, it's GL infinity. Or do, and I should say that if you do GLPN, this seems to be very closely related but a tiny bit different. But this would be weighted KLR of the same GLP. Yes. Yes. I'll just mention that this sort of seems to fit into the same bucket, but it's a little bit more complicated. It definitely has this kind of correct form. It's a rational Galois order. But this is something I'm currently working on with a grad student at PI. All right, so, okay. Oh, damn. Maybe I should just stop. That's too bad. Well, if you wanted to know about duality, you shouldn't have asked so many questions. 
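Another hedged LaTeX note on the example just mentioned; I am reading the transcribed "GL1N" and "GLPN" as the complex reflection groups G(l,1,n) and G(l,p,n), consistent with the abstract of this talk:
\[ G = GL_n, \qquad N = \mathfrak{gl}_n \oplus (\mathbb C^{\,n})^{\oplus \ell} \quad \Longrightarrow \quad \mathcal A(G,N) \;\cong\; \text{spherical rational Cherednik algebra of } G(\ell,1,n), \]
\[ H^{*}_{GL_n}(\mathrm{pt}) \;\longmapsto\; \text{symmetric polynomials in the Dunkl--Opdam operators}, \]
so a Gelfand–Tsetlin module here is exactly a module on which the Dunkl–Opdam operators act locally finitely.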
So this all explains why symplectic duality works, but I guess I'll leave it as an exercise to you to figure out why that's true. All right, so I'll just stop now and if you have more questions, feel free to bother me. Thank you again and have a great evening.
|
The algebra U(gln) contains a famous and beautiful commutative subalgebra, called the Gelfand-Tsetlin subalgebra. One problem which has attracted great attention over the recent decades is to classify the simple modules on which this subalgebra acts locally finitely (the Gelfand-Tsetlin modules). In investigating this question, Futorny and Ovsienko expanded attention to a generalization of these algebras, saddled with the unfortunate name of "principal Galois orders". I'll explain how all interesting known examples of these (and some unknown ones, such as the rational Cherednik algebras of G(l,p,n)!) are the Coulomb branches of N=4 3D gauge theories, and how this perspective allows us to classify the simple Gelfand-Tsetlin modules for U(gln) and Cherednik algebras and explain the Koszul duality between Higgs and Coulomb categories O.
|
10.5446/53468 (DOI)
|
So last time, I told you how to construct Lefschetz elements iteratively using this chronicle lemma. Okay, so last time we had this chronicle lemma, which helped us — essentially what it gave us is that the bias pairing property, so the non-degeneracy of the Poincaré pairing at ideals, in codimension one, implied the existence of Lefschetz elements. And then, well okay, to close the loop, we have to discuss how in general this bias pairing property is implied, maybe, by Lefschetz in codimension one again. And this closes the loop. And some of the first half will be spent on explaining that. Essentially what we discussed last lecture was how we get the bias pairing property for ideals of the form i sigma delta, where sigma is a sphere and delta a codimension-one sphere. And I want to essentially now go to general delta in a few small steps. And then I want to argue how these things really fit together — sum it up. And this is somehow to give you at least a feeling for how the proof of the Lefschetz theorem works using this inductive principle. And then towards the end, so in the second half and tomorrow, I will go over another proof using transcendental theory, which does not really rely on an inductive principle, but really just exploits some nice residue formulas. So let's go over this case of these ideals i sigma delta again; last time we discussed the case where delta was a codimension-one sphere. Today I want to verify this bias pairing property in the case where delta is general, and I will argue that it's enough to consider the case of a pair i sigma E, which — just to remind you — is the kernel of the map from A of sigma to A of E. And we will look at this in the case where sigma is a (2k−1)-dimensional sphere over our field k, and E is a codimension-one manifold in sigma, such that E is (k−1)-acyclic over k, meaning that the homology of E vanishes up to dimension k−1 — and here homology is with k coefficients. This is the case that I want to look at, and let me just remind you what I am trying to do. Well, I'm trying to show that if I look at the Poincaré pairing in sigma — the perfect pairing into degree 2k — and I restrict it to this ideal i sigma delta, then this pairing should still be non-degenerate. So we want this to be non-degenerate. Well, you only have it in the middle, right? Yeah. Wait a second, so let me get to an example again. If k is 2, then I'm looking at a three-dimensional sphere, the manifold is of dimension 2, and I'm saying — ah, okay, you're right, up to dimension k minus 2. Let's say it like that. Yeah, yeah. Yes, thank you. Yes. Otherwise, it's just a sphere again, or a manifold. Is this connected, or is its H_0 zero? So H_0 — it doesn't have to be connected, but I mean the case where we have a one-dimensional sphere is kind of trivial anyway. So the case that we're interested in is really going from k equals 2 upwards, so three-dimensional spheres and higher. So we don't have to assume connected, but with these assumptions, it will automatically be connected once we hit the interesting cases. But yeah, let's ignore the case of the one-sphere for now. All right. So we want to show that this is non-degenerate. Okay.
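To pin down the objects under discussion, here is a minimal LaTeX version of the setup, in notation introduced for this writeup (A(.) for the graded algebra attached to a complex, as in the earlier lectures, and \mathbf k for the field, to avoid a clash with the degree index k):
\[ I_{\Sigma,\Delta} \;:=\; \ker\bigl( A(\Sigma) \longrightarrow A(\Delta) \bigr), \qquad \Sigma \ \text{a } (2k-1)\text{-dimensional sphere}, \ \ \Delta \subset \Sigma, \]
\[ \text{Poincar\'e pairing:} \quad A^{k}(\Sigma) \times A^{k}(\Sigma) \longrightarrow A^{2k}(\Sigma) \cong \mathbf k \quad (\text{perfect}), \]
\[ \text{bias pairing property for } I_{\Sigma,\Delta}: \quad \text{the restriction of this pairing to } I^{k}_{\Sigma,\Delta} \times I^{k}_{\Sigma,\Delta} \ \text{is non-degenerate}. \]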
And now let me, let me first discuss how the criterion to show that, that this pairing is, that this pairing doesn't degenerate. And once again, what we're kind of, what we can look at is the case where, well, we can look at i sigma e, but now let us observe again that, right, so now it kind of, max seems to be a mark becomes a little important. Okay. So this is, so it's a middle pairing, the middle pair. Yeah, yeah. So this is to a2k sigma, which is just k. So e separates sigma into two components, into two components. Right? So I have, I have a component m and I have a component m bar. And again, I have that i, that this, these two components, they span my ideal. And they are obviously orthogonal on each other because if I have a monomial supported here and a monomial supported here, then they multiply to zero because, well, they line different components, so they don't form a face with each other. All right? So they're sound, obviously, orthogonal on each other. So if I want to prove this, I really can restrict to proving it for, somehow, we can restrict to showing non-degeneracy, non-degeneracy on each other. Okay? And here's the theorem. So i sigma bar, i sigma m bar. Satisfies the bias pairing property, if and only if. Okay, so now what I do is I look at ak of e. Now ak of e, this is, okay, so this is just the quotient corresponding to my hyper surface. Now if you remember in this, when we discussed the partition complex, we remarked that there is a map of the cosmology into ak of e. So specifically, we had a map from hk minus 1 of e with k coefficients to the d choose k to this ring. All right? So this came from the discussion of the partition complex in the context of Pankar duality. And now, well, this here is, okay, so this e is a sub-mortem of m. So I can write a map from hk of m, sorry, from hk minus 1 of m with k coefficients to, I can write down a map from hk minus 1 of m to hk minus 1 of e. And again, I carry with me this tensor, this tensor product coming from the, coming from the, from the costual, coming from the costual complex. And now, okay, so now I have a composition of these two maps, and I want this composition here to be an isomorphism. If and only if, all right, let me call this map here star, and I can say star is an isomorphism. All right? All right. The proof for this fact, it turns out to be just a little diagram chasing in the end. So what I do is, okay, so I write down a short exact sequence like this. So let me write down, okay, let me, let me start with zero up here and then try to make the arrows short enough so that I don't waste too much space. So I have, I have ik me, which is defined for me as the kernel of a of ak of m to a of e, ak of e. All right? And this is a subcomplex, so this is a suggestion, this is a good nice. And then into ak of m, well, again, I have, I have my partition complex. So I have the map from hk minus one of m with k coefficients. Let me just not write down the k coefficients to the d choose k. All right? And what is the core kernel? Well, I mean, let me not describe it for now for the moment. Let me just call it b of m, bk of m for the moment. And now notice that our mystery map, right, this mystery map also appears again here. All right? So it remains to, to, to, to understand this map. So let's, let's understand this a little. So this here I can think of as mapping, so this is, this is included into i sigma m bar, which is isomorphic to a sigma m bar, which we discussed last time. 
And then, okay, so what is b of m, bk of m, let us discuss this. So b of, b of m, this is a of m, modulo. Well, now I look at the image of the direct sum over vertices in m, a star of the vertex in m. All right? So this was our partitioning map. In particular, this is just the kernel under the partitioning map. All right? So, sorry, this is just the, the image under the partitioning map. In particular, this is the same as looking at a of sigma m bar, modding out the annihilator of i sigma m bar. But as we discussed last time, having an injection from here to here, right, this injection, right, this is an injection from an ideal to the, the pocularity algebra to the strain modulo, the annihilator of the ideal. So this is, the injection here is equivalent to, to the bias pairing property, bias pairing property. And in fact, it is an injection if and only if it is an isomorphism because what these spaces are, punk-aridules. So this is an isomorphism if you know only if it is an injection. So now let's, let's look at this. So this, this map here is an isomorphism. Okay, so if and only, okay, so this map here is an injection if and only if this map here is. But then these two, these two spaces are of the same dimension. So if this map is also a surjection, this here will be an isomorphism. So these two spaces will be the same. In particular, I get this here is an isomorphism if and only if, this here is an isomorphism if and only if this is an isomorphism, which is exactly what we wanted, right. So isomorphism here is exactly the isomorphism here, which is exactly the isomorphism here. It's really just diagram chasing. So diagram chasing gives us a result, gives the isomorphism. All right. And that's it. So it's really just a little diagram chasing. All right. Okay, so now, okay, so now I want to, okay, so now I went to, I discussed hypersurfaces. And now I want to discuss general complexes, i delta, i. So how do we prove, how to prove that i sigma delta satisfies the bias pairing property in general? So how do I do that? So I'm again in this case, okay, so I'm looking at a simple complex, right, a simple sphere and a subcomplex in it. So we are considering, so sigma is of dimension 2k minus 1 and delta subcomplex. And really I can assume that delta is of dimension, of dimension k minus 1. Why? Because I'm caring about this ideal, which is, all right, I'm caring about this ideal in degree k. So I'm really only caring about, I'm really only caring about the monomials up to degree k, which means I'm only caring about the faces of the simplisher complex up to the current monality k, which means that I'm only caring about the simplisher complex up to dimension k minus 1. So the idea now is construct hypersurface, hypersurface e containing, containing delta, such that a of sigma, oh sorry, a of e is isomorphic to a of delta. And in particular, that i sigma e is isomorphic to i sigma delta. And we will cheat a little, we will not achieve this in sigma itself, but we will achieve this in a subdivision of sigma, but we will see that this doesn't matter, that we do that in a subdivision. Okay, so let me, let me explain that and I will mostly restrict to the case where sigma is a PL sphere just because I want to get to the transcendentality argument. So I will sketch the argument and this construction and why this is enough. And then I will kind of, this will be the, then I will summarize the, the, the, the, what, what we did and summarize the proof and then I will go to, to the transcendental proof. 
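Here is a hedged LaTeX summary of the hypersurface criterion just established and of the reduction about to be carried out; the multiplicity \binom{d}{k} is my reading of the transcribed "to the d choose k" coming from the Koszul-complex/partition-complex discussion, and whether reduced or unreduced (co)homology is meant is not pinned down in the transcript:
\[ I^{k}_{\Sigma,E} \;=\; I^{k}_{\Sigma,\overline M} \,\oplus\, I^{k}_{\Sigma,M} \quad (\text{orthogonal; } E \ \text{separates } \Sigma \ \text{into } M \ \text{and } \overline M), \]
\[ I_{\Sigma,\overline M} \ \text{has the bias pairing property} \iff H^{k-1}(M;\mathbf k)^{\oplus \binom{d}{k}} \longrightarrow H^{k-1}(E;\mathbf k)^{\oplus \binom{d}{k}} \longrightarrow A^{k}(E) \ \ \text{is an isomorphism}, \]
\[ \text{goal for general } \Delta: \ \text{find a hypersurface } E \supset \Delta \ \text{(inside a subdivision } \widetilde\Sigma \text{) with } A^{k}(E) \cong A^{k}(\Delta), \ \text{hence } I^{k}_{\widetilde\Sigma,E} \cong I^{k}_{\widetilde\Sigma,\Delta}. \]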
Ah, okay, so. Are you going to construct e and u, such that e is a cycle? Yes, yes, I will, okay, so I will construct e such that it is a cyclic. So this here will be k minus 2 a cyclic. That's, thank you. I will, I will, yes, thank you. So, okay, let, let me, let me go over this construction. So first of all, let me, let me note that the decomposition theorem implies something nice about some invariance property of this, of this, of the bias pairing property. Of, of the invariance property of the bias pairing property. There are too many properties. So an invariance of the bias pairing property under subdivisions. So, and for simplicity we will mostly assume, assume that sigma is actually a PL sphere. Because, well the construction is a little simpler in that case. Okay. So first observation, subdivisions of sigma outside that do not affect, do not affect, do not affect delta preserve the bias pairing property. This, so subdivisions here, for, for us we will take stellar subdivisions. But it's essentially every, what you can take is essentially every simple, a simple map that preserves the fundamental class. I will give an example in a second. So for us, for us here stellar subdivisions, for us stellar subdivisions survives. Because I restricted to the PL sphere case and I will just indicate what you, what other marvelous things you can do if you want more general. And let me make a picture for, to illustrate this. So let's say we have our sum potential complex delta. Alright, so this is delta and it sits inside our sphere sigma. This is delta and it sits inside sigma. And then, now perhaps for some, for some gods forsaken reason we don't like sigma so much. It looks, for some reason it looks ugly somewhere. And we want to refine it. So let's say here there's a, here there's a triangle somewhere and we want to really look at the blow up. And so we want to take stellar subdivision here. So we have this and we want to replace this triangle by its stellar subdivision. Alright. In any case, whenever you have a subdivision of simple complexes, you can induce a map. So from, from the complex before the subdivision to the sphere after the subdivision. And this here will be an injection. It's kind of, it depends on which point to take, which point of view you take. But I mean if you think about it on the level of converse polynomial functions, it is obvious, right? Because a converse polynomial function before the atinian, before the subdivision will be converse polynomial after. Alright. So now you just have more, alright. So you, you just have more space but somehow you just don't have break points at this, at the points of the subdivision. That's fine. Okay. And then if you think about it, what you can show is that, well, that the subdivision here is really, right? The image under the, the image under, under this, under this map, right? It includes the converse polynomials before and to those after. So this is just the pullback map plus a component that really comes from the subdivision. So this is really the image of the G-syn. So let me just call it G, not G, G. And this here is orthogonal under, this decomposition is orthogonal under the Poincare repairing. Alright. And now let's think about it. Let's look at, look at i sigma delta and its subdivision. So i sigma prime delta and i sigma delta. So i sigma delta really consists of all the monomials that, so this here, all monomials are not in delta. What does this mean? Well, these are all the monomials outside of delta. 
And now if I, okay, so this here, all the monomials in sigma prime outside of delta. So really, I have the same decomposition here. So why does this, why does this mean, okay, so again, so I have this decomposition, this is orthogonal. Why does this mean that the bias pairing property, so the non-degeneracy of the pairing is true before, before if and only if it is true after? Well, it's kind of clear, right? Some of this, the splitting here is orthogonal. So I really only have to, so here I have it, okay, so if I have it before, then I have it here. This bearing, this here is the image of the G-syn on which I have Panker's raduality anyway, right, because I have it, right, so otherwise I would already fail here. So I have Panker's raduality here, in particular I have it here. The other direction, right, if it fails here, it must fail on one of these components. And that's it. There's nothing fancy about it. So this subdivision really preserves the pairing. Now let me know that really there was my, so now I did stellar subdivisions. I could have been, I could have been more fancy. I could have said this triangle is really subdivided into like, into something more fancy like attaching a torus. So this looks like a, I attach like a little minion now to this, but I mean, so really I could have, I could have been more fancy and more general, but let's somehow, so for the purposes of PL spheres, I really don't have to go into any fancy kind of subdivisions. But somehow, this is really somehow, this is a more general principle with, whenever you can define this map on the, on the polar, convex polynomials, right, whenever you have a nice, simple map, then you can define the map on the convex polynomials. Then you have to make sure that this map preserves the fundamental class, so the Pochettipa pairing is preserved, and then you get this decomposition. All good and nice. So now, the next step is something that is kind of very nice and old in the ultropology is, is for claw fact that if I have delta, a k minus 1 dimensional complex, and delta lives inside sigma, a 2k minus 1 sphere, then delta embeds into the boundary of its regular neighborhood and regular neighborhood of, of delta. All right, so example, if I have a graph in the three-sphere, and then I take the neighborhood of the graph, then I can just by, just by basic general position, I can move this graph into the boundary of the regular neighborhood. In particular, in particular, there exists a refinement, refinement sigma tilde of sigma such that, well, okay, so I should say refinement sigma tilde of sigma not affecting, not affecting delta such that delta lies in partial of the boundary, somehow delta is a sub complex of the boundary of the regular neighborhood, which is again a sub complex of sigma tilde. All right, okay, so how does this help me? Maybe I should move this up because that's better. So now, okay, so now I have delta sub complex of some closed hyper surface, right, delta boundary n. So this is what we have arrived at now. Now we are in, I mean, careful now we are in, in, in sigma tilde, but as you, as, as we observed there, the Poincare, but that we can look at i sigma tilde instead of i sigma tilde delta instead of i sigma delta, so we are fine. 
Yes, that is because, okay, so what you can do is take the regular neighborhood, all right, let me, let me try to, I mean, I cannot draw an example, I'm a very two dimensional drawer, so I cannot really draw it well in dimension t, but so here's the, the gist of it, right, so here's your regular neighborhood, all right. And now push your delta into general position, all right, push it into general position. Now in dimensions, if you do this in dimension three and not two, what you will have will not intersect the original delta, so now you push it into general position, maybe like this, all right. So the issue is here now, of course, yes, yes, yes, you do just do a radial projection to the boundary, that is, yes, all right, all right, all right, where was I, oh yes, okay, so we have delta in the boundary of n, okay, so, so we have, let me draw delta and hyper surface, so, so we have delta and then it lives inside this hyper surface here, all right. The issue is, of course, so what do I want, what I wanted that to be, A of E, so the quotient of the, the phase ring, the quotient of the phase ring of sigma corresponding to E was actually isomorphic to A of delta, at least I need this in degree k, all right, and again I should write only in white, but in general at this stage I only have a surjection, all right, I have a larger complex, so I can only say that I have a surjection from this larger manifold to delta. So what do I do, well, if this is larger, then there is some monomial outside of, then there is some other, then there is an element of, then there is an element of, of, of A boundary n in degree k that lies in the kernel of this restriction map, in particular there is a monomial, right, some of this, this is generated by some monomials and there is a monomial in this, there is a monomial supported outside of delta that generates this element in the kernel, all right, so there is a monomial somewhere, perhaps here, that generates an element in the kernel. 
So what do I do, well, what I can do is I can, I can just remove this phase and its neighborhood, all right, I remove it and I leave it a little whole here, okay, and I can do this and repeat this several times until I have a surface with holes such that this is an isomorphism, so there are still elements in the kernel, in kernel by introducing holes, okay, now so far so good, now the issue is, okay, so it is not so hard to see that we can make this, this surface sufficiently connected, right, if there were two different components, all right, so if, if boundary n had two different components, then what, just because maybe our graph had two different components, right, or delta had two different components, then what we can do is we can just attach a handle inside, inside sigma and make them connected, so the connectivity is not an, it is not an issue and we, us drilling holes does not affect the connectivity but of course now we have a surface with boundary, all right, so E is now not closed, problem, E is not closed and so now what we have, so we have an E that is sufficiently connected, that in degree k precisely encodes delta but it is not closed but now it is really simple, there is a simple trick that we can use to actually produce a closed hyper surface that does a trick for us, that somehow, that encodes, that encodes the bias pairing property nicely, so let me, let me, let me use this backbot to finish this, at least that part, all right, so we have E, k minus 1 is cyclic, but it has bound, it is now ak of E is isomorphic to ak of delta but E has boundary and now basically what we are doing is just we double E, so we look at, we look at E solution, so solution, well take E, double it, compactify and consider the resulting ideal, so let me explain what I mean, so, okay I will explain what I mean, what I mean by compactify, so let us look at E on its own, all right, so E is my surface of the boundary, all right, and then, okay so as Pierre already says, okay I can just double and obtain, all right, I take E, another copy and identify them at the boundary, the canonical way, all right, so I have another sheet on the bottom, like this, so there is no compactification here, you are all right, but E lives somewhere, all right, so I should have said, this construction that we did is obviously, it is obviously orientable the way that we did it, so orientable, so now, okay but, so I have, how do I deal with sigma outside of, right, so this is here outside, I have sigma tilde, all right, so really there are some faces here intersecting, so faces of sigma outside of E, and now I have an orientation and I have an upper part and a lower part, I have simplices intersecting my E from the top and simplices intersecting my E from the bottom, all right, and when I, okay so now I want to see this in a new, so now I want to look at a compactification of sigma tilde without my surface E as amount, all right, so I want to compactify this such that the boundary compactify, such that the boundary of the compactification is exactly the double of E, all right, and really what I do is I really just encode what faces intersect from the top, what from the bottom, and that is it, right, I get a new superficial complex, so I get, all right, so this is my sigma, sigma hat, and now I am done, now I have a closed type of surface in a new sphere, so sigma in a sphere sigma hat, and what remains, and this is somehow, okay this is a diagram chasing I will skip is to show that I sigma tilde E satisfies the bias 
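A short LaTeX schematic of the doubling construction just described, with notation introduced here (DE for the double of E along its boundary, \widehat\Sigma for the new sphere produced by the compactification step):
\[ E \ \text{with boundary}, \ \ A^{k}(E) \cong A^{k}(\Delta) \ \ \leadsto \ \ DE \;:=\; E \cup_{\partial E} E \quad (\text{a closed, orientable hypersurface in } \widehat\Sigma), \]
\[ I_{\widetilde\Sigma,E} \ \text{satisfies the bias pairing property} \iff I^{k}_{\widehat\Sigma,\,DE} \ \text{does}, \]
which reduces the general case to the criterion for closed hypersurfaces above.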
pairing property if and only if I k sigma hat double of E does, all right, and now I can go back to this theorem that I did for closed upper surfaces, and apply it here, and I am done, all right, okay, so let me summarize, there is one more caveat that I have to go over, and then after the summary, if I identify this to copies I get back my, yeah, but I mean, so, yeah, so to fill it in here, fill the resulting antibody in, this will be our homologous here again, all right, all right, all right. So, for the summary, so what do we have? So, we have the conic alema. And what did the conic alema give us specifically? It told us we can construct left sheds elements provided we can prove the non-degeneracy of pairing, non-pairing not, let me say, let me be in signal, we can show the non-degeneracy of pairing. What pairing did we look at? I mean, not the pairing. What ideal states we look at? We looked at the kernels of these generic linear combinations of Xv, but this is not the model. It's not so important for the moment what kernels we looked at. We looked at the kernel of the already constructed map, our candidate for the left sheds map that came close in link, in A of link of the vertex W in signal. All right, remember, there was this, okay, so we wanted to apply this basic representation theory of the conic alema. So we noticed that we have to show that the kernel of the old map mapped under this model, the perturbation, the map that we want to add, the divisor that we want to add, and we want to say that this does not intersect the image, and then we observed that this is equivalent to just saying, okay, so the pairing on either one of them does not degenerate, and then we were done. But so this we have to show. So this here, right, so notice that this is in co-dimension one, so we really gained something here in terms of induction. And now, then the second ingredient was bias pairing property, so some non-degeneracy of the pairing at ideals, okay, so at i sigma delta, well, is implied by left-shed properties at hypersurfaces for hypersurfaces. All right, so this was exactly we identified, we looked at A of E, all right, so why was this a left-shed property? So what we looked at here, all right, was we looked at A of E, and this was, okay, so this was a co-dimension one manifold inside sigma. 
Sigma was of dimension 2k minus one, this E is of dimension 2k minus two, so the co-dimension of sigma of A of K of sigma is one higher than the co-dimension of K of E, so we took out this additional element, and we wanted the kernel under this element to be prescribed, all right, so now we wanted to have the kernel and the co-currency under this element under this multiplication to be prescribed, which is a left-shed property, all right, so this here, somehow, this was K of E, all right, we took out somehow a linear system of parameters, theta for coming from A, coming from sigma, but this was really somehow, it was one too long, so we think of this as really of K of E modding out linear system of parameters that was one shorter, this here is the attenuary reduction, and then we said something about how this last element, L, it's nice that last and left-sheds have the same letter, that this last element, and then we said something about how would this last element act, all right, and this last element acting, this is really something saying about middle isomorphism, saying something about the left-sheds, all right, so this is a left-sheds property, all right, so the non-degeneracy of the pairing at ideals of this is applied by the left-sheds properties for hyper surfaces, and so we see that we actually, we gain in dimension in both steps, all right, so seemingly somehow we are in a very nice position, the only issue here, this is why somehow I said there's a caveat, and we have to want to explain a little how to get around that caveat, is that these ideals here, and these ideals, okay, so let me just say it like this, so these ideals here are not in general of this form, all right, so these are not in general monomial ideals, so let me explain briefly how to get around that issue, and then after the break, then we will do a break, and then we will go to the transcendental theory proof, these ideals here are not monomial in general, not monomial in general, or orthogonal complement to monomial, all right, so the issue is not, so if I just looked at this here, this would be the orthogonal complement to a monomial ideal, that would be fine, but I'm pulling it back to the link of a vertex, which I tell, I'm intersecting them to ideals, that is not so easy to say that this is again monomial, so I have to say something, so intersection of monomial ideals in general is not monomial, so how do I deal with this? Let me make some space somewhere, let me make some space here, all right, so maybe it's not so clear that this is a vector, I mean that this is not in general monomial, it's in general just the orthogonal complement of the monomial ideal, but I could just as well look at the orthogonal complement, which would be the image of this generic linear combination, and this would be clearly a monomial ideal, because the image, if I already satisfy the transverse of prime property, is just the span of the images of the individual maps, but this is just generated by the monomials of the individual, V and the index at W, okay. So how do I remedy this? Well, let me say that this here is the orthogonal complement, so let me remedy this, orthogonal complement to a monomial ideal, because if the union of the vertices star V and sigma, where I go over V in my index at W, and I take the union of the vertices again, the sigma with the star of the new vertex star W and sigma, if these two objects here are submanifolds, okay. 
If this is the case, then these are nice ideas that I can deal with in a nice algebraic way. So how do I then, well, let me give you a situation where this doesn't happen, and then let me describe how I get around it. Yeah, it's really just the neighborhood. So I take the union of the stars, right? So I take all the faces that intersect my vertex set, and then I take the simpler closure, right? The issue is that tomorrow, this, right, I could have some picture that looks like this, and then maybe another vertex here, and this could have some nasty singularities here. So this here could be, let me draw it nicer. So this here could be another one, another vertex, and my index set could look like this. And the intersection here is somewhat degenerate. It's not a manifold, so I'm not in a good business, okay. So that's a good position. It's not a manifold. So how do I get around that? So let me sketch off how this is circumvented. So yeah, sketch of proof. So, conventing the caveat. So here's the trick. So let's say we want to prove, we want left sheds for, let's say, for explicitness, let me say, for sigma 2k sphere, 2k dimensional sphere. And let's say it doesn't have an order on the vertices that satisfies this property. So let's say I want to show that most spheres that you will be looking at in daily life, they will not have an exhaustion of the vertices in some order such that at every intermediate step you have a manifold. So let's say this is one of these nasty spheres where you don't have an order that is nice. So let's say we just fix an arbitrary order. Order on vertices. And now the trick is the following. So we look at sigma and this is a sub-manifold. It's a co-dimensional once here. Is this sort of a logic to shellability in geometry? Yes, yes, yes, exactly. So it's a weaker property than shellability here. But still you can find many examples where you cannot get it. That's right. So this is a weak form of shellability. Weaker than shellability, but still much too strong. But what we do now, so sigma we can think of as a co-dimensional one sphere in a bigger sphere 2k, in a bigger sphere 2 sigma bar. So this is then a 2k plus one sphere. And then we saw that observe left sheds for sigma. This is what we discussed on Monday. This is equivalent to saying that the ideal sigma satisfies the bias-peri property. And now this is equivalent. Well, now I can again reduce to the k-skeleton. So this is in degree k plus 1. So this is equivalent to saying that sigma bar and they take the k-skeleton of sigma satisfies the BPP, the bias-peri property. All right. So now let me call this an empty composition because everything intermediate is a manifold. Maybe m is not so nice, but an empty composition. So there's an order on the vertices such that every intermediate step you are a manifold. So now what I do is I find a refinement, sigma hat of sigma bar such that there exists e, a hypersurface with boundary such that, well, what do I want? So such that, well, a k plus 1 of e is isomorphic to a k plus 1 of this k-skeleton of sigma. All right. In particular, the hidden motive here is that the ideals are the same. Sigma hat e isomorphic to i sigma hat e sigma. All right. That's the motivation, the hidden one in degree k plus 1. So I find this and such that e has an empty composition. So it turns out that in this code I mentioned, you can actually construct e in such a way. You can construct many hypersurfaces with the first property. 
And it turns out that basically by doing some surgery and twisting this hypersurface a little, you can ensure that this decomposition is actually a nice empty composition that you get. So now, OK, so now what do we have? Then the bias pairing property for i sigma hat e in degree k plus 1 is equivalent to a left-shed property for e. All right. This is what we observed. But now this e is nicely decomposable, and we can actually imply the induction. So now the e is nicely decomposable. But to construct the left-shed elements here, well, we apply the Konek Alema, but now the kernels here, they are nice. In e, they are nice. They are nice and orthogonal complements to monomials. So let me, I have this somewhat bad habit of if there is not enough space, and I will just squeeze it, so let me not do that. But in e, the kernel of this direct sum of an initial segment v initial in the order under the stars of vertices in sigma pulled back to a link of w in e is monomial. And so we can complete the induction now, because now we have a monomial ideal, right, and we have gained the dimension, right. So e is of the same dimension at the starting sigma. So the dimension of e is the same as the dimension of the sigma that we started with when we wanted to prove the left-sheds. But now we actually gain a dimension because we are looking at the link here, right. So now we have the dimension, this is the dimension of the link of the vertex in e plus one, but now we gain a dimension and therefore we can complete the induction. And that is the moment, that is the overview of this argument. All right, and this is somehow where I want to end with this argument, then after a 10-minute break I will go to the transcendental theory argument. All right, so for the final section, so what we will do is we will actually, we will talk about slightly more general objects and we will give a new proof based on some very nice residue formulas. So we have the cycles and transcendentality, transcendentality, not in kind of a new H-way, but in the sense of transcendental extensions. We have what is our object? We consider mu as a simplicial cycle, k cycle, which for me is a pair of a simplicial complex mu, so this is a simplicial complex mu in these brackets, simplicial complex, which I will call the underlying complex, support or underlying complex. Maybe let me specify the dimension here, simplicity of dimension d-1, of dimension d-1. I have mu, the underlying complex and this is also of dimension d-1, and then mu is an element in the homology dimension d-1 with k coefficients of this underlying set, of this complex and this is what I call for me a simplicial cycle. Now let me consider a of the underlying complex, this is always in a hidden way, there is also always a linear system of parameters here. This is just the phase ring again, so this is really just again k of polynomial ring modulo i of mu. Now what I will think of is, I already, when we discussed this last time, I explained that there is a canonical isomorphism between hd-1, this underlying complex and ad of the simplicial complex. So this is a canonical isomorphism here and what can I say then, well now what I can consider is a dual to some mu, this is a dual to mu in the homology. What I can in particular do is I can think of mu as a quotient of ad. This is just a top degree, it's just I'm pairing in the front of me. Now, it makes no sense. We have kind of like fundamental chain, but you don't have canonical homo to class. 
Maybe you can say that you have canonical chain. Well, I still have a pairing, right? I can still just... Maybe you're right, because you have not an homo to class, but canonical chain, which interprets as a quotient chain. So I can write and then what I can do is I can consider B of mu and this will be the class of a mu such that B of mu in degree D is exactly this class in degree D, right? So I can just kill everything, yeah? You said you wrote just like the order ad of mu, maps to... It's homology, it compares with this homology class it can make to your field, but not to M check, yeah. So I just consider this as a one-dimensional... It makes no sense. What is mu check? The mu is... You arrange your top-demy... Okay, so... So mu check is... Okay, so now I have a pairing of degree D to my field, right? 2k to my ground field, yeah. Get a functional for your things too, yeah? Okay, so now this pairing will give me a one-dimensional quotient of this top degree, right? Of ad? Yeah, one-dimensional quotient of homology, yeah. It's not a class in homology. Yes, I get a one-dimensional quotient, yes. You said that maybe you want to write mu check as a dual to mu in HDM as well, it's a quotient. Yes, yes, yes, that's what I mean. So it's a quotient of HD minus one, therefore I think of it as a quotient of ad. No, because literally you get an element here... Ah, okay, so yeah, okay. Okay, so quotient, okay. So it's quotient, yeah, okay, thank you. HDM as one, yeah. All right? All right, and what I get here is essentially the quotient of A of bracket mu with the fundamental class, right? Now with the prescribed fundamental class induced by mu, all right? So, and again, this still depends, right? So B of mu still in a hidden way depends on theta, right? So this is still theta inside, so I should write, right? There's still theta encoded inside, right? And this is a Panker-Raid-Wallet algebra. Panker-Raid-Wallet algebra. Duality algebra. The fundamental class in degree D. Okay, so now I can ask again, okay, so I can ask again, does B mu have the left-shut property? Again, the theorem, maybe I should not start on the bottom of this to write the theorem. Yes, sir, you put the third-year-old theorem in a cycle. So you have something, you have a manifold, mu at least inside the manifold. It doesn't have to be in a manifold, it's just a simple complex. Yes. And then you have a homology class in the simple complex. There's no manifold here anymore, okay? And so the theorem is as follows. This is going with Thomas Papadakis and Valisiri Kepetrotou. And for this, I will not just say a generic element, so I will be a little more specific. So mu, a D minus one cycle over K arbitrary. This is an arbitrary field. And now I take a field extension, and what is the field extension I take? Well, I have the matrix theta and its entries, and then I have the element L, the left-shut element L and its entries. And I see them all as a algebraically independent number. So each of the entries is a variable, and I think of my new field, I give a field extension. Each of the coordinates is an independent, an algebraically independent variable. Okay, so I have all my entries here. And then basically I take the field extension and take the field of rational numbers with respect to all those variables, and this here will be K tilde. Okay, so K tilde, rational field of rational functions of rational functions over K, and I extend by the individual variables theta and L. 
And now I mean, I mean, I mean, I mean, I mean, immediately a generic element. Then B of mu satisfies the left-shut property. Alright, so IE, so let me B mu K to B mu to the D minus K L to the D minus 2K is an isomorphism. It satisfies the whole amount of relations. Alright, so the Hodgman-Bailinia form QKL does not degenerate at monomial ideals. And then let me give one additional property that is very nice, but we only know it in characteristic two. So if the characteristic of the ground field is two, then we have something even nicer and even more beautiful. This is that Q, the Hodgman-Bailinia form, never degenerates. So QKL of U, U is not equal to zero for all U in B mu K. Alright, and here I should say K should be less or equal to D half. So that's kind of the strongest form of not degenerating anywhere. This is somehow no ideal. I degenerate at no ideal. That's kind of surprising. That's what happens here. So now we are over this field extension. Maybe I should say, right? This is now over this field extension, K tilde. Okay, so why is this surprising? Go to the classical Hodgman relations. You have the signature plus one and minus one. So definitely there will be one point where it will just be zero, the pairing with itself, just by intermediate value theorem. So I will show that tomorrow, I mean, if I pass to the algebraic closure here, right, if I take K tilde, but take the algebraic closure, then definitely there will be points where this is zero. So this is really something that only works in this. You don't have any trust. I'm not talking about trust or any idea. I'm just saying that you're going to accept the whole thing. Okay. But here you impose a very strong problem that the coordinates of the functions are algebraic. Yes. You can achieve this in the confidence of the project. Yeah. I mean, you can also achieve this. I mean, okay, so let me open question. What happens in other characteristics, right? But the point is this will never be true if you pass to the algebraic closure, right? So you take the extension of the complex numbers by these transcendental variables, then it might be true. But if you take the algebraic closure of K tilde, it will not be true. That's the point. Okay. So let me give like the first few... Okay, let me try to give you the first few indications of what we will need. I mean, we will end in like 10 minutes. So I think I will have to repeat anyway. So maybe let me give you some corollaries and then give you... Let me give you an idea of what we will do. So corollary... Well, what are nice cycles, right? So if sigma is a sphere, right, then you take just the fundamental class as a cycle. Then B of... B of... or A of sigma is just B of the fundamental class. Similarly, if M is a manifold, orientable closed manifold, then B of the fundamental class is really just... What is A of M? Model law for the kernel of the partitioning map, right? Kernel of M to the direct sum over the vertices in M, A star of the vertex in M, right? It's just the kernel of the partitioning map. But somehow, what else is encoded by these? Well, for instance, pseudo-manifolds. If you have a pseudo-manifold, if P is a pseudo-manifold, and it's orientable, then B of mu satisfies left sheds. B of fundamental class satisfies left sheds. So these are nice examples, but these are... One has to be a little bit careful. So this B of mu here, you could look at its Betty numbers, right? 
You could look at the dimension of the K graded component and try to extract some combinatorial meaning. So for instance, for manifold sigma, or for manifolds or spheres, and you can also show that for pseudo-manifolds, the dimensions of the graded components are, in these cases, the dimensions of the graded components here, of the Bi, is independent of the linear system of parameters. So they have a combinatorial meaning, even though in this case, we don't have a closed formula. No closed formula. So in the case of cycles, in general, one has to be careful in the sense that there is no combinatorial meaning immediately. So it's really somehow a rather interesting property to have left sheds for cycles. And perhaps if I'm done with the sketch early tomorrow, then I will give some applications of this. But really, there seems to be no algebraic geometry analog. So this is kind of left sheds, but really there seems to be no good algebraic geometry interpretation of this left sheds. Because it should depend on the coefficients on the cycle, yeah. Yeah, it depends on the coefficients. It's kind of a constructable function. Yes, yes. It's a left sheds theorem that is very far from any kind of geometric interpretation. All right, so what will be the idea, what will we work with? Well, we will basically look at residues of the degree function and work with them. And so the main trick is, well, first let's define the degree function. So the degree function is just the map. It's just a way of identifying B of mu with k, B mu on degree d with k. So how do I define this degree function? Well, here's a way to write it down explicitly for facets. So define this, or kind of what if you want, just normalize this by defining it on a facet. Define this by, OK, so I take a facet for f facet of the underlying complex. So this is, again, a d minus 1 dimensional face, d minus 1 dimensional simplex. And then we can evaluate the degree of the corresponding monomial, xf, in the following way. I consider the vertices ordered, and then I look at my cycle, right? And I look at the ordered, the oriented coefficient of my simplex cycle. So remember, mu, this was an element in h, d minus 1 of the underlying complex. And then I divide this by the determinant of theta, restricted to the minor corresponding to f. OK, this is the determinant of this. Remember, theta was this matrix, right? I think of theta as a matrix with some entries, and then face f cuts out a minor. And I compute this determinant. And I can prove, actually, that there is some more. I will actually, yeah, let's not do this today. So we will tomorrow prove that somehow we will see that this is consistent. Maybe I will give you just intuition why this is consistent. Maybe we will look at this function, and in particular, not for faces f, but we will look at faces. We will try to evaluate this at monomials of the form x tau squared, where tau is a face of cardinality. Cardinality d half, or at most d half, inside of cardinality d half inside my complex. And then we will see that this function really has very nice poles, and then we will compute some, yeah, we will do some basic analysis with this, prove some nice identity, and then this anisotropy, this total anisotropy, the anisotropy everywhere will just fall off, and then we have to modify a little to do this in general characteristic, and we will prove just the Holler-Mann relations there. All right, and I think somehow it doesn't make sense to start with the proof now. 
So let's finish here, and then, because I would have to anyway, repeat a lot tomorrow. Okay, thank you. Thank you.
|
Lefschetz, Hodge and combinatorics: an account of a fruitful cross-pollination Almost 40 years ago, Stanley noticed that some of the deep theorems of algebraic geometry have powerful combinatorial applications. Among other things, he used the hard Lefschetz theorem to rederive Dynkin's theorem, and to characterize face numbers of simplicial polytopes. Since then, several more deep combinatorial and geometric problems were discovered to be related to theorems surrounding the Lefschetz theorem. One first constructs a ring metaphorically modelling the combinatorial problem at hand, often modelled on constructions for toric varieties, and then tries to derive the combinatorial result using deep results in algebraic geometry. For instance - a Lefschetz property for implies that a simplicial complex PL-embedded in R^{4} cannot have more triangles than four times the number of it's edges (Kalai/A.), - a Hodge-Riemann type property implies the log-concavity of the coefficients of the chromatic polynomial. (Huh), - a decomposition type property implies the positivity of the Kazhdan-Lusztig polynomial (Elias-Williamson), At this point one can then hope that indeed, algebraic geometry provides the answer, which is often only the case in very special cases, when there is a sufficiently nice variety behind the metaphor. It is at this point that purely combinatorial techniques can be attempted to prove the desired. This is the modern approach to the problem, and I will discuss the two main approaches used in this area: Firstly, an idea of Peter McMullen, based on local modifications of the ring and control of the signature of the intersection form. Second, an approach based on a theorem of Hall, using the observation that spaces of low-rank linear maps are of special form.
|
10.5446/53465 (DOI)
|
Okay, so let's start. So I want to quickly finish with matroids, and then after matroids, I want to go to the left shads beyond positivity. So at the end of the last lecture, I stated that for matroids, we have the Haldriman relations and Haldra-Schadthier. And let me, I want to just quickly, we stated here, because we left off there last time, and then explain its applications and give an indication of its proof. And so we had M, a matroid. And then we had B of M, the associated fan. This is a Buckman fan. Fan of the matroid. Matroid. And then we looked at A, B of M, and they're satisfied. Well, first of all, from territoriality. Well, we didn't talk about reg of matroid, so let me just say the fundamental class in degree corresponding to the longest chain. And then we had left have for all L, OK, so here's a way to state it, so we had, we take A1 of Bm, ample. And here what I think of as ample is the restriction of the ample cone of the free matroid on the same number of vertices, on the same number of atoms, which was a complete fan, right? So this was just a Boolean lattice. I had every possible subset of the index set, so I had a complete fan. There I know what ample means. I can just restrict this. Then I have for these, I have hard left sheds and trimmons. And the application of this is usually to questions in combinatorics concerning matriots. So let me state the simplest one of them, and this is an application to what is called the characteristic polynomial of a matroid, chi of m. So chi of m is defined, well, you can define it in several ways. Let me just define it recursively. So what I can do is I can delete an element from the matroid, and I can contract one. Without going to the details of the contraction, think of it as the matroid is an abstraction of the concept of vector configurations, and then the deletion of an element is clear, and the contraction is really just a projection along one of these elements. So I have chi m.e. And then I only have to, I mean, then I just have to. You can, so you think of that like vectors and you project them to the vector space mod e. Yeah, exactly. Do you omit the zero vectors? Yeah, you omit the vector that you contracted. You don't, OK, so you omit the vector that you contracted. You allow for loops. You allow for zero vectors. I will explain now what effect they have. They basically have zero effect. Here's the reason. The definition last time from matroid, apparently, you didn't have zero vector. OK, so the zero vectors, they always, they have no effect on this characteristic polynomial. So here's the reason. So now you norm this, right? So you want to say something about the simplest possible graphs, the ones on one edge, and then you say, OK, so I have this, and it should be zero. The effect should be zero. And then I should have this, and it just should be, I hope I do the signs right, lambda minus 1. If you have a graph, this is essentially, I mean, if you come from a graph, then this is essentially the chromatic polynomial. You multiply with lambda again, the whole thing, but that's essentially the chromatic polynomial. And then you can look at questions for the coefficients of this polynomial. Where is the lambda intervening in the? Lambda, is it right or? Maybe I should. Maybe, OK, maybe I should lambda. So the characteristic polynomial is a polynomial in lambda. And this, your relation observe the fine-situ inductively starting from what? Starting from these two. But those are math only, so. 
Ah, OK, so I told you a special case of words are those coming from graphs. So if I have a graph, I can take the independence matrix of it. I can take the matrix of vertices and edges, and I orient the edges in some way. And then I make a vector for every edge. For every edge, I make a vector. Let me have some edge here. I make a 1 for this and a minus 1 if these two vertices are corresponding to. And this gives me a vector configuration. And that gives me a matroid. A vector configuration gives me a matroid. If you're given the field of a ratio of? Yes, yes. Let's look over the reals, yes. I think of it just as it's the matroid on one vector. There's the 0 vector, and there's the unique vector that is independent. That's it. That is what I'm saying here. These are the two matroids that you have on one element. So if I have the vector in this one, non-zero vector, which is the second case, we should get them the minus 1. But in the quality, you can start to delete. But you don't allow to do it when you have only one vector or this relation. If you apply it for m consisting of one vector. Let's not allow it when m consists of one vector. Let's not go to the empty set. I mean, you can also define it directly via a recursion formula without the recursion formula. But let's not go there. OK, so let's forget this entirely. Think of matroid just as graphs, and then just think of chromatic polynomials. What word is a chromatic polynomial? Did you define it? Yes, yes. I think we did it maybe the first lecture. Chromatic polynomial is just the function in terms of lambda of how many proper vertex colorings are there of your graph with lambda colors. That's it. So if you have the graph just the second case, you want to cover to have different colors on the two vertices. No, you mean the coloring means that the coloring is of the vertices or the edges? It's a coloring of the vertices. I think you get lambda. Yes, yes. But there is a slight difference between the characteristic polynomial and the lambda. But it's a factor of lambda. So that doesn't matter for the question of dividing by lambda. Yeah, yeah. So now, I mean, you can think of it as you take one vertex and fix the coloring there. That would be one. So I think we can go back to the previous lecture. That would be one. Now you can ask questions about the coefficients of this polynomial. And it turns out that you can compute them algebraically. So question, what can we say about lambda? Sorry, what can we say about the characteristic polynomial about chi and lambda? And well, I mean, you could ask whether it's real rooted, for instance. That is not true. So the roots are dense and complex plane. You can, the next thing you can ask, well, you can look at the coefficients. And you observe that some of the absolute value of the coefficients, they're unimodal. So they rise onto some point and then they fall again. OK? So that was it, yeah? No, they are alternating. So the absolute values, OK. And then it turns out, if you think about it, something stronger seems to be true. And they're integers. They're integers, yes. You can, yeah. This you can see quite easily from this recursion formula. I mean, it's clear that the chromatic polynomial satisfies a recursion formula like that, right? I mean, if you want to color a graph, you want to color this graph. What, how many colorings can you have? Well, I mean, let's remove an edge. Then you have this here, right? 
So now you can ask, well, are all colorings of this graph and clearly not, because sometimes these two colors are the same. So you have to deduct the colorings of this graph, which is exactly the contraction here, right? So chi of this is equal to chi of this minus this. Do you see it, Gopher? It's fine? But you would have to check the inner dependencies in the, when you identify, ah, OK, it really looks OK, yes. But then what about this question of the generic case, because we said that the recursion doesn't seem to hold for the very bottom. If the argument is correct, then it's usual problem of doing thing by induction, very start from there. You have to know where the arguments are to be valid. So here I asked you about this. OK, so I mean, OK, so just this one, this just this graph of two, these two vertices. And then apparently the recursion does not hold, which suggests a very good problem. Maybe, and also, of course, you have to know that you divide by lambda. I mean, it's not to divert. Yeah, it's actually fine, right? It's actually fine. It's actually fine. You can actually contract once more. This here is this minus this. It still works. Well, I mean, OK, so what is the chromatic polynomial of this graph here? It's lambda times lambda minus 1. So here you have these two vertices are just independent. So you have lambda squared minus lambda. So here you can just color in any way you want. There's no relation between these two. Anyway, let's color graphs in the break. I think it's somehow. Yeah, I mean, so it's a convention that fix the coloring on one vertex. Fix one vertex arbitrarily and fix the color in there. That's right. I mean, and really you can also divide by lambda minus 1, because also it will always appear as a factor. And this is in the project device characteristic polynomial. OK, fine. You can ask coefficients here. But it turns out that the coefficients, a i squared, they seem to have the property that they. They have a standard without h in the chromatic polynomial along the power 10. So I think there is some issue with connectivity. So if you want to be metroid, I think there is. Yeah, OK. Metroid with 10 connected components you have to define it. Do you want me to? Yeah, you have to be careful with the connectivity. Yes. OK, then you have to divide by some power of lambda. OK, so it's not exactly what you have written. It's just to give you an intuition anyway. It's not my main point is proving these left-shed theorems. Not, OK, otherwise. Yeah. But you have to know how to formulate the formula for it. Because there are just small details if you confuse it. I mean, I understand. Still, it's nothing. And it turns out that you can access the coefficients of this polynomial by computing some intersection numbers. So. The a i have the absolute variance of the coefficients. What are the a i? Yes, so the a i's are just the coefficients. I mean, they're alternating. It's not hard to show that they're alternating. So the coefficients or the a i's are the coefficients of the polynomial chi. And it turns out that you can compute them as intersection numbers. So you can compute, well, actually the absolute value of these, as an intersection number of alpha to the i times beta to the d minus i, where d is the degree of the fundamental class, of the fundamental class. And alpha. OK, so let me give you a coordinate version. So alpha j, I will define as the sum over all the xf. Well, again, remember, f, these are our subsets. So f are subsets of the index set. 
And I take all f that contain my given element, j. And j is an element in my set of atoms. And then beta is analogously defined. Beta j is the sum over all those xf, where f is not in j. Or f does not contain j. It turns out it's not hard to show that alpha j, the class of alpha j in A of Bm, is independent of j. So all the alpha j and beta j are the same. And you can kind of now imagine why such a thing, why these intersection numbers measure something combinatorial. So if you think about it, if I multiply alpha 1 with alpha 2, then what do I get? Well, first of all, I start with all alpha 1 is just all of those f that contain a given element. And then I start multiplying with the next with alpha 2. And these are all of those that contain a given element 2. And now if you think about it, if I multiply them, I can only get those products where, well, OK, so let's see the combinatorics of the fan again. So what is xf? xf is not variable. This is the variable. Remember that in B of m, I have a ray for every subset of the set of atoms, E. So remember, somehow B of m, the rays of B of m are in correspondence with elements of the lattice of flats without the empty set and the total set. And now remember that somehow these rays, they spend a cone together if and only if the corresponding flats, they were related by inclusion relations. Now you see, if I multiply alpha 1 with alpha 2, then really I start with those elements that contain 1. And then the only non-trivial products can be coming from those elements that form a common element at L with 2, 1 tomorrow, between 1 and 2. So I start with 1, and then I multiply, and then I get 1. I have all those flats containing it. And then I restrict to those that also contain 2, and so on and so forth. And then I multiply through with the elements. And then I see that in some way I'm counting chains in my Metroid B of m. I'm counting chains of inclusion. So I remember that you defined this term the same as the last time. And then the xf are the same as the notation ef of last time? No. Well, ef is the ray. So now I'm thinking of xf as the characteristic function of the ray. OK? xf is an element of the ring. Ha, xf is in B of m. Yeah, A B of m. Ef is an element of B of m. That's the distinction. So what is xf? It's a characteristic function of this ray, e of f. All right? Think of this A was the ring of converse polynomial function. OK, so this is the one that you discussed before, that is on the ray, if they coordinate and on the other rays it is 0. And if they don't coordinate, there it is. This is an element in the fan. And this is an element in the ring, right? In this wing of converse polynomial, it's more or lower the ideal of global linear functions. OK, so then you wrote that something is independent. Yeah, so I defined these classes alpha and beta. All right? In the ring. Yeah, I basically, first I gave you a ring before I mod out by the global polynomials. I gave you an element alpha j. That depends on j. So now I take this element and I'm modding out the global linear functions. And then I'm claiming this element alpha j is independent of j. OK? OK? So all alpha j is equal to alpha k in B of m. That's what I'm saying. If you think about it, this is just because you mod out the global linear functions. The global linear functions make all of the basis vectors the same. Well, I have to think a little bit. But think of it again in terms of the fan. It looked somewhat like this. They sum to 0. So you have E0, E1, and E2. All right? 
Now let me take the linear function that is 1 and positive on this ray and minus 1 on this ray. All right? Then in particular, so now you see that, OK, so I look at all of those that contain, OK, maybe that was not the smartest version. You have to look at the corresponding your definitions, so you have to take the j containing j. They have containing j. Yeah. And so. Yeah, actually, that's correct. I take all the, all right, so notice that I'm telling you why the function, why alpha 0 minus alpha 2 is a global linear function. That's what I'm telling you here in the picture. OK? Therefore, they are equivalent in the quotient. That's what I'm saying. Right? Some other classes are this. What is there, what is there? I mean, you can always take the complete matrix. You can always take the free matrix for this, OK? Because all of these alpha j and beta j, I can think of them as coming from, as restrictions from the free matrix, all right? All of this is compatible with the restriction from the free matrix, OK? So I can always think of evaluating these relations inside the free matrix. And so then I just have all of the subsets of the index set that correspond to rays. Yeah? And now I'm telling you alpha 0 and alpha 2 are the same. All right? Why is that? Well, I'm claiming that alpha 0 minus alpha 2 is a linear function. All right? And if you think about it, it's exactly this function that is 1 on this ray, all right? It's 0 on this hyperplane that divides them. It's minus 1 on this ray. And then it extends linearly to this ray and this ray, all right, to all the other rays in these half spaces. That's the calculation that you make. That's a geometric image that you should have, OK? So the matrix is 0, 1, 2. Ah, OK. Yeah. OK, I see. And then OK, so it's not so hard to verify. Yeah, it's not hard to verify. And now you can, OK, so now the point is if you compute these intersection numbers, first of all, there's another claim that these alpha and beta are nf. OK, again, not so hard to see. So they are not strictly convex on your fan, but they are in the closure of the strictly convex ones. So they are convex, all right? That's it. Um, and finally, OK, what's the, um, what do we have? Finally, well, now observe, I will not, I will not go over the detailed calculation why this intersection number corresponds to the coefficients of the polynomial. I just want to convince you that computing a product like that has some combinatorial meaning. And if you think about it, let's just multiply of the alphas. OK, let's just take a power of the alphas. What happens if I take alpha 1, right, inside the lattice of flats, all right, my lattice of flats, it consists of 1, 2, right? So I have the atoms here. And then I have above this, I have flats corresponding to one-dimensional subspaces. And so what I do is if I take alpha 1, then basically I'm restricting to this ideal in the poset. If I'm then multiplying alpha 1 with alpha 2, well, then I'm restricting to the intersection of this poset with this poset. So I'm taking the ideal of 2 in 1, and so forth, and so forth. I could multiply with another one. And you see that this has a combinatorial meaning, this intersection number. And that is exactly what is happening there. And if you think about it, computing these chains in the end, this gives you the characteristic polynomial. 
And that's somehow why it is important, why it is important to have the odd dreamer relations here, because then again, so in the same way that we discussed the Alexander-Authentic relations the last time, following from the odd dreamer relations, now because these alpha and beta and f, we get the log-concarity of these numbers. So degree alpha to the i times beta to the i squared is larger equal to the degree of alpha to the i minus 1 beta to the d minus i plus 1 times degree alpha to the i plus 1. And I will not write the power of beta, because I'm not able to fix it. And that is it. So that's why it is important. So you get this equality using? Inequality. Inequality using the odd dreamer relations. So last time, or maybe it was the second lecture, I explained how you got the log-concarity in the Alexander-Authentic relations, Alexander-Authentic inequalities from the odd dreamer relations. So the idea was, well, I write down the odd dreamer in degree 1 for two ample classes. Let's say big A and big B for the subspace generated by these two ample classes. And then what do I have here? I have A squared. I have AB. I have AB. And I have B squared. And now what I do is, OK, now I compute. Well, OK, so now I want to understand what is the signature of this matrix. And the odd dreamer relations tell you there's one positive eigenvalue coming from degree 0, and all other ones are negative. So whatever this matrix is, it's definitely indefinite. That's how Eric used to say. And in particular, the determinant of this matrix cannot be positive. In particular, the product of these two degrees minus the product of these two degrees is not positive. That's just the formula for the determinant. And then the fact is, OK, so these classes are only Neff, but I can approximate Neff classes by ample classes. Therefore, whatever inequality I get from this for ample classes, I get also for Neff. And that's it. So the connection is going to be longer. And the coincidence is by going to re-use the determinant? Yeah, yeah, yeah. Yeah. So you have to divide by the. Yes, yes, yes. I'm cheating a little. Yes. OK. And, yeah. OK, so let me not spend too much time on the proof, but let me give you the idea. And, well, the idea is essentially we use McMullan's argument. So this iterative, all right, some of this, hiking the mountains argument for Hodgman and hard left sets. So we have this vertical, this some of this. The send part where we use, where we prove hard left sets for a matroid by using the Hodgman relations in code dimension one. And then, basically, to prove the Hodgman relations, we use this deformation argument. And if I am in the free matroid, all right, that is quite clear. So the free matroid, again, this is this matroid. But I can start just from projective space, all right. And then I can iteratively blow up. And these are my deformations. So I introduce first this one, then I blow up here, and I blow up here. And that's it, all right. And then I control, I know that the hard left sets is true. I control the signature in these blow ups, some of the analogs of my flips in the original proof. And that's the idea. If the matroid is more complicated, right, so if the matroid is perhaps of a lower dimension, then I have to argue a little that in some way, I can still define these blow ups. So for instance, I could look at the matroid M on ground set 0, 1, 2. And then the flats, I could be 0 and 1, 2. And then the fan would be just this one. So this is E0 and this is E1, 2, all right. 
And then I have to argue that in some way, there is a nice way to go from the skeleton of this projective space, right? So I don't take the matroid over the fan over the syplex itself, but the fan over the skeleton of a syplex, which again is algebraically just projective space. And I have to argue that I can define, so this is not a refinement now, so I don't have a nice pullback map of the combative linear functions or combative polynomials, but I can, at least not immediately. But the way that I can think about this is I extend it linearly to the free matroid and define my pullback there, and then I have a pullback map. So then the idea is to go from the skeleton of M0 and define a pullback map to M1 and so on until I am at the Berkman fan of my favorite matroid, and I have to trace the Hadriman relations through this deformation. And that's the idea. That is, again, it's, again, just the semi-classical proof of the Hadriman relations. And that is what we knew how to do in this case of positivity, where we have Hadriman relations, where we have ample classes. And what I wanted to do today, or I wanted to start with, is finally to go beyond that. So I told you last week that somehow the coolest version of that lechert theorem that we have is actually one where there are no more ample classes. And this is what we will go to now. And this we will do in detail for the rest of the Hadamar lectures. So let me erase. Let me make some space and say it for something. So now you're proving something which is not the same as. So this is. I will restate what I proved now, OK? OK, so now left sheds and hard left sheds without positivity or whatever you want to call it, ample cone projectiveness or convexity. So we really cannot use the Hadriman relations in any nice way. All right, and the theorem that I will focus on is the case of sigma, a triangulated sphere of dimension d minus 1. And now the first difference is I will allow any field. So k, any field. I will just impose its infinite field. And then I consider the ring, a sigma. And this is parametrized by this linear system of parametrized theta. In which the sphere of the homologous sphere is 4k. Again, yes, yes, yes. As I said last time, I want a triangulated sphere to be shorter and for I want to be k homologous here, OK? So k homologous here. Where homologous sphere is really in the weakest sense, so it's a homology manifold, meaning that the links of vertices are again a sphere, are again small complexes that have the homology of a sphere of the appropriate core dimension. Another way of saying this would be Gorinstein complexes. So Gorinstein's implicit complexes, all right? All right, think of Gorinstein. Gorinstein with fundamental class in degree d. And what I'm considering. So then for this generic attenu reduction, all right, and for and l in a1 sigma theta, again a generic element. Now, we have a hard left set 0. We have hard left sets, meaning, all right, so again, isomorphism from degree k to degree d minus k induced by the power of l to the d minus 2k. And then we have this replacement for the Hotrimann relations, which we call the Holler-Laman relations. And we will see later where they come from. It's kind of unfortunately that maybe I chose the name, unfortunately, because the acronyms are the same. But so what are the Holler-Laman relations? Well, this is that the Hotrimann-Baliniar form, they can still define qkl, all right? 
This is just sending a and b to the degree of a times b times l to the d minus 2k, that this quadratic form does not degenerate at any square-frame monomial ideal, does not degenerate at i square-frame monomial ideal. And this is what I will, that is kind of the innovation that goes beyond the classical techniques for the proofs of hard left sets. So it's really an entirely new approach. And so I will first follow the 2018 proof. And then towards the end, I will give a second proof. I will sketch the second proof that is joined with Stavros-Papela, Kiss and Vasa Petrolu. And both rely on something that is on the non-degeneracy of this pairing at many, many subspaces. So that is the theorem. And this is kind of what will occupy us for the rest of the lectures. And the story starts with a rather simple lemma that comes from, well, it's essentially basically the algebra lemma. So let's focus on the first non-trivial isomorphism. So let's say sigma is of dimension d minus 1 equal to 2k. And we want to prove the middle isomorphism. The first one, that is non-trivial. So from ak of sigma to ak plus 1 of sigma. And we want the mystery map L here. We want the isomorphism. So how would we attack this? Well, I mean, at first it seems rather hopeless. I mean, how do I even describe a generic element? What I could try to say, well, OK, so maybe what happens if I just take the variable corresponding to a single ray, a single vertex? I don't really understand it that well. But what I understand immediately is the kernel under the multiplication of the image. So the kernel under the multiplication with xv, from here to here, well, this is, so let me be explicit, from ak to ak plus 1. Well, what is the multiplication with xv? Well, it's just a pullback to the star of the vertex. And then the kernel is just a sigma relative to the star of the vertex of sigma. And the image of xv, well, this is just, OK, so this is in degree k, of course. And the image, similarly, is just, well, it's the pullback to the star, right? I pull back to the star of the vertex, and then I multiply. So I have ak star of the vertex in sigma. But then I multiply this with xv. So I have at least somehow, I know what kernel and image are. So now what would be the next thing? Well, I mean, so the next thing would be I take another map, maybe the variable corresponding to another vertex. And now what I would do is, well, I could try to, well, I could multiply with this map, but again, probably has a kernel. And the image is also rather small. So how do I get back to this? Well, I could try to say, I'd look at the generic linear combination of xw and xv. All right? And look at, well, OK, so what would be the ideal way of things behaving here? Well, the ideal thing would be that the kernel of these two here is the intersection of the kernels. All right? That has come out of my hope. I want to create an isomorphism in the end. So I want the kernel of the generic linear combination to be as small as possible. OK? And similarly, for the image of the generic linear combination, all right, the best thing I could hope for is that the span of the individual images. That's my ideal hope. And again, so here this plus in quotation marks means generic linear combination. Generic linear combination. So how do I describe a generic linear combination? This is where a very basic and simple lemma by Konecker comes in. So this is Konecker, a few of whom have. Yes, yes, yes, yes, Maxime, you're spoilering. Yeah, yeah, yeah, yeah, yeah. OK. 
Do you want to say it's the lemma in Konecker-Kuever? So lemma goes essentially back to Konecker. So I have x and y two vector spaces over K. And A and B are linear maps from x to y. And then my conclusion, my first conclusion, should be I want, let me say the conclusion, let me write it a little on the right. So I want that the kernel of the generic linear combination of A and B is the intersection of the kernels, right? Kernel of A intersection kernel of B. So how do I ensure that? Well, here's a nice and sufficient condition. It is that I take the kernel of A and I map it under B. And then I intersect it. So now I'm sitting in Y. And I intersect it with the image of A. And I want this intersection to be trivial. Then this is true. So this is the first point. And then I can write down the dual. So I can look at B to the minus 1 of the image of A. And I can look at what it spans together with the kernel of A. And if this is x, then the image of the generic linear combination is the combination of the image. So yeah, so let's go back to Kronika. A very simple and very beautiful lemma. You can do all kinds of very beautiful stuff with it. But it turns out that for a miraculous reason, it works even better if you have a nice intersection ring or a nice ring like that to work with. Because there is a small but beautiful miracle happening if we consider this. So now we want to prove this. So what do we do? Let's say A is the previous map. And B is a new map. In this case, previous map is actually the new map. Or somehow, it's not the new map, but it's somehow the new component, some of the perturbing component, or whatever you want to call it, which is, in this case, xw. Then what I want to measure. So I want to look at x, let's say I wanted to prove, let me just arbitrarily decide for proving one of these. So let's say we're trying to prove this. So then what we're doing, so we want to look at xw times the kernel of xv. And I want to intersect it with the image of xv. I want this intersection to be 0. So first observation is, I mean, if they intersect, they must intersect in the pullback, in this ideal of xw. So I can just intersect once more with the image of xw. So that's the same thing. Let me actually, not to be confusing, let me write this separately. So this is the same as xw current xv. Intersection, and now in a bracket, image xv. Intersected image xw. Next observation is? Just for classification of intercouples of the pistachio of one square query. What was I? Ah. So now, OK. Just a little bit, because each of them contains already contrainted by the image. Well, I mean, there is not a trivial ingredient here that we will see. All right? So there must be. Otherwise, I wouldn't have to restrict to a generic linear system of parameters. Maybe I should also give the example where a non-generic is not enough. OK, I will do this in the next section. I think it fits better, though. OK. So now, notice that the kernel of xv and the image. So this is the sub-space of ak sigma and the image of xv. This is the sub-space of ak plus 1 of sigma. All right? So they form exactly orthogonal complements. So what do you do, so all the spaces are dual, in some way? Yes. I can say this is considered one of the very special situations. Yeah, yeah. That's a special thing about this situation. We have dual spaces. That's the beauty of this. So we have orthogonal complements. All right? Yes, everything is tautological. Everything is trivial here, but small. Let me still say it. So now, OK. 
So I have orthogonal complements. But now, I restrict it to the ideal of xw. I restrict it to the ideal of xw. So I have xw. So let me write it like this. So I have xw times kernel of xv. And I have the image of xv as a sub-space of the ideal. I intersect it with the ideal of xw. And these are now in the ideal of xw in ak of sigma. All right? This here is isomorphic to xw times a of the star of the vertex in degree k. Now, OK. So now, OK. So both of them are in the star of this vertex. Now, this here is a sphere of co-dimension 1. So this here is isomorphic to ak of the link of the vertex in sigma. All right? OK. And now, I can look at xw kernel of xv and the image of xv. All right? But they are orthogonal complements. OK? So xw kernel of xv. OK, so and image xv are orthogonal complements in ak of link of the vertex w in sigma. But now, OK. So now, let's go back to the criterion that I wanted to verify, actually. Conveniently, it is here. Right? The intersection should be 0. I have orthogonal complements. When is the intersection of orthogonal complements 0? All right? Intersection of orthogonal complements in ak of link w sigma is 0 if and only if, right? The Poincare pairing ak link times ak link to the reals does not degenerate. Degenerate on either of them. No, very big, small field k. Ah, thank you. Yes. On either of them. Of them. All right? So this Konecker lemma, this basic presentation theory of the Konecker quiver plays, for some miraculous reason, beautifully with spaces that are dual to each other. It's kind of a, I mean, it's a miracle. It's somehow, it's some, I mean, they couldn't go better with each other. It's like a white one in fissure. So now you see that, now you suddenly see why constructing a left shed element, right? Because constructing an isomorphism is related to non-degeneracy of the pairing at subspaces. That's it. So that is a property that you want to prove. Right? So in general, what you want to prove inductively, so. So how do you know that, so you have to have the orthogonal complement kxv and image xv in different degrees k and k plus one. Yes. Then it goes to the pairing that goes to the degree 2k plus no. Yeah, this goes to my, OK, so this goes to the, right, Toma? This goes to the pairing. This is a pairing in a sigma. All right? So I have the pairing times with 2k plus 1. I took a 2k dimensional sphere. All right? And now I pass to the link. I pulled back to the link. I multiplied with xw, so I pulled back to the link of the vertex w, where the pairing is now of degree k times degree k to degree 2k, because now I'm a 2k minus 1 dimensional sphere. But how do you know that those things are exactly orthogonal when you take, ah, OK, you go somehow to the link. Yeah. If you think about it, it's just a pullback of this. So you hold the relation between the links. OK, let me finish off what I wanted to say, and then we can discuss over the break. So what do you want to prove inductively then? Well, you want to prove the Fourier property. So you want to prove the transversal prime property that for all subsets w of the vertex set of your sphere sigma. And this is specifically that if I take the generic linear combination of the xv, where v goes over the element in my index set, maybe I should. Yeah, let me say it was v. And then the kernel of this here should be the intersection of the kernels of the elements, actually. And similarly, what do you want? 
Dualy, well, dual, I mean, this is just really equivalent because these spaces are dual, is that the image of the generic linear combination of the xv is exactly the span of the images. v goes over the element w, v and w. All right, and what we want to do is we want to prove this inductively. The transversal what property? Transversal primes, somehow. Because I mean, I just take the torus in very prime devices and I want to say that they're transversal in some way. I mean, it's just a name that I don't know whether it's a good name, but it's a name I chose, so deal with it. OK, and we want to prove this inductively. Prove this inductively. Prove inductively by adding vertices one by one. By adding vertices one by one. Adding vertices one by one. OK, that's a goal. And now this is how I get this middle left-shed isomorphism. This gives us the idea to construct this middle left-shed isomorphism. And I mean, what I have to explain, what is more complicated now is first of all, OK, so I will argue that I can always reduce to proving the middle left-shed isomorphism. So that is a critical one. But the next thing that is more important, I have to argue that I can actually close the induction and prove, well, this non-degeneracy of the pairing in some way by induction, and it turns out that this will be proven by using a left-shed property in positive co-dimension. So we have to prove this iteratively. We have to exploit this non-degeneracy of the pairing at the convent, the image. Or actually, if you do it at one of them, and this is what is more complicated, what is left to explain. But now, I think maybe 10-minute break. What do you want? Yes, OK. All right. So let me just say clearly why the transversal prime property. Why once we have proven the transversal prime property, at least we have established the middle left-shed. Well, so transversal prime property for w equal to the vertex set of sigma. All right. Then I have the generic linear combination of the xv, where v goes over all vertices. And sigma has a kernel which is the intersection of all the kernels of pullback maps. All right. So v in 0. But by prankariduality, this intersection must be 0. So if you've proven the transversal property for the entire vertex set, this must be 0. Therefore, the kernel of this generic linear combination will be 0. Therefore, if we prove the transversal property, we are done. That is it. So let me go to, well, now actually what we want to do is we want to prove this non-degeneracy of the pairing at a subspace. All right. And this is what I call bias pairing theory. So let me go over this theory of understanding the prankarid pairing at subspaces a little. Maybe before I do that, I should explain why the genericity of this theta is necessary. So this was something. Should theta be generic? Theta be generic. So let me construct a sphere sigma together with a linear system of parameters theta that is where the genericity of theta is necessary. And the trick is to start with something, to start with a very simple sphere and theta that is not a good linear system of parameters. So let me start with the following sphere. By the way, you have a genericity. So you have theta and then linear l. But is it genericity for the pair? It's genericity for the pair. So both theta and l, or theta comma l has to be generic. In the sense of the sum of the risk to the forward. Yes. So why should theta be generic? Why? And I want to say that there is a bad theta, or that there are spheres with bad theta. 
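Before the example, here is the transversal prime property just formulated, gathered in one display; the example of a bad linear system of parameters follows below, and the notation V(Sigma) for the vertex set is mine. For every subset W of V(Sigma) and a generic choice of coefficients,
\[
\ker\Big(\sum_{v\in W}\lambda_v\,x_v\Big)\;=\;\bigcap_{v\in W}\ker(x_v)
\qquad\text{and}\qquad
\operatorname{im}\Big(\sum_{v\in W}\lambda_v\,x_v\Big)\;=\;\sum_{v\in W}\operatorname{im}(x_v)
\]
as maps from A^k(Sigma) to A^{k+1}(Sigma). For W = V(Sigma) the left-hand kernel is the kernel of multiplication by a generic degree-one element, and the intersection of all the kernels is 0 by Poincaré duality, so the transversal prime property for the full vertex set gives exactly the middle Lefschetz isomorphism on the 2k-dimensional sphere Sigma.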
So I start with a following sphere. So sigma, the boundary of the simplex on four vertices, 0, 1, 2, 3. And I take the boundary of that. So this is geometrically just a tetrahedron. OK. And now what can I do with this? Well, I want to take the following linear system of parameters. And this is just given by the following matrix. And let me just be naive. So 0, 1, 2, 3. So I have some generic entries here. I have some generic entries here. So how many do I need? The linear system of parameters should be of length 3. So I should add more generic entries here. But let me just add 0 here. So this is my linear system of parameters, schematically. Some generic vector, some generic vector, 0. Of course, that's not a linear system of parameters. This is not a linear system of parameters for this sphere. Because the last linear form is 0. So let me not cheat and make it into a linear system of parameters. Well, what I do is I take a stellar subdivision at every facet. And this introduces. What do you mean a system of parameters in the sense of linearity of algebra? Yes, in the sense of commutative algebra. So it's three linear forms, one of which is trivial. And if I take the quotient of them, the cold dimension of this object is not 0. So k of sigma has cold dimension 3. OK. Now it's graded. And you want to look at linear forms which form a system of parameters in the sense of commutative algebra. And as with the quotient, it's finite dimension. But if you take the ones that you have written, which are what, which are not? It's some generic vector here. Some generic vector here. And then 0, the 0 vector. Generic vector where? I mean, it's just a generic element of k1. Another generic element of k1. And then just a 0 vector. It's not a linear system of parameters. It's quite correct. Oh, of course. Yeah. Yeah. So that's not a linear system of parameters. But let me make it into one by taking the stellar subdivision at every single one of these faces. The stellar subdivision at every single triangle. This introduces me four new vertices. Let me call them 0 prime for the vertex opposite to 0, 1 prime for the vertex opposite to 1, 2 prime for the vertex opposite to 2, and 3 prime for the vertex opposite to 3 in the tetrahedron. So if this is vertex 0, then 0 prime is this vertex here. Because it's resulted from the stellar subdivision, of the blow up of the triangle opposite to 0. And then what I do is I just take generic. I take this tetrah and extend it generically here, extend it generically here, and extend it generically here. All right. So now I have a tetrah on 0, 1, 2, 3, and 0 prime to 3 prime. OK. And it turns out that now I'm a linear system of parameters. If you remember, the condition for being a linear system of parameters was that if I take a minor corresponding to a face, then it has to be of full rank. So before I had the minus corresponding to these triangles before I sub-divided, and they were not of full rank. But now these faces, the original faces, 0, 1, 2, it doesn't exist anymore. I sub-divided. So the triangle doesn't exist anymore. The only faces that exist, they involve at least one. So this here must be the vertex 3 prime. So they must involve 0, 2, 3 prime. They'll always involve one of the vertices prime. And now, so 0, 2, 3 prime, this will be, if I choose this all generically in K1, will be of full rank. Will be of rank 3. So that is now a linear system of parameters in the sense of commutative algebra. 
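Schematically, the linear system of parameters constructed here can be recorded as a 3 x 8 matrix, rows indexed by the three linear forms and columns by the vertices; this layout is just one way of drawing what is described above, with * standing for generically chosen entries.
\[
\theta'\;=\;
\begin{array}{c|cccc|cccc}
 & 0 & 1 & 2 & 3 & 0' & 1' & 2' & 3' \\ \hline
\theta_1 & * & * & * & * & * & * & * & * \\
\theta_2 & * & * & * & * & * & * & * & * \\
\theta_3 & 0 & 0 & 0 & 0 & * & * & * & *
\end{array}
\]
Every facet of the subdivided sphere contains at least one primed vertex, so every 3 x 3 minor corresponding to a facet has full rank, and theta prime is a linear system of parameters even though its restriction to the original vertices 0, 1, 2, 3 has a zero row.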
So why now can claim, so this sub-division, let me call it sigma prime, with this new linear system of parameters theta prime. I claim, even though this quotient now is a finite dimensional vector space, so this is a linear system of parameters, that this can never satisfy the left-shed property. All right? And for this, let us look at the quotient. So let us look at A of sigma intersection sigma prime with respect to this linear system of parameters theta prime. All right? This is a quotient of A sigma prime. I'm sorry, so you say that this theta prime is generally for sigma prime. This theta prime, sorry, what? I mean, it's confused. This theta prime is generally for sigma prime. Yes. But you still say that it doesn't stay very fine. Well, I mean, it is, well, it is not what you mean. It's not a generic linear system, because it's still zero. It's still rather degenerate on some of the vertices. You mean the section sigma prime? Well, this is just, I mean, the, all right, I took all these, I took the original sigma, and I subdivided some faces. So what is intersection? Well, these are just the original vertices and the original edges that I have. All right? I'll resound the two seconds. I'll resound the two. Yeah. All right, this is just, well, if you want, this is just the graph, a complete graph on the vertices, 0, 1, 2, 3. OK. So now, OK, so let's compute this. What is a 2 of this and a 1 of this? Sigma intersection sigma prime. Well, I mean, essentially, this is just a of sigma with the original linear system of parameters theta that I had, restricted to degree 2, because I mean, degree 2, that is just what I have on the edges. D1 is just what I have on the vertices. And so I just have the original linear system of parameters theta here. So this here, this was my original theta, and now I extended it to theta prime. So what is it? Well, this is, OK, I start with the free polynomial ring on four variables, and then I mod out two linear forms. The third one is trivial. So what I have here is, well, this is isomorphic to a vector space over k that is three-dimensional. This here, well, I'm missing one linear form. So this is isomorphic to a vector space k2. But now, I see that I can never have a left-shed element on sigma prime. Why? Well, if I had a left-shed element, and I would have sigma prime, if I had left-shed here, then I would have isomorphism from here to here. In particular, I would have a suggestion from here to here, just by the commutative diagram of the restrictions. No, from a1 to a2. And then the dimensions got the big picture. Yeah, sorry. You're right. Sorry. Yeah. The other way around. So I have the isomorphism here. Otherwise, it wouldn't be a contradiction. Thanks. Thank you. Yes. But that cannot happen. All right? So the confusion is that the generality by saying that the faces divine or correspond to the faces should be nonzero is not enough? Yes. Yes. So this is only enough for guaranteeing that you have a linear system of parameters. But it is not enough for guaranteeing the left-shed element. All right. All right. All right. All right. So what do we want? All right. So we want to have our ring, a sigma, for sigma, the d minus 1 stream. All right? Because we say that i ideal in sigma, a sigma, satisfies the bias pairing property. So in degree k, if, well, what do I want? Well, I want to look at the Pankeray pairing restricted to this ideal. So I look at i k times i to the d minus k to my ground field. All right? That's a Pankeray pairing. 
And this here is non-degenerate, should be non-degenerate in the first factor. All right? So I say that a of sigma satisfies the bias pairing property in degree k. If, well, for all, if it satisfies the bias pairing property at all square-frame monomial ideals. Yeah? Maybe better use the right one. Ah, yes. Yes. So this is related to the property that we want to show, right? We want to show that the pairing, the bias, that the Pankeray pairing does not degenerate at certain ideals. And this is exactly what we are trying to do. And it will turn out to be again related to the left-shed property. And the following, the first is the following descent lemma. So consider sigma a sphere of dimension d minus 1 and k some entry less than d half, strictly less than d half. Then a of sigma satisfies the bias pairing property in degree k if it only, oh, sorry, let me just state the if version, if a of sigma, a of link of a vertex in sigma satisfies the bias pairing property for all vertices v in sigma. So this means that we can always reduce this bias pairing property to the middle degree. So when we pair, when we are looking at a 2, right now, what we are looking at is a 2k minus 1 dimensional sphere. And so the implication is we only need to consider sigma 2k minus 1 dimensional. So we are comparing ak of sigma times ak of sigma 2k. So you want just the non-generacy in the lowest dimension, in the lowest k, not in the second one. So here you only want it in the case where it's actually the middle pairing. That is right, in the lowest k. The original thing, do you want it to carry less than the local dealer? In the end we want it, OK. So I can state this definition for all of them. But in the end I would only want it for k less than or equal to d half. So let's restrict. So we only want it in this case. In fact, we only want it in the middle case in the end. Just like when, if you remember when we had this perturbation lemma, it reduced to a pairing question in the middle degree. All right? So, OK, so now we have to understand the pairing in the middle degree. And what I will do is I will go over this in two steps, in two levels of generality, to convince you that understanding this property, all right, so now this is really just the middle pairing in the ideal, all right? So it doesn't matter whether I say that it's non-degenerate in the first factor and the second factor is non-degenerate. And I want to convince you that this non-degeneracy of the pairing that I want is again related to a left-shed property, OK? So let me, I have some space here. So let's consider sigma of dimension 2k minus 1, all right? Now, we want to consider square-fee monomial ideals, and they come from restrictions to sub complexes. So we want to consider ideals i of the form a of sigma to a of delta, some sub complex, OK? So now let me imagine a very simple kind of sub complex. So let's consider the case where delta is a co-dimension 1 sphere in sigma, all right? So that kind of turns out to be a rather simple but powerful case that we can look at. So what we have, all right, so now we have some odd-dimensional sphere. I cannot draw interesting odd-dimensional spheres, so I will draw an even-dimensional sphere. This is my sigma, and here is my sub complex delta, all right? It parts my sphere into two components, let's say d and d bar, d and d bar. So i sigma delta, well, it is generated by i sigma d and i sigma d bar, all right? 
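To keep the definitions in one place before the hemisphere argument continues: the following restates the bias pairing property and the descent lemma just given; the degree in which the links are required to satisfy the property is taken to be the same k, which is how the reduction to the middle degree is used here. An ideal I in A(Sigma), for Sigma a (d-1)-dimensional sphere, satisfies the bias pairing property in degree k if the Poincaré pairing
\[
I^{\,k} \times I^{\,d-k} \longrightarrow A^d(\Sigma)\cong K
\]
is non-degenerate in the first factor, and A(Sigma) satisfies it in degree k if this holds for every square-free monomial ideal I. The descent lemma says that for k strictly less than d/2, A(Sigma) satisfies the bias pairing property in degree k provided A(lk_v Sigma) does for every vertex v of Sigma; iterating, one only has to treat the middle case, Sigma of dimension 2k-1 and the pairing of A^k(Sigma) with itself into A^{2k}(Sigma).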
So it is generated by the monomials in the northern hemisphere and the monomials in the southern hemisphere. In fact, these two hemispheres, they stand orthogonal on each other. If I have a monomial here, all right, and I have a monomial here, they multiply to 0, all right? So these are orthogonal on each other. Let me say it in words, standard orthogonal on each other. Hence, if I want to prove the bias pairing property for i sigma delta, I could just as well say, prove it for i sigma d or d bar, it doesn't matter, all right? I can prove it somehow. So prove bias pairing property for i sigma d bar. OK. So first observation, i sigma d bar, this is isomorphic to what I called a sigma d bar, which is, I have to press a little harder with the chalk, which is, OK, so this was the nonface ideal of d bar modulo the nonface ideal of sigma, and then I mod out the linear system of parameters theta, all right? In fact, it fits into an exact sequence, a sigma d bar to a sigma to a d bar, OK? So these two are isomorphic. I d bar, this is the nonface ideal of d bar modulo the nonface ideal of i sigma, for sigma. So your definition of a sigma d bar is, I think it was given last time. Yes. It means the quotient of the two ideas, what is it? OK, so let me define again k sigma or k a, k over pair, k a, b for a, a simple complex and b a sub complex of it. And this was defined as the nonface ideal of b modulo the nonface ideal of a, OK? So it is a kind of non-unitarian, something like this. Yeah, and I can think of it as a module over the phase string of a. That's fine, yeah. OK, so these two are isomorphic. Well, it's an ideal in the ring of a. No, it's an ideal in the nonface ring of algebra of a. Yeah. All right. And this is for simple complexes with the same vertices, so not necessarily. What is for simple complexes with the same? So a and b have the same vertices? They don't necessarily have to have the same vertices, but you can always think of the ideal in the larger polynomial ring. So you can add an arbitrary number of vertices, but then the new vertices, if they are not in b, they just correspond to nonfaces of b. So you mod them out again in i, b. Right? So, yeah, that's so. Think of i, b as an ideal over the polynomial ring generated by the vertices of a, the vertices that are not in b, you just ignore, you just kill them again because they're not faces in b. Yeah? I mean, that's the natural way of going about it. Second observation. Well, let's say I want to prove the bias pairing property for an ideal j. Let's say we want bias pairing for an ideal j in a sigma in degree k. Then, yeah, lemma, this is equivalent to an injection from j in degree k. So, I have to add a sigma in degree k to a sigma modulo the annihilator of j in degree k. All right? The bias pairing property in this Pankar-Leyduality algebra is just saying I have an injection from j to the annihilator of j, sigma, a sigma by the annihilator of j. That's just an almost empty statement. All right? So, let's combine this. So, we have i sigma d, which is isomorphic to a sigma, sorry, d bar, a sigma d bar, which is isomorphic to a d, boundary d. All right? Now, what is the annihilator of j? All right? What is the annihilator of j in this case? What is the annihilator of my ideal? Well, these are exactly, what annihilates my ideal? Well, these are exactly the monomials supported in the interior of d bar. So, and what is, okay, so what is, okay, so I take out, so, annihilator of j. And what is the annihilator of i sigma d bar? 
Well, this is exactly, is equal to the, is exactly i sigma d. Hence, a sigma modulo this annihilator, that is exactly what is left. Well, if I take out the ideal supported in the interior of d, well, what do I get? Well, these are exactly, this is exactly, well, now I mod out all those phases that are not contained in d. So, I really am left with a of d. So, hence, we want for the bias pairing property of i sigma d bar, we want an injection, we want an injection from d, boundary d to a of d in degree k, injective. All right? If we have this injection, then we have the bias pairing property. All right? So, it's just a little reformulation here. Okay, so now, how do we prove this injection here? So, the a of d, the a was already depending on some theta, yes? Yes. And so, those, all those things work for which theta? For now, I haven't made any assumption on theta. All right? It's just a linear system of parameters. There's no assumption yet. Now, I want to extract somehow, I want to extract the meat of it. Why, what's, what's the condition? All right, somehow, I want the bias pairing property here. All right? Hence, I want this injection and this injection will now turn out to depend on theta, whether this, whether this map here is injective will depend on theta. But you know, to have this annihilator is equal to something, you know, the two things annihilate each other, but to have the annihilator is exactly something you need. No, it turns out that this, for this is enough that it's a linear system of parameters. There's no genericness there. For this, it's enough that it's a linear system of parameters. There's no genericness needed. To know that the annihilator is exactly. Yeah. Let's not go over it. This is a simple, it's a simple commutative. There's the coin-mechordiness stuff. Yeah, yeah, I use the coin-mechordiness. But this is somehow, this is still the classical commutative algebra. There's nothing fancy here. Okay, no, I can understand. This is the key, kind of the behind what you're using. Yes, yes. But now, let me, okay, so now I want to say why should this be injective? When should this be injective? And now we will see the connection to the example that we did last time. So, when do I have this induction? So, the trick is to consider this, well, let's try to consider this, this map before we have, before we take out the linear system of parameters. So, I will do two things. I will take theta, this is my linear system of parameters. I will write it as a linear system of parameters that is one element shorter and a final element, right? Theta as theta tilde with an additional element l, okay? Next, observe that if I look at k of, well, before the itinian reduction, k of d boundary d, then, well, what's, if I map this to k d, all right, I will take the phase ring of d, what is my, I mean, first of all, it's an injection, right? Every monomial, all right, so, remember, this here was, this was a polynomial ring, k of x, modulo i d, this here is i d, i boundary, i boundary d, modulo i d. What is here is just k of boundary d, all right? That's it. So, before the itinian reduction, this is a short exact sequence. In particular, I have an injection here, all right? Before the itinian reduction, I have an injection, okay? So, you see perhaps why I took theta tilde, why I split theta into theta tilde and one additional element. Well, okay, so, this object is called Macaulay, this object is called Macaulay, this object is a sphere, it's also called Macaulay, all right? 
So, what, what is a cold dimension? Well, the cold dimension here is a dimension of d plus one, and this is a dimension of boundary d plus one, which is one lower, all right? So, this here, all right, if I choose this in some sufficiently generic way from theta this splitting, then theta tilde will be a linear system of parameters for boundary d. So, by calling Macaulayness, what I get is k of d boundary d to modulo theta tilde to k d boundary d, sorry, k d modulo theta tilde to k of boundary d to theta tilde, to zero. But because, all right, theta tilde is regular on the boundary of d, therefore I can take it out, this is all exact, it's fine. But now I want to take one, all right, I mean, to get back to a of d, I want to take out this additional linear from l, all right, the one that is missing. Okay, so what do I need? So, this is degree k, this is degree k, this is degree k. Well, I mean, I could try to just mod it out, but then I see that this l will no longer be part of the linear system of parameters for boundary d, all right? So what I have is boundary d from degree k minus 1 to degree k of boundary d modulo theta tilde, and I have this multiplication. In general, there's no reason to expect that this map is injective, but this here is a sphere of co-dimension 1, right? So this boundary d is a sphere of dimension, okay, so I started with a sphere of 2 dimension, 2k minus 1, so now it's a dimension 2k minus 2, all right? So this here from degree k minus 1 to degree k, these are exactly the Poincare dual components, right? The fundamental class lives in degree 2k minus 1, all right? So this is exactly the Poincare dual component. This is exactly the middle left-shed map. So results are, conclusion, the injection A of d, boundary d to A d in degree k, which is equivalent to the bias pairing property for i, right? For i sigma d bar. This is equivalent to the left-shed property. Well, for k boundary d, it's equivalent to the left-shed property, right? So this is equivalent to the left-shed property. So this is equivalent to the left-shed property, right? The injectivity of this multiplication is equivalent to getting the model, the exactness under modding out the last element, L. In particular, the left-shed property is equivalent to this injection, right? Notice, right, so, Ofer, are you happy? So you have the snake there or something like that? Yes, yes, it's just a snake there, yes. Okay, so the injectivity is equivalent to the kernel being subjective. Yes. And since it is a system of parameters, it's the kernel here is 0, like coin or coin, okay, and so it's the same as the kernel, okay. All right, and this explains this example that we had last time, all right? So remember, I told you p1 cross p1 is a bad example for the bias-perry property or for this whole amount of relations, all right? Why? All right, so p1 cross p1, it looked like this, all right? And now what I took is, all right, so this is sigma. Let's say d bar is everything in the slow ion sphere, all right? And then I take my linear system of parameters, which was, okay, so my linear system of parameters, let me restrict it immediately to the boundary of d, all right, equal to the boundary of d bar. Well, this was, this is some vector, well, it's some non-trivial vector, then another non-trivial, well, okay, so now I'm in degree two. I mean, it's some non-trivial vector and then the zero vector, all right? The second, the redundant linear form here is just zero. In particular, it is not a left-shed element, all right? 
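To summarize the chain of equivalences just established (the P^1 x P^1 example continues below): for Sigma a (2k-1)-dimensional sphere, boundary D a codimension-one subsphere cutting Sigma into hemispheres D and D bar, and theta split as (theta tilde, l) with theta tilde a linear system of parameters for the boundary of D,
\[
\text{bias pairing for } I_{\Sigma}(\bar D) \text{ in degree } k
\;\Longleftrightarrow\;
A^k(D,\partial D)\hookrightarrow A^k(D)
\;\Longleftrightarrow\;
\cdot\,\ell\colon A^{k-1}(\partial D)\to A^{k}(\partial D)\ \text{injective},
\]
and since the boundary of D is a sphere of dimension 2k-2, the last condition is precisely the middle Lefschetz property for it with respect to l.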
In this case, the Lefschetz property in question is an isomorphism from degree zero to degree one, from A^0 to A^1, and because the last form is just zero, because L is just zero, it is not a Lefschetz element; therefore the bias pairing property is violated. So the Lefschetz property in codimension one governs this bias pairing property. That is a good takeaway that we have. Okay, I think we are badly over time, so let me stop here.
|
Lefschetz, Hodge and combinatorics: an account of a fruitful cross-pollination Almost 40 years ago, Stanley noticed that some of the deep theorems of algebraic geometry have powerful combinatorial applications. Among other things, he used the hard Lefschetz theorem to rederive Dynkin's theorem, and to characterize the face numbers of simplicial polytopes. Since then, several more deep combinatorial and geometric problems were discovered to be related to theorems surrounding the Lefschetz theorem. One first constructs a ring metaphorically modelling the combinatorial problem at hand, often modelled on constructions for toric varieties, and then tries to derive the combinatorial result using deep results in algebraic geometry. For instance: - a Lefschetz-type property implies that a simplicial complex PL-embedded in R^{4} cannot have more triangles than four times the number of its edges (Kalai/A.), - a Hodge-Riemann-type property implies the log-concavity of the coefficients of the chromatic polynomial (Huh), - a decomposition-type property implies the positivity of the Kazhdan-Lusztig polynomial (Elias-Williamson). At this point one can hope that algebraic geometry indeed provides the answer, but this is only the case in very special situations, when there is a sufficiently nice variety behind the metaphor. It is at this point that purely combinatorial techniques can be attempted to prove the desired statements. This is the modern approach to the problem, and I will discuss the two main approaches used in this area: first, an idea of Peter McMullen, based on local modifications of the ring and control of the signature of the intersection form; second, an approach based on a theorem of Hall, using the observation that spaces of low-rank linear maps are of special form.
|
10.5446/53466 (DOI)
|
So last time I stopped with the argument for Poncaredoality, we will do this more general anyway soon, but now I want to actually go to the hard left sheds here and its applications a little and then first of all I will cover the classical version, well, for toric varieties at least, and then I will go to this, do the left sheds version beyond positivity. I will motivate it by questions in PL topology and others, but the meat of the mini course will be the proof of that. Okay so one three, the hard left sheds theorem, one is just the kind of the overview without really the meat of the proofs, that's one for one. So hard left sheds in company. So let's do the classical version first. So last time we discussed sigma in the case of it being the boundary of a simplicial polytope and let me fix the dimension so P is a simplicial D minus one polytope, sorry D polytope, so that the sphere is of dimension D minus one. Alright and then we consider this algebra A of sigma, alright, we consider algebra A of sigma and there were two ways to define it that were isomorphic, so there was a way of thinking about this as the algebra of cone wise polynomials modulo, the idea generated by the global linear functions and then there was this phase-ring way of taking a polynomial, just the free polynomial ring and then taking some quotient and somehow now it's good to remember the definition as a ring of cone wise polynomials because I want to consider this ring and I consider L in A1, so I did give you one function, so this is just a function that is cone wise linear, alright, so it's linear on every cone of sigma and this should be convex and let me say, okay, so let me say strictly convex and explain what I mean because the convex geometries in the audience might protest a little because this is not strictly convex in the convex geometry sense, so let's say I have my sphere sigma, alright, and I see it as a fan and what I want is the function to be, well, cone wise linear convex and the domain of linearity should be exactly the cones of the fan, alright, so this, I could draw the level set at one and this would be an example of such a function and this would not be an example because it is not strictly convex here, alright, so this, the domain of linearity is just larger than the domain of linearity. What does the drawing mean? So this is a fan, okay, I think of it, so I told you think of it for the moment as the functions, cone wise polynomials on the fan. So now what I draw is the level set at one of the functions. Is this function positive outside of zero? Yeah, yeah, I mean I can always assume it because, oh well, yeah, yeah, I take out the global linear functions, I can always assume that it is, somehow that it is positive everywhere and now I take the level set at one, okay. By the way, the notation from last time you had a ring, some ring which was some dimension and then you made a zero-dimensional ring by dividing by, yeah, and so this is the full one, this A sigma is the one. No, no, no, it's a finite dimension. Is it a finite? No, no, no, this is a finite dimension, this is a quotient. Yeah, but it seems to be, it depends on some free parameters which, yes, yes, yes, but I'm not dimensioning them for now, right, so I'm thinking about it, somehow here I have already the simplisher d-polytope and as I said, if I, if I have a polytope then I have already the vertex coordinates which give me the linear system, all right. 
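One way to write down the strict convexity condition drawn on the board, in the notation of the lecture (the a_tau below are my names for the local linear functions): a conewise linear function l on the fan Sigma is strictly convex if for every maximal cone tau of Sigma there is a global linear function a_tau with
\[
\ell\big|_{\tau} = a_\tau
\qquad\text{and}\qquad
\ell(x) > a_\tau(x)\ \ \text{for all } x \in |\Sigma|\setminus \tau,
\]
so that the maximal cones are exactly the maximal domains of linearity. If only l greater or equal to a_tau is required, l is merely convex; in the toric dictionary this is the difference between an ample and a nef class, which is what the exchange about ample line bundles below refers to.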
This was, I mean, here if I think of it as the algebra of commerce polynomials, I implicitly already have the linear system, all right. I took, so here, for here, think of this here again as P of sigma, more or less the ideal of global polynomials. Global linear function, sorry, I mean the global polynomials also, but this ideal generated by the global linear functions. So and P of sigma, here this was the combwise polynomial functions, polynomial functions, okay. This was a coin. For the moment, this is general enough. So is there enough to make the question final dimension? Yes, yes, yes, yeah. Remember, okay, we have to check on each, okay. Yes, yes, it's exactly this condition that each of these cones here is full dimensional, all right, each of these, each of these simplices spans a cone, so each k minus one simple expands a k dimensional cone. You also have this coordinate that you describe, yes, but it's a sphere, all right, it's called a coordinate, yeah, it's even going to stand, that's right. So and I have this strictly convexity, this is what I drew here. So this here, this is a level set at one and the red part is not strictly convex, so convex but not strictly convex and the white part is. Just want to, the ample line model. Yes, yes, yes, yes. Exactly, so these are the ample line models and if it's just convex, it's just an F. Then for all k less or equal to d half, we have first the hard left-shed theorem. Let me call it the property. This is a real space, if I have rational coordinates, right, if my vertices have rational coordinates, so this is over R, that's right. So it follows from how left-shed is in a big geometry, everything is rational. Yes, but we don't even need that, we will see a purely convex geometric combinatorial proof of this, okay, we will not go to the algebra at all because I want to get to theorems where we don't even have a variety anymore and we will still prove the left-shed. So in the end you should, I mean for the mini course you should forget that there is a variety because we will not use it at all, okay. No, by the way, the projected variety in general is not smooth. Yes, yes, it's not smooth but we will go even further, so it doesn't really matter. If it's rational then it will be toric-oberfold. But if it's just real, right, if the polytop is just real coordinates, you don't even, I mean there are some constructions that kind of mimic this in the real, when you have real coordinates, so you can build things that behave like a toric-variety but then you lose, usually you cannot apply the classical proofs of left-sheds anyway. So we will do it anyway purely combinatorial, okay, Ofer? Yeah? Is there also a theory of like harmonic form theory in this context that can be used to exactly... Yeah, Kehl occurrence, Kehl occurrence, yeah. This is Moises-Zohn manifolds, these things, there are things that exist in this context but it's still, I mean we will not use it. Yeah, but this is going to be a pair of such a... Yes, yes, yes, yes. But I mean this leads into the wrong direction, let me cut it off at this point. So hard left-sheds, so, all right, so this is the isomorphism between... Well, so, I mean we already know that these spaces are isomorphic vector spaces because we have Panker-Reduality but we want to realize this isomorphism by multiplication in the ring, so L to the d minus 2k and this is isomorphism. By the way, there is also, I forgot like it is true but suppose you have several hunter line bundles and you decide to, instead of... 
Yes, yes, yes, yes, yes, mixed. Yeah, this is a mixed version of the left-shed theorem. It also works, the proofs that we will encounter, they immediately imply this mixed version as well, it's immediate, yeah. But let me not state it for now, okay, so... L is this L. Okay, okay. So for Maxim this is ample line bundle, right? Okay. All right. And the theorem does not come alone, it comes with a relative that is, that will be proven in company with it, this is the Hodgman relations. And for this, well, I define the following quadratic form on degree k and with respect to L and this is just I take ak times ak and I multiply to degree, well, I want to have a perfect pairing, so I multiply to degree d and what do I do? Well I take a and b, all right, and I send this to, well, okay, so I take the degree of a times b times L to the d minus 2k, all right, that's exactly what I want from the left-shed theorem. And so the degree is just a canonical identification of this degree d component here with the reals we will encounter this later explicitly. But for, I mean to give it an orientation you can just say, well, okay, so you just have to say what, we would just want to say what is positive and what is negative and let us come to the convention that somehow the degree of a monomial here, right, a face, a monomial of a facet is positive and that's it. And now what we want to do is, well, we want to make a non-trivial statement about this form, right, so the hard left-shed just says this will be perfect but we want to say a little more and this is, if we look at the primitive forms under L, ak. The even homology, all of this is like taking the even homology of some project, you know what I think? Yeah, yeah. Is it considered the odd homology? Because it's like, correct? Yeah. It's correct, it's only coached classes. Yes. We have only coached classes. Ah, okay, okay, okay. Yeah, okay. The homology are on the, okay. Exactly. Where was I? I wanted to say what this is. I want to say what this is, yes. So I take pl ak and this is the kernel of the map from ak to ad minus k plus 1 of sigma induced by taking this element L but multiplying 1 to 5. And so L to the d minus 2k plus 1. And then how are they connected in the Hodgman relations? Well we want this to be psi. Then ql, qkl is definite of sine minus 1 to the k on the subspace pkl ak in ak of sigma. All right? And that's it. That's Hodgman relations. Oh, too far. Okay. minus 1 to the power k. On the subspace that I defined here. This is a kernel. It should depend on just the primitive, okay, not the other one. Just this one. All right. And now let's talk about some applications. So remember, so for corn-mecali complexes, for delta corn-mecali, the H vector, which was? No, no, no, no, for delta corn-mecali. What does that mean? No, corn-mecali is property of the ring. Yeah. Well, it's no, no, no, no, no, but it's a corn-mecali, but we had Hofstra's theorem. Right? We had Hofstra's theorem. Of course it's Hofstra's theorem. By Hofstra's theorem, it's not, it's a property of the ring, but which is also a property. You can also formulate it as property, right? So over. Oh, you get just the range of those years. Yeah, yeah. Well, I mean, yeah, not necessarily. It's a homologi-wise, it's a wedge of spheres, right, with respect to the k-homology. The H vector was a, right, which was the dimensions of the graded components, right, of the, of this ring. This was an M vector, right? 
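Before the discussion of h-vectors continues below, here are the two statements just formulated, hard Lefschetz and the Hodge-Riemann relations, written out in the notation above. For Sigma the boundary of a simplicial d-polytope and l in A^1(Sigma) strictly convex, and for all k less than or equal to d/2:
\[
\text{(HL)}\qquad \cdot\,\ell^{\,d-2k}\colon A^k(\Sigma)\ \xrightarrow{\ \sim\ }\ A^{d-k}(\Sigma),
\]
\[
\text{(HR)}\qquad Q^k_\ell(a,b)\;=\;\deg\!\big(a\,b\,\ell^{\,d-2k}\big)\ \text{ is definite of sign } (-1)^k \ \text{on }\ P^k_\ell \;=\; \ker\big(\cdot\,\ell^{\,d-2k+1}\colon A^k(\Sigma)\to A^{d-k+1}(\Sigma)\big).
\]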
So we had that this was characterized as the Hilbert functions of quotients of polynomial rings, right? So this was just what we did last time. And now, with this theorem, what we can say is that the following vector, the g-vector, the g_i, which are the successive differences of the h_i, h_i minus h_{i-1}... What is the answer? So for the polynomial ring quotient there is no Poincaré duality. Yes, yes, we don't have Poincaré duality there. But now we have more, right? We have the symmetry. By M-vector, you mean? An M-vector, this was the coefficients of the Hilbert series of such a ring, right, of a graded commutative algebra generated in degree one. Right? It's clear that we have this inclusion here, from here to here, because this is already such a graded algebra. Macaulay's theorem just said that you can go in the other direction. But now we can say more, right? Now we have Poincaré duality, so we have the symmetry, but we have even more, we have this Lefschetz property. So what it says is that if I look at this vector g_i, consisting of the successive differences, then, well, I can look at the g-vector; it only makes sense up to the middle, right? The g-vector, where g_i is the dimension of A^i modulo the Lefschetz element, of A^i / L A^{i-1}. So for simplicity, let's just define this for i less than or equal to d half, right? And then it is nonnegative, and it is an M-vector. This is the implication of the Lefschetz theorem. What we will not do in this lecture, but what is also true, is that there is also a reverse construction. So for every M-vector that you write down like that, you can find a polytope. So this inclusion here, this is due to Billera and Lee. It is a combinatorial construction that we will not spend too much time on, but if you want, I can explain it in the coffee break afterwards. All right, so that's one application, all right? So what you are saying is that for any M-vector there is a polytope realizing it? Yes, for every M-vector there is a simplicial polytope whose g-vector, the successive differences of the h-vector, realizes it. And does this hold, up to half the dimension, even beyond polytopes? No, no, right now we are talking about polytopes, we are only talking about polytopes here. Polytopes, or some more general object which one doesn't yet see? Well, what do you mean, a more general object which I don't consider? I mean something that is not a polytope. Ah, yes, okay, so here, in this situation, we should say: for sigma the boundary of a polytope, all right? That is the situation. Cohen-Macaulay is too weak, right? It doesn't follow from Cohen-Macaulayness alone. Yeah, yes, exactly. We will see that this works essentially whenever this face ring is Gorenstein. So can you detect the dimension of the polytope, if you have this h-vector, can the dimension of the polytope be seen from it? Yes, yes, it's just the degree of the fundamental class. Ah, okay, in the Gorenstein case.
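As a concrete illustration of this bookkeeping (an example I am adding, not one from the lecture; the discussion of the Gorenstein case resumes right after): take the boundary of the 4-dimensional cross-polytope, a simplicial 3-sphere, so d = 4. Then
\[
f = (f_0, f_1, f_2, f_3) = (8,\,24,\,32,\,16),
\qquad
h = (h_0,\dots,h_4) = (1,\,4,\,6,\,4,\,1),
\qquad
g = (g_0, g_1, g_2) = (1,\,3,\,2),
\]
where h is computed from f by the usual transformation
\[
h_k \;=\; \sum_{i=0}^{k} (-1)^{k-i}\binom{d-i}{\,k-i\,} f_{i-1}, \qquad f_{-1}=1.
\]
The symmetry h_i = h_{4-i} reflects Poincaré duality, and g is nonnegative and indeed an M-vector: since g_1 = 3 = \binom{3}{1}, Macaulay's bound allows g_2 up to \binom{4}{2} = 6, and here g_2 = 2, as the Lefschetz theorem predicts.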
Yes, but polytopes, it's always, in the case of polytopes, we always have a sphere, all right? So, in other versions, any n vector comes from, comes from, of course, you start with the, from, from, from the, the other. Yeah, no, no, no, no, any n vector, but we are, somehow by, by passing to the g vector, I lose the symmetry, right? Right, I'm only looking at the g vector up to the middle. No, for, for any n vector, you don't know the dimension, I mean, the dimension of the polytope is not, I mean, the, the polytope is not. No, no, no, okay, so if I just give you the n vector, you want to, you have to give me in addition the dimension of the polytope that you want, that's right. You have to, in addition, give me the dimension of the polytope or you have to say, okay, the n vector goes up to some entry and then the polytope will be of double the dimension, right? This here only, the g vector here, it only goes up half the dimension. You remember in the one above, the stuff above that you recall, if you give an n vector in Macaulay's theorem, if you give an n vector, what is the dimension? You cannot say that because coning preserves it. In Macaulay's theorem, you can always take a cone, right? If you give an h vector, you could always take a, and you could find a simplicial complex at, sorry, you give an n vector, find a simplicial complex, a common Macaulay's simplicial complex that realizes that m vector as its h vector, then you could take a cone over that and it would still be the same h vector. So then you cannot recover. Just in the Goehrstein case, if you have the h vector, right, if you know that it is, okay, even then you can say, okay, even then you can cone, but if you say it is a sphere, then it must be, then it's the dimension of this first determinant. The coning that's... Can you forget about coning Macaulay? Can you forget about coning Macaulay at all? Well... Is it a special role? We will see later that it plays a special role. So, Con Macaulay will remain in the background a little. So, can you finish, find the g vector you have the dimension or not? Okay, so from the g vector, if you give me the g vector, you have the dimension because it's just double, yeah, yeah. All right. All right. And let me just briefly mention, okay, so this was an application of the hard left side, let me briefly mention an observation, I think it goes back to Timurine, but maybe it's also earlier. The application of the Hojima relations, and this is kind of important if you want to go back to the local cavity that I mentioned at the start for characteristic polynomials. So we take alpha and beta two convex elements in A1 of sigma. All right. And yeah, let me stay that. No, it doesn't have to be for whatever, okay, so for the moment, let's say strictly convex, okay. But we will see in a second that we can delete it. So then you can write down, well, okay, so let's write down the Hojima by linear form in degree one. But let's not write it down completely, but let's just restrict to the subspace by alpha or span by alpha and beta. It's a two-dimensional subspace. Now, you just write that down. So then what do I have? Well, I could, so if I think of, so let me think of beta as my L and alpha is just another form and then I could write down degree of alpha times beta and then I have to multiply with beta to the d minus second power or some other element. Let me just say, okay, so L is, let me just, let me just, L is any other element, right, to the d minus two. 
And actually I want in this entry, I want, right, so I shouldn't have written it like this. So here's my cake, so I want to have, I want to write down the Hojima by linear form on this two-dimensional subspace. So I have the degree of alpha times beta times L to the d minus two and then I have, I'm stupid, alpha squared, alpha times alpha. This is degree of alpha times beta times L to the d minus two and then I have here the degree of beta times alpha times L to the d minus two and then I have the degree of beta squared times L to the d minus two. All right, and I have this, this is the matrix that I get for the Hojima by linear form restricted to this two-dimensional subspace. And what do I get? Well, okay, so now let's look at the signature of this matrix. So I have one positive eigenvalue coming from degree zero, right? This is minus one to the zero, right? This is one positive eigenvalue and all other eigenvalues on A1, right? So now then we consider, consider here QL in degree one, in which order did I write it? Q1L, right? This is the Hojima by linear form on degree one, right? All the other, all the other eigenvalues are negative. So what happens to this matrix? This is, what is the definite, this is the signature of this matrix? Well, it's an easy argument for matrices that the signature, somehow the signature can be neither, can be neither definite, positive definite nor can it be negatively definite. So there's a positive eigenvalue, there's a negative eigenvalue. So meaning that in particular if I compute the determinant of this, all right, it will definitely not be definite. So the determinant will be negative. So what do I get? Well I get that this times this, so degree alpha squared and then L to the, L to the d minus two times degree beta squared times L to the d minus two. And then what else do I have? Well I have this times this, so I'm not just compute the determinant, all right? So I have minus degree alpha beta L to the d minus two and this I have squared and this here is less or equal to zero, all right? So what does this mean? Well, if I put this to the other side of the equation, then this suspiciously looks like the Alexandro-Fentiel inequality and it is. This is Alexandro-Fentiel. So you look at the mixed volumes of convex bodies. So you take the Minkowski sum of convex bodies, so A convex body, B convex body and then you take some other convex bodies and compute the volume as a function of the dilation. So let's say you have the function, compute the volume of t little a a plus t little b b plus some other convex bodies, okay? And then you measure, you want to look at the coefficients here of this X volume and you want to look at the mixed coefficients, right? Ta times tb, which is exactly this coefficient and this is larger equal to the product of the two adjacent, so ta squared times tb squared. That's the point. That's Alexandro-Fentiel. Sorry, may I have a question? Yes? Can you please explain why this matrix couldn't be negative definite? What if alpha and beta are both primitive? That is negative definite, isn't this true? So the Ho Chi-Riemann form is negative definite on the first primitive source. Yeah, this is why I assumed that they are both convex. I see, okay. And of course, the point is now that I know this inequality, I can remove strictly, all right, because any convex form is an approximation of strictly convex forms and then I don't no longer need convex. I don't no longer need strictly convex on the fan, okay? Yeah. Thank you. 
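In matrix form, the computation just carried out is the following; this is only a transcription of what is on the board. Restricting the Hodge-Riemann bilinear form Q^1_L to the span of alpha and beta gives the Gram matrix
\[
M \;=\;
\begin{pmatrix}
\deg(\alpha^2 L^{d-2}) & \deg(\alpha\beta L^{d-2}) \\
\deg(\alpha\beta L^{d-2}) & \deg(\beta^2 L^{d-2})
\end{pmatrix}.
\]
The ambient form has exactly one positive eigenvalue, so M cannot be positive definite, and strict convexity of alpha makes the top left entry positive, so M is not negative definite either; hence det M is at most 0, that is,
\[
\deg(\alpha\beta L^{d-2})^2 \;\ge\; \deg(\alpha^2 L^{d-2})\,\deg(\beta^2 L^{d-2}),
\]
which is the Alexandrov-Fenchel inequality once the degrees are read as mixed volumes, with the strict convexity assumption removed afterwards by approximation, as explained above.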
Excuse me, can you hear me? Yes? So Alexandro-Fentiel, I think you only obtain if you use the mixed version of the outlaps. Yes, I mean I cheated here. I took the same L, you're right. But I won't go there. Yes, I cheated, but don't tell anyone, okay? It's, yeah. Yeah, you're right. So this is not quite the most general version. So and that's all good and nice. So we have many applications of these theorems, but there are some questions where somehow this version of outlaps and odd dream is just not enough. So let me just give two of them that are interesting. The first is kind of immediate, all right? I mean, so we characterize the G vectors for the boundaries of the potential polytops. But we know by also theorem that the Poincare duality extends more generally, all right? The Poincare duality, we mentioned last time, it works for general triangulations of spheres, even homology spheres, all right? Also we worked now over the reals, but I could go to any other characteristic, all right? So, well, does this theorem extend? Does this characterization, by the way, I should say that this is due to Stanley, this observation. Does this observation of Stanley extend? All right? So question one, does Stanley extend? Does the G theorem extend? Well, two, okay, so let me spoiler the most general version we could do, at least for homology spheres would be, yeah. So K homology spheres, for K homology spheres. We will even go more general, but for the more general I have to explain a little bit, for two, two, two, two, triangulated, yeah, sorry, two simple, yes, thank you. All right, I mean, and this is really just one direction. If we prove the Hart-Levschad theorem for the A-rings of simple homology spheres, then in particular we get this automatically because the Bilet-Roli already gives us the other direction. Okay, that's one theorem, or one question for now, it will be a theorem in, well, I don't know whether it will be small, whether we get done this week, but probably not, but by the end of the Hadamard lectures we will be done. And my second favorite question in this context, let me start a new blackboard because it's my favorite question, and it's my favorite result because it's just so, it's rather beautiful. This was the Grünbaum conjecture, right, some of this Grünbaum problem of we look at a simple Schoen complex, embed it into R2D, and we assume that for us the embedding is PL, and then we want it, all right, so we want the question, and as I will explain the theorem, that the number of, let me, the number of k phases of delta is at most k plus 2 times the number of k minus one dimensional phases of delta, that's the theorem that we will prove. And how does this, okay, so we already explained how this G theorem follows from the left-shed property, how does this follow? Well, so to start with, let us, instead of just embedding it into R2K, we could also embed it into S2K, obviously, all right, we could just compactify. What we also could do instead of just embedding it into S2K, we could just think of delta as a subcomplex of sigma, a triangulation, a triangulated S2K, all right, and this is the only case where we, the only point where we use a PL-ness to extend delta to a triangulation of the sphere, and then what we can do is, well, we can play around with the numbers a little. So let me see what was the oldest part, so go here. You mean that each symbol says kind of sum of symbolizes in triangulation, yeah? Yeah, yeah. 
No, no, no, wait, wait, wait, you don't want the sum of symbolizes, this is really. No, no, no, but here in this thinking of this as a triangulation, all right, there is a simplification, there is a simplification complex, all right, that contains delta as a subcomplex. Why so? Because it's a part. You have to, yeah, there is an argument that you can make for PL embeddings, ladies, for instance, I mean, it's written down in Bing's, yeah, it's not obvious, but it's also not too sparse, yeah, it's written down in Bing's note on three manifolds, I think it's called topology of three manifolds and Poincare conjecture, something like that. Maybe Johanna, can you look up whatever I cite in the, I will leave it for the moment. No, no, no, it's not an odd thing. It's an old thing, yes, but it's not an odd dimensional thing, it's not something about the three sphere, yeah. But here everything is standard, you mean, when you say triangulation, there is two K, it's not anything exotic, it's just something K equivalent. Yes, yes, yes, yes, that's the point, it's more, as long as you can extend it to a triangulation of the sphere, this bound applies, but it's not obvious that you can always do that. It's Bing's geometric topology of three manifolds, or it's the 40 of the AMS. Metric. Topology of three manifolds, it's from 83. Of three manifolds. Thanks. All right. And so, how do I go from here to here? Well, here's the observation. So let's look at, we have a sigma, all right, and we have, it's quotient a delta, right? That's just the restriction map to delta, to the phases of delta. So we have the subjection to a delta. Actually, let me, anticipating a little, let me write it like this. All right. So here delta was, what was delta? Any simplification complex that embeds into the two K sphere, all right? And I extended it to a triangulation. The integration simplification complex, which is really good at the sphere, all right? It actually doesn't have to be. Tom, I don't actually care whether this is PL equivalent. It can be just a triangulation, a non-PL triangulation. In fact, I... So what are the triangulations? Well, they are non-PL triangulation. And I actually don't care that the sigma is even a homotopy sphere, all right? So this, the theorem here applies whenever sigma, or whenever delta can be realized as a sub-complex over homology sphere, over K homology sphere. Yeah, you need only K homology. Yeah, yeah, yeah. But I mean, the original question here, it starts out by a PL embedding, all right, into R2K. So I have this suggestion, that's good. And now I can look at the dimensions of the graded components and try to, I can try to bound them a little. So for instance, I could look at the degree K component of delta. And well, let me try to give a generating system, all right? This always estimates things from above. So I try to give a generating system. So what better than the cardinality K faces? So this is at most the number of cardinality K, or dimension K minus 1 faces of delta. All right? And right, if I estimate something from above using the number of generators, I can do a similar thing for an estimate from below, right? I can just take the generators and then estimate the number of relations. So let me do this in a specific dimension. So the dimension of a K plus 1 of delta, all right? So I write down the number of generators. So the cardinality K faces of delta. And then I have to work a little and think a little about the number of relations that come in, right? 
So if I was talking about torque varieties, no, I would look at rational equivalence to 0, right? And OK, so this takes a little thought. Maybe we'll, somehow, we will later see a model of this ring where it is kind of obvious. But for now, let me just say that this is K plus 1 times the number of faces of delta. So for every cardinality K face, right? It's a ring condition, right? Comes from one degree below. I get K plus 1. But you divide by, when you divide the ring of sigma, which is in dimension 2K, you have to divide by linear form. But the number of them is a dimension. Is that something like 2K or 2K? Yes, yes. But I mean, so there are some elements that just take out the square full terms, right? Some of the monomials, somehow, I restrict, I will take out some elements, right? So some monomials that contain squares, right? That are not square free. So the first, if you think about it, if I am looking now at the cardinality K plus 1 faces, so the first K plus 1 linear forms, they will just kill off, sorry, the first K linear forms, they will just kill off square full terms, right? They will just affect terms, somehow, they will kill off the monomials in my ring, they're square full. And then only after that do I get, do I do the square, do other square free terms affected? That is intuition what is happening, right? It's a free polynomial ring, so it has a lot of square full terms, right, initially. Square full meaning here, not square free. I don't know whether this is a standard terminology. And only after then I will take out the square free. We will go later to a model where it is obvious, okay? But of course you have to justify it regularly by saying that. Yes, yes. But the trick is here, we will later see a model where it is obvious, okay? We will go to the Ishida complex and then it will be obvious. Doing it in this model is kind of tedious. Yeah? Or me? Here you are working in a situation where you don't have a uniform, so you have to take out the quotient by this generic uniform, yeah? Yes, yes, yes, yes. I'm still working with A, right? Yeah, that's right. It doesn't have to be generic. And so here in this situation also the ring, in each degree, is generated by square free. Yes, yes, yes. It's actually, I cheated a little, right? I didn't say that it's generated by square free, but tomorrow, tomorrow. Because A is generated, right? As a k vector space by the square free elements, but we didn't actually do that yet, you're all right. Okay, because it is not clear that you can choose the linear form to be generated. No, no, no, no, as long as the linear forms reduce some of the dimension to zero, reduce the cold dimension to zero, this is true. The ring will be generated by square free elements as a k vector space. Okay, did you start again on the regular? Yes, but again, instead of, all right, this was some of the overview, the introduction section. Instead of explaining this now, we will later see a model but it's obvious. Okay, so we will introduce another model for this ring and there it will be obvious. For now, take it as a mystery, but later we will see it in detail. Okay, Ofer? Okay. Okay. I mean, we're still hiding the subtlety of the dimension and the number of the hidden theta that we're depending on. Right? So this will be assumed that we have the same number as the dimension and here we don't, so the three. Don't remember that we're using the same data for both and so we have more than we used for. Yeah, that's right, that's right, that's right. 
So right, so now the number of thetas — the length of the linear system of parameters — is the Krull dimension of K[Σ], which in general will be larger than the Krull dimension of K[Δ]. And the statement about being generated by square-free monomials holds for any embedding of Δ into a homology sphere, or rather: any time you have any Δ and you mod out a linear system of parameters — you take out enough linear forms to make the Krull dimension zero — it will be generated by square-free monomials. Okay, we will see it; we are getting distracted from the inequality. Because now, you see, the inequality becomes even nicer. This inequality was already very nice, but what I now have to show is just that the dimension of the degree-k graded component of A(Δ) is at least the dimension of the degree-(k+1) graded component of A(Δ). And why is that? Well, this is the reason that I wrote things like this. I have an isomorphism between degree k and degree k+1 of A(Σ): they are Poincaré dual — the sphere has dimension 2k, so the fundamental class lives in degree 2k+1. And if the Lefschetz property holds, then what I can do is look at the following quotients: A^k(Δ) and A^{k+1}(Δ). On top I have the Lefschetz element, and of course the induced map on top, which will be an isomorphism if I have the Lefschetz property; the vertical maps are surjections by construction, so the composite down to A^{k+1}(Δ) is surjective. So surjectivity up there implies surjectivity here, which implies this inequality, which then implies my desired inequality. That's it. All right, and now let me go to the theorem, and let me use the big board for that. There is a similar argument for degrees strictly less than d/2, right? Here we are at the middle, but somehow the middle is the most interesting inequality in the end, at least for me, because of the intersections, if they exist — if Δ is so dense, we expect more transversal intersections. — Oh, by the way, you get very large complexes; this is some kind of extremal inequality, yeah? Is there some kind of characterization — it should be some kind of tight object; like for graphs in the plane there are generalizations, it should be something special. — Yeah, but the maximal objects are really tricky to understand in higher dimensions. Here's the issue that you run into in higher dimensions: if you start with a graph in R^4, then you could ask, well, how many two-dimensional faces could I add, or how do I embed it optimally? And in general, an optimum like this does not really exist. As I said, if there is an embedding, then you can extend to a triangulation in a nice way, in a certain sense; but otherwise that is not true. All right. So here's the theorem that we prove.
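Before the theorem is stated, here is a hedged summary of the counting argument just described, with f_i(Δ) denoting the number of i-dimensional faces; the exact constants are my reading of the spoken argument, not a verbatim formula from the talk:
\[
\dim_k A^{k}(\Delta) \;\le\; f_{k-1}(\Delta),
\qquad
\dim_k A^{k+1}(\Delta) \;\ge\; f_{k}(\Delta) - (k+1)\,f_{k-1}(\Delta),
\]
\[
\text{Lefschetz for } \Sigma \;\Longrightarrow\; \dim_k A^{k}(\Delta) \;\ge\; \dim_k A^{k+1}(\Delta)
\;\Longrightarrow\; f_{k}(\Delta) \;\le\; (k+2)\, f_{k-1}(\Delta),
\]
which for k = 2 would give the bound "triangles at most 4 times the number of edges" for a 2-complex PL-embedded in R^4 mentioned in the abstract.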
And there are two versions. The hard Lefschetz theorem here has two proofs, one from 2018, and then we recently have another one together with Papadakis and Petrotou, from 2021, that gives a slightly different flavor of argument, which nevertheless relies on the same intuition and the same basic idea that I will explain. So we start with Σ, a k-homology sphere. And this for me really just means: it is a k-homology manifold such that globally it also has the homology of a sphere. So it's really just — if you want to think about it — Gorenstein; I think it's called Gorenstein*. So the cardinality of the facets is the same as the degree of the fundamental class — or, let's say, the dimension of the complex is the degree of the fundamental class minus one, all right. And then the statement is as follows. We have Σ — specifically it came with a linear system of parameters θ. And what I can now do is look at the moduli space of these rings A(Σ), where I vary θ; there are many choices for θ. And what I take is a generic θ. And once I took a generic θ, I take a generic ℓ in A^1(Σ). And then we have the following theorem; the first part is the hard Lefschetz theorem. — Yeah, there is an open dense set of linear systems θ that I can take; that's the point, yes. — And generic θ and ℓ means generic in the set of pairs (θ, ℓ), or generic cross generic? — Yeah, yeah, generic in the pairs; generic cross generic in the pairs, yes. So, hard Lefschetz: for all k less than or equal to d/2 — I should have said Σ is of dimension d − 1 — I have an isomorphism between degree k and degree d − k induced by multiplication with ℓ^{d−2k}; this is an isomorphism. And that is the analog of hard Lefschetz. And again, the classical Lefschetz theorem came with the Hodge–Riemann relations. This version also comes with a companion that concerns the bilinear form Q, but in a different way. Here is the 2018 version. What it says is: I look at Q_{k,ℓ}, which goes from A^k × A^k to A^d, as before — that's the same bilinear form. And now what I do is restrict to certain subspaces; I basically restrict to the torus-invariant subspaces, if I want to think about this in toric geometry — or, if I want to think about this just in terms of the ring, then I'm saying that this form is perfect when restricted to any square-free monomial ideal. So I can no longer say anything about the signature — Hodge–Riemann says something about the signature on certain subspaces; I no longer have any handle on the signature in any way or form here. But I say: okay, this form at least doesn't degenerate on many subspaces, subspaces specifically motivated by the combinatorics. Okay. Yes, Maxime?
No, I didn't say that. — A square-free monomial ideal is something like what we had here? And we want this in the cases k less than or equal to the middle? — So these are just the ideals — what do you mean? I take a subcomplex Δ of Σ and I take the kernel of the restriction map. These are the ideals I'm looking at, yeah. Of course, for the statement to be non-trivial, they have to witness something from the low degrees, from degree k less than or equal to the middle. That's statement number one. And — huh? Yes, thank you. Thank you. Okay. Ofer? — Okay, there is some subcomplex Δ. — Take any subcomplex, yeah; so for any ideal of this form, okay, yeah. — I'm sorry, may I ask a question? Regarding the signature of this pairing: do you have some counterexamples, or do you just not know how it behaves? — In general — okay, so you can give examples where you just have no chance of getting the right signature. Notice that this does not even depend on ℓ: if I have a sphere of odd dimension, let's say 2k − 1, then the pairing in the middle degree k is just the Poincaré pairing on degree k. And already the Poincaré pairing, for a general linear system of parameters, in general has the wrong signature. Even for a one-dimensional sphere you can show that there are linear systems of parameters where you don't get a signature of one positive eigenvalue and all the remaining ones negative — so a signature of the form (+, −, −, ..., −). Yeah. — And is there some geometry in this moduli space that you control? — I don't think it's so easy to understand the geometry of this moduli space. In particular, there is the Lefschetz locus, if you want — the space of (θ, ℓ) where you get the Lefschetz property. We just know — I mean, the proofs just give it for generic (θ, ℓ). We will see why, but I don't have any control over it. — So you cannot point to any particular member of this moduli space; you don't have a decomposition of the picture? — No, no, I don't think we have that; I don't know whether this is feasible. All right. Let me state the new version. And, in the interest of saving a little time, let me actually just state the characteristic-2 version that it comes from. So the previous one was the version of 2018. Now we go to the paper by Papadakis and Petrotou in 2020 — the general version is the joint paper from 2021 — and I will just state the characteristic-2 version. If the characteristic of k is 2, then there exists a field extension k̃ of k such that Q_{k,ℓ} — where now I take θ over this field extension, and I take ℓ also in this field extension; so I really just take a nice transcendental field extension in the end — such that Q_{k,ℓ} never degenerates. There I said it doesn't degenerate on monomial ideals; now I'm saying it never degenerates: even if you just take any degree-k element and multiply it with itself in this form Q, the result will be non-zero. Never degenerates: Q_{k,ℓ}(α, α) is not 0 for all α. — α defined over the original field, you mean? — α can be defined over the larger field as well. So now — I should maybe say — we are looking at Ã, to make clear that we are over the larger field.
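Before continuing, a hedged transcription of the three statements just discussed into symbols, with d − 1 = dim Σ and Q_{k,ℓ}(x, y) = deg(x · ℓ^{d−2k} · y); this is my formula-level reading of the spoken statements, not the speaker's written theorem:
\[
\textbf{Hard Lefschetz (generic } \theta, \ell\textbf{):}\qquad \cdot\,\ell^{\,d-2k}\colon\; A^{k}(\Sigma) \;\xrightarrow{\ \sim\ }\; A^{d-k}(\Sigma), \qquad k \le d/2 .
\]
\[
\textbf{2018 version:}\qquad Q_{k,\ell}\big|_{I_{\Delta}} \ \text{is non-degenerate for every square-free monomial ideal } I_{\Delta} = \ker\big(A(\Sigma) \to A(\Delta)\big).
\]
\[
\textbf{Papadakis--Petrotou (char 2):}\qquad \exists\ \tilde{k} \supset k \ \text{such that}\ Q_{k,\ell}(\alpha, \alpha) \neq 0 \ \text{for all nonzero } \alpha \in \tilde{A}^{k}(\Sigma).
\]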
So Ã just says: we have A(Σ) and θ, but everything is now over the larger field. And this will never be 0, for all α in Ã — not 0. Yes, for nonzero α in Ã(Σ) it is not 0, that's right. All right. And these are the two versions. I think now is a good time for a short break, because that's the end of the statements of the theorems. All right. So now we will go a little into some homological tools, some very basic homological tools. This section will be called the partition complex — I started feeling kind of naked without a mask — the partition complex and Poincaré duality; or, if you prefer, the Gorenstein property. And this will have two subsections: 2.1, the works, and 2.2, the cheats. First we will explain why certain algebras are Poincaré duality algebras in this context — in particular for spheres — and we'll explain it in some detail, just because we will encounter a tool that is useful in all kinds of contexts later. And then we will see that, if you're just interested in Poincaré duality, there's a cheap way to always get it, for any simplicial cycle. So let's start with the works. Cut it up and then partition it out. So how do I prove Poincaré duality? For a quotient of a polynomial ring, I mean — how would you prove that A(Σ) is a Poincaré duality algebra? Well, here's the thing. For a graded ring generated in degree 1, Poincaré duality is equivalent to saying that the socle is of dimension 1. The socle consists of all those elements that are annihilated by every element of positive degree: whenever you multiply with anything of degree at least 1, you get 0. So the socle of the algebra should be of dimension 1 — there is just a one-dimensional space of elements that get killed under any multiplication that is not just multiplication by a scalar. And this is kind of obvious if you think about it. In one direction it is obvious, because what does a Poincaré duality algebra give you? It says that every element of degree k, with k not the maximal degree, can be multiplied with some element of complementary degree so that I end up in degree d and I am non-trivial — because the pairing is perfect. That's what it is. For the other direction: if the socle is of dimension 1, then if I have some nonzero element x in degree k, I can multiply with some element y to get x·y nonzero — and because the ring is generated in degree 1, I can take y of degree 1 — so y·x is nonzero in degree k+1, and then y′·y·x in degree k+2, and so on until I am in degree d. This gives me the perfectness of the pairing. That's it. — For a commutative graded Artinian ring, isn't that just the Gorenstein condition, the socle being one-dimensional? — Yeah, that's just what I'm saying. Yes. Yes, it's trivial. But sometimes people don't realize this, because they're thinking about Poincaré duality for manifolds. You're right, it's obvious, but you have to say the obvious thing. The more obvious, the better. Okay. So what do we need for the ingredients — to prove Poincaré duality for A(Σ), with Σ a triangulated sphere of dimension d − 1? — Of dimension d minus 1? — Yeah, it should be dimension d − 1.
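For reference, a hedged restatement of the equivalence explained above, for a graded Artinian k-algebra A generated in degree 1 with top degree d (standard definitions; the notation Soc(A) is mine):
\[
\mathrm{Soc}(A) \;=\; \{\, a \in A \;:\; a \cdot A^{+} = 0 \,\},
\qquad
A \ \text{is a Poincar\'e duality algebra}
\;\Longleftrightarrow\;
\dim_{k} \mathrm{Soc}(A) \;=\; 1 ,
\]
in which case the socle is exactly the top graded piece A^{d}, spanned by the fundamental class.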
The dimension of the simplicial complex is always one lower, because the degree of the fundamental class— yeah, it's shifted from the topological convention. Yeah, we are not talking about the— right, so I have to— So what do I need? Well, I need two ingredients. (A): what I need is that for all k less than d, if I look at the degree-k component of my ring, I want to be able to pull back to the vertex stars. So I want to look at this map restricting to the stars of the vertices — let me explain; I don't think I introduced the star yet — and I want to say that this map, taken over all the vertices v of Σ, is injective. — What you said last time was the shelling stuff; that was for complexes coming from a boundary. — Yeah, yeah, that was only a special case of this. Now this is a triangulated sphere — and for me, from now on, a triangulated sphere will always be a k-homology sphere; I won't say it again. So this is much more general. — And the links? — Yeah, the links are again k-homology spheres — the link again has the k-homology of a sphere, of one dimension lower if v is zero-dimensional, a vertex, that's right. — And the star, the star is—? — Okay, yeah, I didn't say what the star is. The star of a face σ in a simplicial complex is defined as the set of those faces τ inside my complex with the property that τ ∪ σ is also inside the complex. — And τ is disjoint from σ? — No, no, no, that would be the link. — Okay, okay. So you want A^k on the right-hand side? — Yes, thank you; yes, you're right. — And is this the restriction map? — Yes; each individual map, if I ignored the direct sum, would just be a surjective restriction map. But now I take them all together, and then I want this to be injective. — And v is what? — v is in Σ, v is a vertex; v goes over the vertices, the zero-dimensional simplices of Σ. All right. And I want this map to be injective. — Does this follow from Poincaré duality? — We will see. Yeah, it follows from Poincaré duality, but we want to prove this in order to prove Poincaré duality, yeah? All right. Okay. And (B): well, okay, so (A) was the pullback; and then what we want to say is that I take the star of a vertex in Σ, sitting in degree k, and then I multiply with the corresponding variable. This lands in an ideal inside the ring of Σ, and I want this map to be injective as well. So these are the two properties. — One question: A(Σ) depends on the linear system. Is this true for all linear systems, or only generically? — Ah, okay. For now I just said what we want to prove. And this will be true — this is true for all linear systems of parameters θ; well, they have to be linear systems of parameters, so they have to reduce the Krull dimension to zero. Sure. Yeah. — And is the second property for one particular vertex v, or—? — For all vertices v, sorry, for all vertices v. — And the base field is arbitrary, so it's not necessarily—? — Okay. — So is it an easy map? — Yes. Yes, that's it, that's an easy map. But again, yeah, we won't go into the algebraic geometry.
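A hedged symbolic form of the two ingredients (A) and (B) just listed, with st_Σ(v) the star of the vertex v; the degree bookkeeping is my reading of the blackboard:
\[
\textbf{(A)}\qquad A^{k}(\Sigma) \;\longrightarrow\; \bigoplus_{v \in \Sigma^{(0)}} A^{k}\big(\mathrm{st}_{\Sigma}(v)\big) \ \ \text{is injective for all } k < d,
\]
\[
\textbf{(B)}\qquad \cdot\,x_{v}\colon\; A^{k}\big(\mathrm{st}_{\Sigma}(v)\big) \;\longrightarrow\; A^{k+1}(\Sigma) \ \ \text{is injective for every vertex } v .
\]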
That's right. So let me prove (B) for you first, because it's much easier. And the trick is — the trick for these kinds of things is always: look at it before you do the Artinian reduction, look at it before you take out θ. So I have K of the star of the vertex in Σ mapping to K[Σ]; here this is multiplication by x_v. And what is the cokernel? Well, that is the restriction map to K[Σ − v], going to 0. So before I do the Artinian reduction, it is more or less clear that this is a short exact sequence. And now what do I do? Well, Σ for me was a sphere, so Σ − v is therefore a homology disk; it is still Cohen–Macaulay. And now I mod out by θ. Because everything is Cohen–Macaulay, the linear system is a regular sequence for Σ − v, and then the sequence stays exact. θ is regular — well, not only regular here, but also regular there, because it's a homology disk — and then I can mod out, and I get the same exact sequence for the A's, for A of all of these. And then, in particular, I have this injection. That's it. — You get regularity of the first term by easy homology? — Yeah, yeah, this is just — yeah, relative homology. That's it. You take a triangulated sphere, remove a vertex, and what you have left is a disk. — Don't you need one of these to be relative? — No, no, no, because I multiply with x_v; I save myself the relative stuff. Okay, so that's the simple part. Now, for part (A), we will need what I call the partition complex. Once again, the trick is — and I will actually do this in the generality of, let's say, Δ a Cohen–Macaulay complex, Δ Cohen–Macaulay of dimension d − 1. — Sorry, when you were taking Σ minus v, you set x_v equal to 0? I mean, these are functions— — Yeah, yeah. — And for the star, you do the same, you just restrict? — Yeah, this is just the restriction; I quotient by x_v, that's right. So the trick is, once again, I look at the un-reduced version, K[Δ]. And then I map to the direct sum over the vertices v of Δ of the face ring of the star of the vertex in Δ. And then, next step — well, what would be more natural than to go to the edges next? The edges in Δ, the one-dimensional faces, and I take K of the star of the edge in Δ. And then I go on. — Do you have to add signs? — Yes, yes, you have a sign. If you think about how to choose the signs while being a little lazy: look at this in degree 0 — look at these rings restricted to polynomial degree 0. Then what I want is that in degree 0 this is naturally the Čech complex of the covering of Σ by the open stars of the vertices, the interiors of the stars. So in degree 0 I take just the Čech complex, with its natural choice of signs. — That depends on an ordering of the vertices. — Yeah, well, I can order my vertices and then assign the signs, that's right. And it turns out that if I give the signs this way, then this will naturally be exact in positive degree. So the degree-0 component is the Čech complex. Wait — which convention; I think Lukas will frown — I think it is this version. Or is it? I don't know, I don't remember. This direction, I think. The Čech complex.
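Stepping back, a hedged sketch of the short exact sequence used for ingredient (B) above, before and after the Artinian reduction (Σ − v denotes the deletion of the vertex; this is my rendering of the spoken argument):
\[
0 \;\longrightarrow\; k[\mathrm{st}_{\Sigma}(v)] \;\xrightarrow{\ \cdot\, x_{v}\ }\; k[\Sigma] \;\longrightarrow\; k[\Sigma - v] \;\longrightarrow\; 0 ,
\]
and since Σ − v is a homology disk, hence Cohen–Macaulay, θ is a regular sequence on all three terms, so the sequence stays exact after dividing by θ, giving the injection of A^{k}(\mathrm{st}_{\Sigma}(v)) into A^{k+1}(\Sigma) by multiplication with x_v.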
And in degree larger than 0, this is just exact. Let me call this the partition complex of Δ. There is no θ yet. — It's like an augmented Čech complex. — I don't know whether you want to call it augmented Čech. — In degree, you mean— — Degree meaning the polynomial degree here, right? These are graded rings. And then there is also the degree in the complex, the homological degree in which each term is concentrated — you have to choose a convention, like whether the first term sits in degree minus one or zero. — It's a homological degree. — Yeah, yeah, there are too many degrees, that's right. So there is a homological degree and a polynomial degree. You're right, thank you. — So the Čech complex calculates the reduced cohomology of Δ? — Yes. — So it has something in dimension d − 1? — Yes, yeah. And only in this — okay. That's my first complex. And then what I do is take the Koszul complex coming from θ. I take my θ — let me see where I have space — and this is probably one of the first things you ever do when you do homological algebra with regular sequences. — From Czech to Hungarian. — From the Czech language to the Hungarian language, yeah. Okay, this one I didn't know yet. Yes. And now we marry the two: we take P(Δ) tensor the Koszul complex. Now we have this double complex, and we take the total complex of it — consider the total complex. And now what happens? I have this double complex, which is just the direct sum of the i-th component in one direction with the j-th component in the other direction, where i + j is equal to a given constant k. And now, what do I have? In the direction of the complex P: in positive homogeneity, positive polynomial degree, this is just exact. So — I will bore many people probably — if this is the direction of P and this is the direction of the Koszul complex: in positive degree, P is exact. — There are three degrees now, because— — Yes, positive homogeneity degree; yes, yes, this is why we will not write this down, there are too many degrees. So: in positive homogeneity degree, P is exact, and we can push down until we end up in homogeneity degree zero. Similarly, in the Koszul direction we are exact because we are Cohen–Macaulay: Δ is Cohen–Macaulay, all the stars of vertices are Cohen–Macaulay, the stars of edges are Cohen–Macaulay — hence we are exact in the Koszul direction, and I can push to the boundary. The standard homological-algebra argument. What I get then is exactly the desired property: A^k(Δ) injects into the direct sum over the vertices v of Δ of A^k of the star of the vertex — because the homology vanishes; this will be exactly the homology coming from the Čech complex, and this vanishes in degree k less than d. For k less than d this is an injection. And now let me state the more general version.
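A hedged sketch of the partition complex and of the conclusion of the double-complex argument just given, with signs chosen as in the Čech complex of the cover by open vertex stars; the notation P(Δ) follows the talk, the rest is mine:
\[
\mathcal{P}(\Delta)\colon\quad 0 \;\to\; k[\Delta] \;\to\; \bigoplus_{v \in \Delta^{(0)}} k[\mathrm{st}_{\Delta}(v)] \;\to\; \bigoplus_{e \in \Delta^{(1)}} k[\mathrm{st}_{\Delta}(e)] \;\to\; \cdots
\]
exact in positive polynomial degree; tensoring with the Koszul complex of θ and pushing in the two directions of the double complex (the P-direction is exact in positive polynomial degree, the Koszul direction is exact by Cohen–Macaulayness of all the stars) yields, for k < d, the injectivity of A^{k}(\Delta) \to \bigoplus_{v} A^{k}(\mathrm{st}_{\Delta}(v)).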
So, if Δ is Buchsbaum — this is just saying that for all vertices v of Δ the star of the vertex is Cohen–Macaulay; maybe I also need purity, let me — if it's disconnected — okay, so: Δ pure, and for all vertices of Δ the star of the vertex is Cohen–Macaulay — then I can say something finer. Then what I get is: I can look at the kernel of this map, the kernel of A^k(Δ) to the direct sum over the vertices of A^k of the star of the vertex in Δ — let me just write it again — and this is isomorphic to the homology in degree k − 1, with k coefficients obviously, of Δ; and then, because of the Koszul complex — once I go to homogeneity degree 0, I will have some powers of it coming from the Koszul complex — it will be exactly the (d choose k)-th power of this vector space. This is the finer version. — Yes, yes, so this is a refinement. — Excuse me, so why are these homologies 0 in the case of the triangulated sphere — the top one? — Yeah, the top is not trivial, but I'm restricting to k less than d. — Ah, okay, okay. — Yeah, yeah, because — again, empty sets, yes. I know the empty set is kind of your thing, but yes. All right. Okay, so let me delete that one. And then — okay, what I can draw from that is the analogous version for manifolds. So if M is a closed orientable manifold — let's say a triangulated manifold of dimension d − 1 — well, you will see that A(M) will not be a Poincaré duality algebra. A(M) is not a PDA in general, but what I can do is take B(M) — thank you, yes, I said it but I didn't write it — B(M), which is the quotient of A(M) by this kernel here, by H^{k−1}(M) to the (d choose k), for all k less than d. If I do this for all k < d, then this really is a Poincaré duality algebra, all right, that's it. — It is an ideal because of the—? — Well, it's an ideal in a trivial way, because these elements here die under any multiplication of positive degree — this here, the homology, these are exactly the socle elements. You see that this actually applies in every degree, but I don't want to kill the top socle element; all the other ones I kill, I send to the graveyard. And in fact, what I prove is that the Lefschetz theorem holds for this more general Poincaré duality algebra. But we can do it even more generally — and this is now the end of the section "the works", and we go to "the cheats", which is the final part for today. So let me see, this one I can delete. Now, to the cheats. Let's say I take any simplicial complex Δ of dimension d − 1, and then I take μ in the degree-d component of A(Δ) — I take μ, and I actually want to take this as a quotient: μ is any one-dimensional quotient of the degree-d component. So it's a one-dimensional quotient. And in fact there is something nice here, and we will see this later: this degree-d component is always isomorphic to the (d − 1)-st cohomology of Δ.
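Stepping back, a hedged summary of the Buchsbaum refinement and of the manifold construction just described, with coefficients in k and (co)homology in degree k − 1 as in the talk; the identification of the kernel is the one stated, the notation B(M) follows the talk:
\[
\ker\Big( A^{k}(\Delta) \to \bigoplus_{v} A^{k}(\mathrm{st}_{\Delta}(v)) \Big) \;\cong\; H^{k-1}(\Delta; k)^{\binom{d}{k}} \quad (k < d),
\qquad
B(M) \;=\; A(M) \Big/ \bigoplus_{k < d} H^{k-1}(M; k)^{\binom{d}{k}},
\]
and for a closed orientable triangulated (d − 1)-manifold M this quotient B(M) is a Poincaré duality algebra, the quotiented classes being socle elements below the top degree.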
So regardless of the manifold property — we will see this, with k coefficients. And then what I can do is always generate a Poincaré duality algebra from A(Δ) which has this μ as its fundamental class. So B(μ) is a maximal quotient — sorry, a minimal quotient — of A(Δ) that maps to it non-trivially, such that B(μ) in degree d is equal to μ, isomorphic to μ. So I basically force a Poincaré duality algebra by kicking out anything that does not pair to μ. — So you quotient by the kernel of the pairing, the degenerate elements, the ideal of them? — Yeah, yeah, exactly, exactly. — And "minimal" — does it mean the largest ideal? You take all the quotients which give this μ in degree d, and you take the smallest one — is it unique? — Yeah, think of it like this: I look at the pairing, right? And whenever an element pairs to zero — I pair any element in degree k against degree d, and if it goes to zero under this quotient, I kick it out. That's my ideal; that's clearly an ideal. — So it really is minimal? — Yes, minimal in the sense that it's really minimal — it is actually the smallest such quotient, okay. That's the proof, because you can describe it explicitly like what you just said. And so this is actually a nice object. It only depends on μ: you can add faces to Δ that are not in the support of μ, and the algebra will be the same. Here's a nice way of thinking about the degree map in this context — and this will be the end for today; let me see — okay, let me finish here. All right. I can look at the dual to μ in the homology of my complex: this is just a simplicial cycle, really a simplicial homology cycle. And then what does this give me, this simplicial homology cycle? Implicitly I have the pairing, but the nice thing here is that I can write down the degree map — the pairing — explicitly in terms of this μ. So the degree of a monomial x_τ, where τ is a simplex of cardinality d, is just μ_τ divided by the determinant of the matrix — I look at the minor of θ corresponding to τ. And here I'm explicitly orienting: I have an order on the vertices, I take the oriented coefficient, and I compute the determinant of this matrix. — What is x_τ? — It's the monomial corresponding to τ: x_τ is equal to the product of the x_v with v in τ. — And the degree means that you look at it in this quotient and take the coefficient? — Yes. Okay, and this is equal to the oriented coefficient of the face τ in this homology cycle μ, and I divide it by the determinant of the minor of θ corresponding to τ, again oriented. So I implicitly have an order on the vertices to make this work, yeah? — The coefficient of τ in μ — and it is not zero? — Well, it might be zero: if there is a face of Δ that is not supported in the simplicial cycle, then it will be zero, but then the degree will just be zero on that face. — But what do you assume now — that Δ is a simplicial complex?
Δ is a simplicial complex, and μ is a homology cycle. — And μ is a homology cycle of pure top dimension, the dual cycle? — Yeah, yeah, yeah. But I mean, I could add faces that don't lie in the cycle, and they just don't appear in the ring. That's right. So this is the most general version. And then, of course, you can start with a homology cycle — any cycle that you want — and you can look at the algebras B_{μ,θ}. — And A^d is related to co— — A^d is always isomorphic to the cohomology in dimension d − 1 of the complex. And there is also a dual model, which we will encounter soon, for the homology, that works out the same: this is exactly the Ishida complex. Okay. Now you can fix a simplicial homology cycle μ, and you can look at all these algebras B_{μ,θ}; in particular, you can look at a generic one, and a generic element of it. So: fix μ, and take θ generic — generic in terms of the possible θ's you could take — and this will satisfy the hard Lefschetz theorem. That is the most general version of the Lefschetz theorem that we have in this context. Hard Lefschetz. — So you take generic θ and so on? — Yes, yes. The generic here is — and a generic ℓ in B^1. — You fix μ, and then what is θ? — Well, θ is just — I mean, you have choices for θ, right? You have as many choices as — it depends on — you will see, it's linear. No, no, no: you fix μ, fix a simplicial homology cycle μ, and now you have many, many algebras coming from modding out different systems θ. Then you take θ generic enough. Yes, and this will satisfy the hard Lefschetz property with respect to a generic element ℓ in B^1. So the pair (θ, ℓ) is generic. Yes. — And generic over which field — over any field, or—? — Well, over any infinite field. — Any infinite field. Okay, so generic in the sense of a Zariski-open dense set? — The generic point is even enough: you can extend the field, and this satisfies hard Lefschetz. This is the theorem. — Yes, that's the theorem, yes. I won't write it down again now, because we are already over time; we will repeat it next time. — I understand. Okay, so — one has to digest it, and also it's kind of — I'm not usually thinking about — I know that in toric geometry — you said something that August once told me about this. — I don't know what August told you, but okay, we can try. But Ishida — the name — the Ishida complex. Yeah, there is a way to think about it; we will go over this next time. — Yeah, there is a question in the chat. — Okay. — Δ is a 2-complex in S^4; does the PL embedding have to be locally unknotted to extend to a triangulation? — No. If you look at the reference of Bing, then any — it doesn't even matter; not even the dimension condition is relevant. If you have any PL embedding of a simplicial complex into the sphere, then you can extend it to a triangulation — or a PL triangulation, even. It does not depend on unknottedness. But you can write me an email, and if you don't remember the reference, I will send it to you again.
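Stepping back to the degree map described above, a hedged symbolic version: τ is a cardinality-d face, μ_τ the oriented coefficient of τ in the simplicial cycle μ, and θ_τ the d × d minor of the matrix of the linear system θ on the columns indexed by τ (orientation conventions suppressed; the layout of the formula is mine):
\[
\deg(x_{\tau}) \;=\; \frac{\mu_{\tau}}{\det\big(\theta_{\tau}\big)},
\qquad
x_{\tau} \;=\; \prod_{v \in \tau} x_{v},
\]
and, as noted in the discussion, deg vanishes on the faces of Δ that are not supported on the cycle μ.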
First, note that if you have a sufficiently fine triangulation, then you can extend. And then you want to do it without refining the triangulation — that's the subtle part. — So the idea is that you have to join with something from the outside. — Yes, that's exactly the idea. You shield it off. Yes, that's right. You basically look at the links of vertices and say: by induction I can do it at this vertex; then locally you build a neighborhood around it, and outside of this neighborhood you can have whatever refinement. That's the idea. Yeah. Okay. Thank you. Wow. Thank you. Thank you.
|
Lefschetz, Hodge and combinatorics: an account of a fruitful cross-pollination
Almost 40 years ago, Stanley noticed that some of the deep theorems of algebraic geometry have powerful combinatorial applications. Among other things, he used the hard Lefschetz theorem to rederive Dynkin's theorem, and to characterize the face numbers of simplicial polytopes. Since then, several more deep combinatorial and geometric problems were discovered to be related to theorems surrounding the Lefschetz theorem. One first constructs a ring metaphorically modelling the combinatorial problem at hand, often modelled on constructions for toric varieties, and then tries to derive the combinatorial result using deep results in algebraic geometry. For instance:
- a Lefschetz-type property implies that a simplicial complex PL-embedded in R^{4} cannot have more triangles than four times the number of its edges (Kalai/A.);
- a Hodge-Riemann-type property implies the log-concavity of the coefficients of the chromatic polynomial (Huh);
- a decomposition-type property implies the positivity of the Kazhdan-Lusztig polynomial (Elias-Williamson).
At this point one can hope that algebraic geometry indeed provides the answer, which is often only the case in very special situations, when there is a sufficiently nice variety behind the metaphor. It is at this point that purely combinatorial techniques can be attempted to prove the desired statements. This is the modern approach to the problem, and I will discuss the two main approaches used in this area: first, an idea of Peter McMullen, based on local modifications of the ring and control of the signature of the intersection form; second, an approach based on a theorem of Hall, using the observation that spaces of low-rank linear maps are of special form.
|
10.5446/53457 (DOI)
|
Thank you very much for the invitation. I hope that he will become more and more active and younger and younger. I respect him like my father, and I am thankful to Luc very much. Today — I am one of the founders of logarithmic geometry — I talk about logarithmic geometry and logarithmic abelian varieties. I use the document camera; I myself cannot see it so well, but I hope you can see this. A scheme is a space with a sheaf of rings, and a log scheme has a sheaf of rings and a sheaf of monoids. The theory of schemes was started by the great teacher of Luc, Grothendieck; the theory of log schemes was started, and Luc was very much excited, and probably he was happy that he was making a new world of algebraic geometry like his teacher did. Luc and I discussed and shared many dreams about logarithmic geometry, and I think the log abelian variety was already in those dreams from the early days of log geometry. I am happy that today I can talk about logarithmic abelian varieties, after 30 years. Here is a nice textbook on logarithmic geometry written by Ogus. In the first lines of this book it is written that logarithmic geometry was developed to deal with two fundamental and related problems in algebraic geometry: one is compactification, the other is degeneration. How important compactification is and how important degeneration is — there are two famous golden sayings, famous observations. For compactification, the golden saying by Angelo Vistoli: working with non-complete moduli spaces is like trying to keep your change in a pocket with holes. This golden saying is often quoted by Abramovich in his talks. For degeneration, there is the keen observation of Takeshi Saito: in the story of Beauty and the Beast, the ugly Beast became a nice man by the love of the girl. The degenerate objects, the ugly objects in algebraic geometry, become nice objects by the power of log; the power of log and the power of the love of the girl are similar. And then there is the mysterious coincidence of the letters, l-o-g and l-o-v-e — which came first? In the case of moduli spaces, these two things are connected in this way: the moduli space is a nice object in the usual geometry, but it is not compact; the boundary can be ugly in the usual geometry, but it becomes a nice object in the log geometry. The moduli space of abelian varieties is a typical example. Today I give an account of the joint work with Kajiwara and Nakayama, in which the toroidal compactification is understood as the space of log abelian varieties with PEL structure. Usually the degenerate abelian varieties have no group structure and are rather ugly, but they are nice here in the log geometry, and they have a group structure. Here PEL means polarization, endomorphism ring, and level structure. So the endomorphism ring is fixed, or something like that — a ring of endomorphisms preserving the structure of the abelian variety. Now we can have a ring, because two endomorphisms can be added, because log abelian varieties have a group structure. Without a group structure it is very hard to consider the endomorphisms of a degenerate abelian variety, but we have such a nice object in the logarithmic world. So the formulation becomes very natural; it becomes very transparent and nice with log abelian varieties. At present our work is only a reinterpretation of the work of Lan, but I hope that in the future log abelian varieties will have good applications, because the theory becomes so transparent.
And then, for the case without the endomorphism structure, the theory was made by Faltings and Chai in 1990. And I also want to say that Fujiwara had some nice work on the case of PEL-type structure for abelian varieties, for the toroidal compactification, but unfortunately this work was unpublished, around 1990. Let me just introduce the rough idea of log abelian varieties. A log abelian variety over an fs log scheme S is a contravariant functor from the category of fs log schemes over S to the category of abelian groups — so it naturally has a group structure. And if K is a complete discrete valuation field and A_K is an abelian variety over K with semi-stable reduction, then A_K extends to a log abelian variety over O_K. Here O_K is endowed with the standard log structure, and A_K extends to a log abelian variety over O_K uniquely. How to treat semi-stable reduction nicely was the starting point, the motivation, of the creation of log geometry. In the usual algebraic geometry, in the theory of abelian varieties, A_K extends to a proper scheme with semi-stable reduction, but it doesn't have the group structure — the group structure cannot extend to such schemes — whereas the log abelian variety has a group structure. For example, if A_K is a Tate elliptic curve with q-invariant q, then if we restrict the functor A to the fs log schemes over O_K/m_K^n — over this Artinian quotient of O_K — this restriction can be written in a very simple way: it is the quotient of G_m^log(q) divided by the powers of q, by q^Z. Here q is the q-invariant. This presentation is very similar to the analytic presentation of the Tate elliptic curve as a quotient of the multiplicative group by the powers of q in the analytic geometry over the field K; but here it can be treated in an algebraic way, not an analytic way, in this log geometry. And G_m^log(q) is — no, no, I made a mistake, sorry — G_m^log(q) is the part of G_m^log consisting of sections t such that q^a divided by t and t divided by q^b lie in the log structure, for some a and b. And then we divide this part of the multiplicative group by the powers of q, by q^Z. This is very similar to the presentation of the Tate elliptic curve as the multiplicative group divided by the powers of q in the analytic geometry over K. Here q is an element of m_K minus 0, and it is regarded as a section of the log structure M of Spec O_K, the standard log structure. For a log structure, the sheaf of monoids M is regarded as an extension of the unit group — we have a bigger unit group; it is a monoid, but this monoid is embedded in the group associated with the monoid, and we also have a multiplicative map from M_T to O_T. And then, for these schemes over O_K modulo m_K^n, q is nilpotent — and if n is 1, then q is 0 here. But q is put in the log structure, and then q becomes invertible. That is, we are making a nilpotent element invertible here, and in the case n is 1 we are making 0 invertible. So we can make 0 invertible in log geometry — log geometry is such a theory, to make 0 invertible. And by such a thing we can have the group structure on this degeneration of the abelian variety. Now I put one comment: a log abelian variety is a log algebraic space in the second sense.
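A hedged symbolic sketch of the presentation just described; the notation G_m^{log}(q) and the way the conditions are written are my reading of the spoken description, not a verbatim formula from the talk:
\[
A\big|_{\text{fs log schemes over } O_K/\mathfrak m_K^{\,n}} \;\cong\; G_{m}^{\log}(q)\,/\,q^{\mathbb Z},
\qquad
G_{m}^{\log}(q)(T) \;=\; \{\, t \in \Gamma(T, M_T^{\mathrm{gp}}) \;:\; q^{a}/t,\ t/q^{b} \in M_T \ \text{for some } a, b \ge 0 \,\},
\]
in analogy with the rigid-analytic uniformization of the Tate curve as \(\mathbb G_m / q^{\mathbb Z}\) over K.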
A usual algebraic space is something which is covered by a usual scheme by an étale morphism. The morphism here is relatively étale: X is regarded as a functor, the morphism is representable, represented by étale morphisms, and it is also representable and surjective — you have such a surjection. That is an algebraic space: something covered by a representable object by an étale morphism. And a log algebraic space in the second sense is similar: a functor covered by a representable object by log étale morphisms — so the étale morphisms are replaced by log étale morphisms; a log abelian variety is such an object. A log algebraic space in the first sense is a functor X covered by a representable object by classical étale morphisms, and this is the same as a usual algebraic space endowed with an fs log structure. So there are two notions, the first sense and the second sense, and log abelian varieties are log algebraic spaces in the second sense. Yeah. Now, I won't explain more about log abelian varieties — if I explain too much, I will lose time — so I will go to the moduli problem with PEL structure: polarization, endomorphism and level structure. In such a theory we fix a semi-simple algebra B over Q with an involution: its square is the identity and it reverses the order of the multiplication. — When you say log étale, do you mean Kummer log étale, or the general, more general log étale? — Yeah, yeah, I think so, yeah. — I think we also have another question, from Bruno. — Sorry. — We have a second question. — It was a mistake, I apologize. — So then, the moduli — I don't explain everything, because there are too many things to write down; I explain only the main part — we fix this datum satisfying certain conditions, including a skew-symmetric non-degenerate bilinear form on V, and then the group of similitudes for this pair, which is a reductive algebraic group G over Q. And then — I don't explain the conditions — under some condition we have a Shimura datum for this algebraic group, we have the associated reflex field, and then we can start the moduli. But this is the usual setup in the theory of PEL structures; this part is nothing new. Then, following the usual story, we fix the integral structure: O_B, an order in B, a subring stable under this star, and V_Z, an integral version of V, a Z-lattice in V stable under O_B, satisfying this condition. And then we fix a set □ of prime numbers — from now on I follow the notation of Lan in his work. This □ is a set of prime numbers without bad primes; a bad prime is a prime p for which the pairing on this integral structure is not perfect at p. Such a prime is a bad prime — and maybe we also remove some more bad primes. So then the notation: the localization Z_(□) of Z at □ is the localization of Z obtained by inverting all prime numbers outside □, and we also have the localization of the integer ring of the reflex field at this □.
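For orientation, a hedged sketch of the PEL datum being fixed here, written in standard notation; the compatibility between the form and the involution, and the symbols themselves, are my additions in the usual style, since the speaker is deliberately omitting the precise conditions:
\[
(B, *), \qquad V \ \text{a finite } B\text{-module}, \qquad \langle\,\cdot,\cdot\,\rangle \colon V \times V \to \mathbb Q \ \text{alternating, non-degenerate}, \qquad \langle b x, y \rangle = \langle x, b^{*} y \rangle,
\]
\[
G(R) \;=\; \{\, g \in \mathrm{GL}_{B \otimes R}(V \otimes R) \;:\; \langle g x, g y \rangle = \nu(g)\,\langle x, y \rangle \ \text{for some } \nu(g) \in R^{\times} \,\},
\]
together with the integral data: a ∗-stable order O_B ⊂ B, an O_B-stable lattice V_Z ⊂ V, and later a neat open compact subgroup H ⊂ G(Ẑ^□).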
And Ẑ^□ means the product of the Z_ℓ over all prime numbers ℓ which do not belong to □. This is the usual notation for people in the study of Shimura varieties: when we put the set of primes in the upper place, we are removing those primes; if we put the set of primes in the lower place, then we are keeping them. And we fix an open compact subgroup H in G(Ẑ^□) — by this integral structure we can define the Ẑ^□-points of G — and we assume that H is neat. I don't explain the meaning of neat here; by this neatness condition we have a nice moduli space. Yeah. So then, our moduli functors. — Is it until 20 past? I forgot the end time of this talk. Until 20? Okay. — So now we can define the moduli functors; we consider three moduli functors. The first one is just the moduli functor for abelian varieties, and the second and third are moduli functors for log abelian varieties. They are all functors from the category of fs log schemes over this localization of the integer ring of the reflex field to the category of sets. So here we first define F and F̄; F̄_Σ comes later. This Σ will give the toroidal compactification, and I will have to explain Σ, the cone decompositions; but F and F̄ can be defined first, in a very simple way. That is, we can use the same presentation of the definition for F and F̄, just replacing abelian schemes by log abelian varieties — because logarithmic abelian varieties have a group structure, we can define the endomorphisms and the level structure in this simple way. So F̄(S) consists of tuples (A, ι, a level structure, p): ι gives the endomorphisms, then there is a level structure, and p is a polarization; all these things can be defined nicely by using the group structure. Here A is an abelian scheme over S in the case of F, and a log abelian variety over S in the case of F̄; and ι is a ring homomorphism from O_B to End(A). Here we are using the group structure of A in the functor, because endomorphism means homomorphism for the group structure. Then the level structure: it is an isomorphism between the standard object V tensor Ẑ^□ and the prime-to-□ part of the Tate module, T^□(A). The Tate module T^□ is the inverse limit of the kernels of multiplication by n on A, where n ranges over all integers prime to □. So here we are using the group structure and the multiplication by n on the log abelian variety, just like in the case of abelian varieties. And the level structure is this isomorphism modulo H: H is acting here, and so H is acting here.
Then, if two isomorphisms are connected by H, we regard them as the same element, the same level structure. And this is an isomorphism on the pro-étale side — rather, the pro-log-étale side. Finally, p is a polarization: it is a homomorphism from A to the Ext sheaf of A and G_m^log. This is also defined by using the group structure. I don't give the precise definition of the polarization; it is such a homomorphism satisfying some condition. And then these three things — ι, the level structure and p — should satisfy some compatibility. Yeah. So this already defines F̄. The definition of F is well known — abelian varieties with PEL structure — and here the log abelian varieties with PEL structure are defined in exactly the same style, so the definition is very simple. Then, actually, one of our main results is that — I will explain the Σ part of F̄_Σ, but if we forget the Σ part for a moment — F̄_Σ coincides with the toroidal compactification of Lan; and F̄ is bigger as a functor, and it is a log algebraic space in the second sense. The fact that it is a log algebraic space in the second sense is one of the main points here. So then, the remaining part — I still have a good amount of time, but the story is almost finished, because the definitions are so simple; still, for the Σ part I need a somewhat more complicated story, so this part will need time. Roughly speaking, the Σ part of F̄ consists of the elements of F̄ such that, at every point s of S, the local monodromy of A at s belongs to Σ. I hope to explain this more precisely — I am afraid that otherwise my talk would finish too soon, because the story is so simple; but the story of the Σ part is not so simple. Yeah. So, what is the local monodromy? We are now discussing the type of the local monodromy, and the local monodromy of a log abelian variety is as follows. Usually an abelian variety doesn't have local monodromy; this local monodromy appears only for log abelian varieties. For a point s of S, we consider the separable closure s̄ of the point s, endowed with the inverse image of the log structure of the base. So we consider the log abelian variety over s̄, to think about the local monodromy of A over s. Then, for a log abelian variety over the spectrum of a separably closed field with an fs log structure, the Tate module — the prime-to-□ part of the Tate module — has a filtration, the weight filtration W_0 ⊃ W_{-1} ⊃ W_{-2}, with W_{-3} = 0, such that: the quotient of W_0 by W_{-1}, the graded piece gr^W_0, has an integral structure, and gr^W_{-2} also has an integral structure; gr^W_0 is the tensor product of this integral structure, a finitely generated abelian group, with Ẑ^□, and gr^W_{-2} is similar.
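A hedged sketch of the weight filtration just described, for a log abelian variety A over a log point s̄ = Spec of a separably closed field; the letters X and Y for the integral structures are my own labels, and Tate twists are suppressed as in the talk:
\[
T^{\square}(A) = W_0 \supset W_{-1} \supset W_{-2} \supset W_{-3} = 0,
\qquad
\mathrm{gr}^{W}_{0} \cong X \otimes \hat{\mathbb Z}^{\square},
\qquad
\mathrm{gr}^{W}_{-2} \cong Y \otimes \hat{\mathbb Z}^{\square},
\]
with X and Y finitely generated abelian groups; for the Tate curve with q-invariant q one has X ≅ q^{\mathbb Z} ≅ \mathbb Z and Y ≅ \mathbb Z, as recalled below.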
And then, for an element σ of π_1^log(s̄), the log fundamental group of s̄, which acts on the Tate module, we have: (σ − 1)^2 = 0. And the action of σ − 1 factors in this way: it goes through gr^W_0 by the canonical projection, and then it goes to gr^W_{-2}, and gr^W_{-2} is embedded in the Tate module — so the map goes like this. Furthermore, π_1^log(s̄) is a completion of an integral structure: if the field of s is of characteristic 0, it is the profinite completion of a finitely generated abelian group, and if the characteristic of the field of s is p, then it is the profinite completion for the non-p part, in the case of characteristic p. So there is a dense subgroup which is a finitely generated abelian group, and if we use this, then the action of σ − 1 respects these integral structures. Such a thing happens in the degenerate situation, for the log abelian variety. In the case where K is a complete discrete valuation field and, as before, A_K is an abelian variety over K with semi-stable reduction, with A its unique extension to a log abelian variety over O_K: if s is the closed point and s̄ is a separable closure of the closed point, then this log fundamental group π_1^log(s̄) is the Galois group of the maximal tame extension of K over the maximal unramified extension of K. Such a story is very well known for abelian varieties with semi-stable reduction over a complete discrete valuation field, and the Tate module can be taken for A_K or for A — it is essentially the same; they have the same Tate module. For example, if A is a Tate curve with q-invariant q — here we are considering the special fiber of the log abelian variety, and its Tate module is the same as the Tate module of the generic fiber — then, with the special fiber written as before, gr^W_0 is the group of powers q^Z, which is isomorphic to Z, and gr^W_{-2} is also isomorphic to Z; the integral structures are understood in this way. Yeah. And I am maybe omitting — I am sorry, something like this is actually necessary here, but I am omitting it to make the description simpler; we have to fix some — yeah, I am omitting that. And then, for the story of Σ: we fix Σ, a family of cone decompositions. Such a thing exists, as follows — this is written in the work of Lan; but it comes from the old work of Mumford and other people on the toroidal compactification, and they considered cone decompositions over the complex number field. And they consider the cone decompositions.
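Stepping back, a hedged symbolic version of the monodromy description above, with X and Y the integral structures of gr^W_0 and gr^W_{-2} as labelled earlier (my notation):
\[
(\sigma - 1)^{2} = 0,
\qquad
\sigma - 1 \colon\; T^{\square}(A) \twoheadrightarrow \mathrm{gr}^{W}_{0} \xrightarrow{\ \ } \mathrm{gr}^{W}_{-2} \hookrightarrow T^{\square}(A),
\qquad \sigma \in \pi_1^{\log}(\bar s),
\]
and in the classical situation of a complete discrete valuation field, \(\pi_1^{\log}(\bar s) \cong \mathrm{Gal}(K^{\mathrm{tame}}/K^{\mathrm{unr}})\), so this recovers the usual picture for semistable abelian varieties where the monodromy operator squares to zero.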
But in the work of Lang, because he did not update module, in the not have the group structure of the designate of a new variety, so the combination becomes becomes a little complicated, more complicated. Here, the combination becomes simpler. Essentially the same as, as Fat Lang did. Yeah. Condi compositions and W ranges over, I naturally generated BZ module and, and he is a surjective homomorphism of this standard one to two to W after tensor Z hat square. This is a surjective BZ module. Such that the, I don't explain this part so well the kernel, kernel, the annihilator of the, of the kernel for this side kills each, each seven such conditions should be put. And then, then, and then for each such a pair of W and D, the, the condi, we should have the condi composition of the, the space of positive semi definite symmetric by linear forms W tensor W tensor to R with rational kernels are such positive semi, semi, semi, semi, semi, dynamic partner when it comes to symmetry before moreข value belongs to甜 bolt P dot directly to elsewhere in this measurement Everything becomes in T Our standard in this measurement is gonna we are not going to have to use the figures that is this condi-composition for W and GH, H in the open compact subgroup, H is the same of the condi-composition for W and G. And for a subjective homomorphism from W to another W prime, then the condi-composition given for W should be related to the condi-composition for W in some simple way. Yeah. And then the factor F bar, F bar, F bar sigma, S, is the, is the such a yota, ah, sorry, I forgot. This is in F bar S. So vice. And then the conditions are that, for all S point S for the special fiber for the restriction of A to S bar, the separable closure of S. And there exists for each point, there is a sigma, and then here the, the, the, the, you have a map introduced by yota, yota, ah, yeah. So other, other, other, other, other, other, other, other, other, the, the level structure is the, is the isomorphism from, from this to, to, to, to, to R log t, t, t, r, e, t t square e with, with this. This is the level structure. And then, then we can have, we have a, a map to grew W, zero. And it is a grew W, zero. z times z hat. So then we have a map g for this integral object and then we are using this group as w here. So then we have a composition associated on the space of the space of semi-definite in our forms on this space and then the condition is that for each point s, we have some element of cone on the index sigma such that for all a in this sub-space of, so we have now monoid inside the integral structure of the log fundamental group. And then we have a prime from group for a here, we have a prime from group w0 to group minus 2. And then the condition is that we then we have by this a prime associated to a minus action of a minus 1, then it preserves the z structure and then we have a map from group 0w times group 0z to group 0z times group w minus 2z by this prime. And then we have a pairing to z by this polarization. Yeah, polarization gives such a map and then this belongs to this. The condition is that this map is a Pyrenean form on this space and this condition is that this map for all a, this map should belong to sigma. So that is we have a map from this monoid to this sigma by in such a way. So this is the definition of the sigma part and the main theorem is that this f bar, so we have the f bar sigma and f bar and f is a usual monoid fangota for a Pyrenean varieties. 
So this f is actually the other is a monoid space of a Pyrenean varieties is PEL type and this is a classical object. And this f bar sigma is log algebraic space. So f bar is a log algebraic space in the second sense. And if sigma is a log algebraic space in the first sense, it is an algebraic space with an Fs log structure. And this algebraic space is actually the toroidal compactification of run. And like run did sometime we can, we can have also not only algebraic space, but a projective, projective modular space, representable modular compactification. Such thing can be proved, but it is already proved. It is proved by run and such thing is, yeah. But the main good thing about the proof is, the main thing is that the definition becomes so simple at least for f bar, the definition is perfectly the same as the case of usual Aperian varieties. And the sigma part is a little bit complicated, but we can still use the group structure. And the search, yeah. So a group structure makes all things simpler. And the proof of this main theorem is also becomes easy. And using the algorithm criterion or log algorithm criterion. And for example, the case, so from the partings try to run without endomorphism ring to the case with endomorphism ring was a hard process. And that was a very, very hard process. But this becomes very simple, because this is by just we can prove the space that the home space from between log Aperian varieties A and A prime is representative, representable over the base. And so then if by using this is proved by some arching criteria. And then this is relatively, so if we take as the base the modular space of without endomorphism as the base and A and A prime, the universal objects over it, then such thing can give the representative such theorem. And so, because, because we use that this is a space then this case becomes algebraic space. This becomes, if we know that this is algebraic space, then we can deduce that this to the PL case is also algebraic space because it is relatively the home space is one is relatively the home space over the, over the, over the case because q, because we are, because we are consigned, endomorphism ring here. So, so just, just this is relatively home space over this, and the home space is representable so, so we can deduce the representability of this from this. So, from the type to run is becomes simpler. Yeah, so, so I had to present my this method gives only a new interpretation of the theory of longer, but I hope that in the future, the, the, this theory of longer varieties will be useful in the theory of similar varieties and some other other theories. Yeah, I, I have still several minutes but I finished the, the stories. So I finish here. Yeah, yeah. Thank you very much. Thank you very much. So, we will take a few questions. So is there any question in the. So, maybe Sophie. Can you see the, can you see the stratification of the toroidal classification in a simple way. Oh, thank you. How, how, how is this, this W. Yeah, yeah, so I, yeah, yeah, yeah, so I, that is like, yes, so I, that is, so if you consider the local Archinian case, then the log of the varieties is like, like this, so this is, this is generalized to some G, G log. Y, y, y, y, y are, for y is only G to ZN and such he is a semi-Abelian scheme and so over Archinian based, we have always over, in the case of, Yeah, yeah, yeah. So the type of the log-Aberian varieties is like, is log-Aberian varieties over such log point is Warlog, Arachnian, Arachnian. 
This is classified by using the dimension of this Arbillian part, Arbillian skin part and this Taurus part and this rank Y and such rank this is, I think, and T is, I think, GM to R and these are Arbillian skin. And such thing appears if we consider semi-stabilization over complete risk variation. So such R and such dimension of display becomes the, gives the stratification of the proidal compatibility, I think. Yeah, yeah. Thank you. So Luke has a question and you have two questions from the chat. So maybe your question is, we finish with your question. So we take two questions from the chat. So I have a question about, in which category do you take your X in your, for your slides when you define your F, I think F sigma, you take some text. In the pluralization, this is a category of sheaves of, or sheaves of Arbillian groups on the, on the, on the, on the, if yes, if yes, if we consider log-Aberian varieties over S and, and this sheave means a, a tartopology in the classical sense. Yeah, yeah, yeah, yeah. There's another question that, okay, I don't understand what I am reading. So do you expect to generalize from log-Aberian varieties with the best structures to a notion of log-Aberian varieties with odd cycles and give a reformulated moduli function of the real commodification of the general structure varieties in the sense of, there are similar varieties of, of Arbillian type or some general, more general, similar varieties than the similar varieties for PEA type and, and your question is about it, the possibility of the generalization to, to more general similar varieties. Yeah, actually it's not my question, the question is in the chat, but you can, you can answer the question you think too, yeah? Yeah, yeah, yeah. So, yeah, we have not yet considered such a problem and, and so, so there are some interesting things which should be considered by using log-Aberian varieties, maybe a level structures for PEA not invertible, for N level structure for N not invertible. Oh yeah, but this is already difficult for Arbillian varieties, but, yeah, yeah, yeah. In the such more general similar varieties should be considered related, the similar varieties related to Arbillian varieties, but, but not, not included in the, in the similar varieties of PEA type, so maybe it is nice to think about them, yeah, yeah, yeah, yeah, yeah, we have not yet considered such a thing. Thank you. So, Luke has a question? Yes, some time ago I think it's Nakayama Chikara who asked me the following question, suppose you're over a CDVR and you have a commutative flat group scheme, locally of an ant type, over this complete DVR, and suppose that on the, on the special fiber, it is similar billion, then is it similar billion everywhere? Ah, the question, and the answer I, so at the time it seems that you didn't know the answer, and then Erstedrik Petmar would have found some, some arguments for that, so showing roughly that if you have something really potent in the general fiber, then taking some, some closure, then you get something also unique potent in the special fiber, so we'll come back to that. And I wonder whether this had something to do with your work and the criterion, the acting criterion you mentioned at the end? Ah, so I hope Nakayama knows this well, I hope in these days Nakayama is much cleverer than me, and so Nakayama, but is it in the, in the, in the, in the, in this work or not? Is it in different direction? Is this my useful for, what is my useful or what you discussed or not? 
Is it different thing? Ah, I am sorry, I, I cannot catch what you explained so well, so, so maybe we, I will ask Nakayama. Because the seemingly obedient thing is stable and originalization, so that was the property, is an open property somehow, almost. So, so yeah, I could not follow this thing so, how do you are saying so well, I, I hope that you, you explained this later slowly, because I have a second question, so maybe a much older question, so now you have these log-in billion schemes, so in the log, the group structure, so is there a way to better or better understand, you understand in a simpler way, gotentic monotony pairing in SGS 7, 9? Ah, yeah, yeah, in the SGS 9 is confronted prime by prime and I think there, so of course you only have the, the Nehromodern, you don't have this, this, you have any, I think it should come out from what you, yeah, about it, about that conjecture, the my student, Suzukuma, I was kidding Takashi, yes. Takashi, Suzuki, yes. Well, he's a good expert but I, I am, I am not the good expert about it and so, ah, yeah, I'm sorry that, that Suzuki is much better than me. You have the third question, so, and the last one, so very long ago you wrote a beautiful print, unpublished and maybe you're finished, maybe incomplete, so log, log due to the NACRI, so here, log due to the NACRI for these long and billion schemes. Oh, yeah, yeah, yeah, that should be, I hope to, to complete it soon, because here are the two points of the Abelian varieties, log Abelian varieties are so important and, and I hope that log crystalline theory of crystalline convolution of log Abelian varieties are important and, and the two points are important and, and then such things should be, should be considered well, yeah, yeah, so I, I have, I have, I hope to complete that paper soon, yeah, thank you very much, thank you very much. Okay, so I think after you've had the question, so after you can ask your question, if you unmute your microphone. Ah, what was my question? Oh, yeah, so, um, yeah, the question was, what's the nature of the embedding of this F bar sigma in F bar, is it the closed immersion and open immersion? Ah, that is, that is like a log smooth, or, or some, or some, brewing up, so that it is inclusion, but in the if we consider it up, yes, yes, yes, okay, in the category of schemes then it is like a brewing up, yeah, yeah, yeah, yeah, yeah, yeah, yeah. Okay, thank you very much. So I think we have no other questions. So let's thank the speaker again.
|
This is a joint work with T. Kajiwara and C. Nakayama. Logarithmic abelian varieties are degenerate abelian varieties which live in the world of log geometry of Fontaine-Illusie. They have group structures which do not exist in the usual algebraic geometry. By using the group structures, we give a new formulation of the work of K-W Lan on the toroidal compactification of the moduli space of abelian varieties with PEL structures.
|
10.5446/53458 (DOI)
|
It's a honor and pleasure to give a talk at this conference. So thank you very much for inviting me. And yes, my talk will be about logarithmic aspects of resolution and singularities, both the answer is resolution and plasticity. Okay, so yeah, maybe I should also mention that I sent a link to Rust slides on my web page I sent a link in the chat, maybe it's possible to see it, where you can go for some back, not the start with the page, so it might be convenient during the talk to see it slides separately. Okay, so let's start. I was lucky in the sense that my first project, where I seriously used the work in structures of container Luzi, was actually a joint project with loop Luzi. So I could study things from looking same. And our project was about the Gubbers version of the Jones, I'll discuss it a bit later. And the intuition of log geometry and confidence in the geometry which I got during this project was very helpful for recent advances. My main part of the talk will be about recent advances in classical kinomics resolution. And it was important, yeah, so it seems not related, I'll try to explain a bit about this. So the recent advances are completely joint project with Dana Brown and Shandianic Baudaurshek, and the extended classical canonical factorial resolution to morphisms, canonical semi-stable reduction type theorems, and also we obtained a much faster and simpler resolution for algorithm, for resolution of singularities, we call it dream algorithm. It will be just tangential. In this talk I mentioned it a bit, but main part will be about logarithmic aspects. Ironically, this dream algorithm does not use log geometry at all. Yeah, it was discovered because of log geometry, it does not use it at all. This is one of the reasons why we won't concentrate on the dream algorithm during this talk. But it has a lot of variant developed by students of Abramish, so it also can be done in algorithm setting. Good. Now the plan. So we'll talk a bit about altered resolutions and I also mentioned our joint project we worked on with Luke. And after that main body of the lecture will be about algorithmic resolution, just for motivation, for relations, when I'll describe you, Cherunakis approach, and after that I'll explain logarithmic piece, one has to do to the classical approach. Okay, let's start with altered resolutions. So to be brief, I just formulate one more or less resolved with general as many things about altered resolutions. So we need the notion of alteration of the morphism. What does it mean? We have a dominant morphism, y2x, f from y2x, of integral log schemes, or schemes, maybe schemes with real log structure, when by alteration we mean morphism f prime from y prime to x prime, where both y and x were altered. So this compatible pair, y prime to y and x prime to x, which are proper, generically finite, and rank or degree of these alterations is not divisible by any L, which is invertible on x. So ideally we would like it to be one, but best possible which we can do now is to be prime to any L, invertible on x. And the theorem from 17, altered resolution of morphisms says that if you are given a finite type morphism y2x, between integral fs log schemes, and the generically trivial log structures, and also we have to assume that x is sort of universally resolved by classical meaning. For example, it's a point, or maybe it's a curve, or maybe it's even a quite excellent surface because there is classical resolution for surfaces. 
When there is a log smooth, x alteration from y prime to x prime. But it is given any such f, we can alter y and x and get a log smooth morphism. So we can resolve morphism in log category. So a bit about history. Altered resolution was first discovered by De Jong in 1905. He considered the case when the dimension of x is at most one, mainly point or a tray, so resolution varieties, or some state of reduction over a tray. And also he proved an equivalent version of this group action. And then De Jong's approach in 1906 proved this result in characteristic zero. So we actually, and x is a point, so we actually resolved varieties in characteristic zero by a completely new approach, we proved that De Jong's approach is also able to resolve varieties in characteristics zero. Gathered unknowns around 2005, where one can also control in positive characteristic, one can control the degree of alterations, at least at a single prime L. One can get prime to L alteration, dimension still less equal one. And in our project with Illuzzi, in 2014, we actually worked out Gavir's program. It was not very busy, but we managed and we proved moreover, but actually one can take any x and not only x bounded by one, dimension bounded by one, this required a slightly different induction scheme, but we used many ingredients of Gavir's program. Good. So, so far, and in 17, a few more valuation theoretic techniques were used to strengthen this method. So is this a question, Paris? Yes. So that's okay, when you write integral, it's slightly ambiguous because you don't, probably you don't need, you need, mean integral in the sense of log geometry, but integral in the sense of scheme theory. That's correct. That's correct. But all my log structures will be FS, okay? Even though, yeah, but you're right. Yeah. Okay. And here means just interval on the level of varieties. Yeah. So if you want to make it nicer using alteration, using alteration, then the x prime and y prime are again supposed to be integral or there could be several irreducible components and just the degrees at the sum. In this case, we assume to be integral. Yeah. Okay. So this is a, okay. I understand. Okay. Yeah. Another question? Yeah. Is there a version of this theorem where you don't have log structures, but the altered morphism is literally semi-stable in the sense of... Soon. Okay. Soon. Okay. In a couple of times. Now the method. So the proof of all these results, yeah, what was found by Dion, runs by direct induction on dimension. So morphism of dimension D is split it to D curves, relative curves, and the result when one by one, we start with x zero, which can be resolved. It's a small dimension or by some inductive assumption. And then the result of one, and we get x one, which is log smooth. And then we resolve f two, and we get x two, which is log smooth. And a bunch of alterations is collected during this process, but the idea is very simple. Just resolve dimension by dimension one by one. This requires to resolve the morphism of relative dimension one. But the role of log geometry here is crystal clear. Relative curve can be resolved only in log category. You cannot make this morphism smooth by any alteration. Only log smooth, or semi-stable in the best possible case. The proof of resolution of morphisms is classical more or less. It's based on properness of MGM bar. One of groups. Now we're a few by way. And the semi-stable reduction of the limit of fourth, which actually is the first relative resolution result, which was discovered. 
Okay. And control on the run is done by quotients. So we resolve something equivalently and we divide back so that log smoothness is preserved. This is called, this is, this happens if action is so called toroidal or gather calls when very thing. So, conservation, classical context, walked with regular schemes and log structures given by SNC devices. But everything works even easier if we generalize to log smooth or log regular log schemes. Moreover, the generality is critical when we want to divide by the toroidal action. Because making action toroidal, so-called torification theorems, work only in the general context of log regular log schemes. By way, it was discovered by Dion and Abramovich in the way of walking in 1996. And the word torification is just a joke. When it was discovered and we saw what it looks, Abramovich wrote an email to Dion, which terrific. Yeah, terrific, terrific. So it's just play of words. Okay. Good. Now, what can we deduce from this? A sort of principle, which I think works often, is that once log structures are used, there is no reason to be stuck with smooth and SNC. You should better go to general context of log smooth or log regular schemes and morphisms. In a sense, from the point of view of log geometry, all FS monoid are equal, like all animal cycle, and if needed, you can, after that, improve and combinatorially by a separate routine. And here is a theorem I was asked about, a theorem with a project in lieu, where it's in a stable reduction for morphisms. If in the alteration resolution theory, which I formulated two slides before, you can, in addition, achieve with white prime and X prime are regular and log structures are given by SNC divisors. So you can achieve more. Literally this is the best possible resolution of morphisms. Locally parameters on X are just products on X prime, a product of parameters on white prime. And it is deduced from the theorem on two slides before by hard combinatorial methods. All you have to do is to improve monoids by blowing up by subdivisions. But it's really difficult combinatorial method. It's sort of relative version of the main combinatorial result of KKMS. Let's just put it off, which is also difficult. And now, that's all what I wanted to tell about alteration resolution. We have one principle to take with us. And let's see how we've walked to classical resolution. So the rest of the talk is about joint project with the Brownwich and Vodarshev on resolution singularities over field K or characteristics. For simplicity, we always work with varieties, finite type over field K. We can deal with larger generality, but for lecture purposes, we stick with this. Our goal is to resolve morphisms, log varieties, and a bit I'll tell about at the remover. References for the talk. So logarithmic resolution is done in two papers, first of all, we resolved logarithmic varieties in 17. This is already published. And now there is submitted paper about extension to morphisms. In addition, there are two papers on dream algorithms, paper without log structures and a paper with log structures by Kvek, a student of that. Okay, and now motivation for this project. Main motivation is as follows. We wanted to improve this result about resolution of morphisms, which in characteristic zero is due to Abramish and Karl. Deon's method is not canonical. And even if I give an amorphism with large smooth locus, we have no control on smooth locus. It can be destroyed. We have to choose this vibration. It's not canonical and we have no control. 
So main goals of a new project were first of all, resolve morphisms so that log smooth locus is preserved in particular proof, stable reduction over non-discrete variation rings. Hiranaka, Hiranaka's theorem implies semi-stable reduction over discrete variation ring. It's sort of accident. But for non-discrete, the only thing you can do is to spread out, get a family over a high dimensional base and try to resolve where. And you want your generic fiber to be, which is smooth to be preserved. So one needs to use something new. Second, do this as hunt or less possible. Try to do it canonically. Try to do it compatible with base extensions. Hiranaka's semi-stable reduction is not compatible with Ramefet extension of the tray. And our method will be so fun reality. So clarify the role of log geometry in classical resolution. Okay, just a minute I'll explain what it means. Now the only hope was to use Hiranaka's embedded resolution method. Why? Because this is the only canonical method we have. As I explained, mainly there are two methods to prove resolution in any dimension. Deon's method and Hiranaka's method. And Deon's is not canonical for sure. So we hoped to use Hiranaka's method, but for log smooth embedded varieties and not for smooth embedded varieties. So just shift completely to log geometry of Hiranaka's method. And why we hoped that this is possible? Yeah, not only because we do not have any other tool. We had some indication or expectation. And this was because in Hiranaka's approach, the rest signatures of log geometry. I will point where. And the hope was that due to this monoidal democracy principle, if there is log geometry in Hiranaka's method, it should work for general log smooth stuff. So this was actually, this principle was, you know, it gave us some hope to start it. Okay. And now a couple of words about classical resolution. So classical resolution aims to take an integral variety Z, this time just variety, log variety. So it's not no confusion as possible. And it wants to find a modification Z rest to Z is smooth Z rest. Hiranaka in 64 proved that it exists and got fields medal for this. And then many people tried to understand what Hiranaka did and simplify and Hiranaka himself also worked on this a lot. In 70s, Hiranaka zero found a notion of maximal contact, which will be important later. Willem Eyer and Gears to Milman independently in 70s in 80s and 90s constructed an algorithm, not just existence. We constructed an algorithm how to resolve canonical singularities. And since when actually the only algorithm which was available was this algorithm or Gears to Milman. It's essentially the same. Many different proofs were given or constructions, but the algorithm is the same. So our algorithmic algorithm was sort of the first one which is really new. And the blood actually in 2005 proved that the algorithm in fact satisfies stronger property, not only canonical, it's funtorial for all smooth morphisms. If Z prime over Z is smooth, when the resolution of Z prime is pullback of resolution of Z. And this is stronger claim and it is easier to prove as often happens with inductive arguments. And also it proves equivalent resolution, so it's useful for applications. Now about our results. So in 70, we constructed a look of classical algorithm in logarithmic world. So if you want to resolve morphisms, it's clear that you should go to logarithmic world. And I also gave a few more reasons why to do. Now morphisms are complicated things. 
So if you want to do some logarithmic, start with varieties. Just develop something. So here on the graph, theorem results with varieties. Just resolve variety, resolve the divisor given to the structure and you get the resolution. But we constructed an algorithm which is not only logarithmic, it's funtorial with respect to all look smooth morphisms. This funtoriality is completely out of reach for Heronaca. It's something new and it's important. In logarithmic world, you must walk logarithmically. So funtoriality also is much stronger here. This was the main novelty. And the method itself. And then in the next paper in SQL, we proved that this algorithm developed in 17 actually works for morphisms. Just the same algorithm, works for morphisms. It constructs a modification of x so that x rest to b is log smooth. But it may fail if dimension of b is larger than 1. But it fails for good reason. It fails for the reason that sometimes you have also to modify b. If dimension of b is larger than 1, it can be possible that you have also to modify b. So a new ingredient was to prove that there is a modification of b. So what after modification, the base change already can be resolved by the algorithm of 17. So when you modify b enough, you can resolve. Moreover, this will be compatible with any fuzzle base change. So it's completely up to existence. It's independent of the base. It's compatible with base changes. And so far in the archive version, h is not canonical. So resolution is canonical only relatively once you choose sub b. But we are working on canonical modification of b2. It will be done. So we are in the middle of this work, but it's clear what it will be done. So these are the new things about algorithm. And now I formulated, I gave motivation, I gave formulations. Now I'll describe classical algorithm. And in the end, I'll explain how it can be twisted to logarithmic version. So all canonical methods before our work actually constructed essentially the same algorithm. You can work locally because your buildings have in canonical. So if you do it locally, when it glues automatically, the resolution is embedded. One locally embeds x into a manifold. By manifold, I always mean a smooth variety in this talk. And then works with a pair. So one looks for blowups of the embed manifold so that MRES is smooth and some transform, certain transform of x, which is the pullback minus few copies of exceptional device. So transform of x is the resolution of x. Fantorial embedded resolution implies fantorial non-embedded because embedding is essentially unique. I will not stop on this question, but this reduction from non-embedded. To embedded is simple. Main choices. It turns out that this classical algorithm makes a lot of choices, which looks so natural that people just are not aware what they really are done. So first choice. The most natural one is that we only blow up smooth centers. Why? Because we want this ambient space to be smooth throughout our algorithm. So we constructed sequence of blowups. MI is blown up at smooth center VI and we get a smooth MI plus one. So this will be the notation. Transforms. And by way, I want to say that already one is a decision. And in our algorithm, its centers will be different. And in DreamAlgorithm, its centers are different. So one can play even with this choice. It's essential. Transforms. In the approach, you pull back X and subtract the multiple of the exceptional devices. The most natural thing you can do. 
If you pull back completely, you definitely get something which cannot be smooth because it has few components. It has copies of exceptional devices. So at least you must remove some copies of exception. Choice of centers. There is an invariant in the algorithm. I'll describe it a bit later. But the main component of this invariant is the order of ideal defining X. So the order I'll explain a bit later what it is, but it's something very natural, you can imagine. It's a very crude primary invariant. History. In addition, the usual algorithm will run into a loop if you just use this primary invariant. I'll give example in a couple of minutes. And because of this, it has to use history. It cannot work without history. And history is given by exceptional S and C, divisor E. And the number of components at the point will be another primary invariant. So and finally, induction. This algorithm also runs by induction, but it's not induction of dimension and induction by vibration. It's induction of co-dimension. It's induction on hypersurface and then hypersurface and hypersurface and so on. So in this ambient manifold, we'll choose a maximal contact hypersurface so that the problem can be restricted to it. And so this is the mechanism of induction. So actual invariant will be D1, S1, and then D2, S2, the invariants on the maximal contact and then the invariants of the next maximal contact and so on. So it's a sequence of two invariants. Okay, good. And now history. Classical algorithm, in addition to subtle inductive structure, it must encode history and with our choices, no history does not exist. And here's an example. An example of no progress. Let's take ambient manifold A4. Let's take hypersurface given by vanishing of X square minus YZT. Yeah, it's a hypersurface with singularity, locus, which is just union of three coordinate lines. X, Y, X is the X, I think, Y. Okay, it contains of the union of three lines. It consists of union of three lines. And there is a symmetry by permuting YZT as free symmetry. And in this singular, singular locus, the only S3 covariant sub-scheme containing zero is zero. So if we want to find something canonical, it must blow up a covariant centers. And then it can only blow up zero. If you blow up zero and consider a chart of this blow up, when the pullback looks like Y prime square times the same expression in new coordinates. That is, the total, the pullback of X consists of something which is X prime is just looks like X and two copies of exceptional divisor. So after removing exceptional divisor, we just are stuck with the same equation. It does not improve. And if we have no memory, we'll do the same. And we say, and we'll never stop. And a similar computation shows that even with the umbrella, when you blow up pinchpoint, you again get a pinchpoint. So Hieronakis algorithm must use history. But using weighted blowups and not just blowups, we have constructed a 19 and dream algorithm which is just as simple as possible. It defines a variant. It says which center to choose with center with maximum invariant. You blow it up and then there are drops. And there is no history. Okay, good. And because there is no history, actually one does not have to consider exceptional divisor in this algorithm and it works without those. Good. Now about the boundary. So why history is encoded in the boundary in Hieronakis approach? It's very simple. Once we blow up M and get some M prime, any point X on the exceptional divisor has a God-given coordinate T. 
It is unique after a unit. And it comes from the history of the resolution. So if you want to make some less choices and remember history, we should use this coordinate always in all of our computations. And this is what Hieronakis does. So inductively for a sequence of some manifold blowups, we define a total boundary to be premature of the I's boundary and the U boundary. And we call it the accumulated boundary of M. Of M i plus 1. We always work with coordinate system T1, Tn such that both new center and the boundary at this stage can be expressed in these coordinates. In such case, one actually says that E i and V i have simple normal crossings. This means that V i lies in few components of exceptional divisor and it's transversal to the union of the other components. So we call the boundary coordinates exceptional or monomial and even denote them differently and one up to MR. So our coordinate system has some usual coordinates where we have choices and exceptional coordinates which are God-given up to units. And in this event, it blows up only such V i's, it's automatically that the boundary will be simple normal crossing at any stage. And if I would blow up a smooth center which is not transversal, when it can happen that I destroy it with boundary and the next boundary would be non-smooth. So it's sort of must. If we want to use boundary as an S&C divisor, we must blow up something like this. So this restricts our center's smooth. Now the role of the boundary, good news, is that once we use monomial coordinates, we have less choices. Yeah, this is what we wanted. We avoid loops. And also boundary can accumulate part of i. So we'll split i in the signal as i monomial, where i monomial is maximal monomial invertible ideal and i pure, which cannot be divided by monomials. This splitting will be essential just in a minute. Bend use, in fact, in other side of the same coin. First of all, we must treat E and monomial coordinates with a special care. And less possibilities for coordinates. So sometimes it's also a problem. Okay, good. Now many technical complications of a classical algorithm actually are caused by the fact that we badly separate regular and exceptional coordinates. And I'll point out where this happens. So and first of all, in the definition of order, we have two classes of coordinates, but in Hironaka's approach, they are mixed. And in our approach, after, we'll separate them completely, as you'll see. Good. Now, principalization. So this idea of splitting i as monomial part and pure part actually is reflected as follows. By principalization problem, we mean we follow. All algorithm of embedded resolution do the following things. First of all, once we embed x into m, replace it by ideal i on m. And only work with ideal. So now on, we ignore geometry of x completely, we just work with ideal, geometry of m and ideal on it. And we solve the following principalization problem. Find a sequence of blow-ups as above of manifolds with boundary, such that the pullback of i to m, m is invertible and monomial. So just it becomes what I wrote, i mon and i pure is completely killed, no pure part. It means that it's just supported on E n. It looks like a different problem, but it turns out to be equivalent to embedded resolution. If you are given embedded resolution, you can pass to, okay, no, no, sorry, it's stronger, it's stronger. In theory, it's stronger. So the magic is that the last non-empty strict transform of x, let's denote it xl in ml, actually is a component of vl. 
And because of this, it must be smooth and transverse of vl is smooth and have simple normal process with l. So the magic is that if you can solve principalization problem, then you automatically solve the embedded resolution problem. So from now on, we'll discuss principalization problem. So we replace geometric problem by a multibright problem about hds. So moreover, principalization not only solves xl, it solves also the history divisor. It achieves since xl and el have simple normal crossings, the restriction of l to xl is s and c. So we wanted to solve one problem and we solved a stronger logarithmic problem. This gives a strong smell of log geometry and this was one of indications that log geometry is lurking behind shironaka support. Great profit, working with ideal provides a lot of flexibility as we'll immediately see. Okay, order reduction. So many variants of Valgory, as I told, is the order, the order of pure part because monomial part is sort of our friend, pure part is our enemy. We want to decrease the pure part. So the order is defined as minimal order of elements of the ideal and it's as natural as you can imagine. At the origin, the order of xl minus yz square is 2. Yeah, it's given by this monomial and the order of such a guy is 5 because of this monomial. Okay, in addition, one works not just with ideal, one works with so-called weighted or marked ideals, i, d, where d is a number. And this number indicates what types of transforms we want to do. D says that we want to remove d copies of exception. So we only use blowups along with centers, which are contained in the locus where the order of 5 is at least d. So we call such a locus i, d singular. It's singular support of the marked ideal i, d. And if we blow up such a center, it's automatically that we can update i by pulling it back and dividing by this power of exception of divisor. So these guarantees that we can divide by this power. And if we blow up the locus where the order was at least d, then we'll get at least d copies of exception and we can subtract it. For example, we already saw such example, we blew up x square minus yz t and we removed two copies of exception. Order reduction finds a sequence of blowups with boundaries. I just to save space, I did not put here the boundaries, which are i, d admissible in the following sense. In this sense, of blowing up only such centers. And order reduction not only blows up such centers, it finds a sequence. So with im, d singular is empty. So it managed to get im so that its order is at any point is less strictly less than d. Yeah, so we blow up points where order was moving d and we drop below d. So we sort of reduce the order of i below d. Now in principle, existence of such a immediately implies principalization. Just take d equal one, just start with ideal and kill it completely. Using such transforms and factoring out monomial parts at each step. And remark, the main case actually is not d equal one. The main case is d equal to order of ip1. It's the most natural thing. So invariant says that the maximal problem happens where the order is maximum. So try to reduce the maximal order and then next and so on. So the main case is maximal order. But for inductive reasons, we also have to deal with the case when d is not the order of your part, but something small. So it's sort of bad karma inherited on maximal contact from the general problem. Okay, good. And now we go to concrete part. Just one or two slides about, you know, concrete work. So maximal contact. 
The miracle which enables induction on dimension and the miracle which only happens in characteristic zero. We have no idea what to do in characteristic p. No analog of such a phenomenon is that in the maximal order case, in the case when d is just the order of i, the order reduction of i, d is equivalent to the order reduction of so-called coefficient ideal c i restricted to a hyper surface h of maximal contact. With the order d factorial. So any blow up sequence which reduces order of c i on h gives rise to a blow up sequence which reduces the order of i, d just blow up something in h and then blow up something in strict transform of h and so on. So just the same sequence induces sequence of loss of them and 20. c i is, yeah, as I said, coefficient ideal and h is hyper surface of maximal contact. Now the main example, how we look. Let's assume that i is just given by a single equation. So hyper surface. And in such case, we can always choose coordinates t equal to t1 and up to tn. So what this element will look like t to the d plus a2 t, d minus 2 and so on plus ad where ai depends on t2 up to t, at least formally local. And then h is very simple is just vanishing locus of t and c i also something very simple is just the ideal generated by coefficients. Hence the name coefficient ideal coefficients, but with correct weights, we want a2 to be a weight 2 and ad to be a weight d. So we take integral weights, which put them in the same gradient and remarks. Why such a definition, why coefficient ideal? The reason is we've fallen. If I try to take just i in restricted to h, when I just keep ad restricted to h, this loses a lot of information, no way that it will be equivalent to my original problem. I want to restrict all coefficients to h, but when I kill t, I must somehow keep information what was the degree of each coefficient and it's clear that they should be all the weights which I wrote here. So it's just a way to wait to keep all information about this equation on h. And a1 equals 0, this is the place where we really use characteristic zero assumption. Otherwise it's not possible to kill the coefficient of t to d minus 1 and it will immediately be clear why this is so important. Okay, good. Now I give you example, which completely illustrates the main mechanism of value, but it has choices, a lot of choices. I just chose some coordinates. So the question is if it's possible to do this without choices, that is yes, it's done by use of derivations. So main tool for a choice-free description is to consider derivation ideal of i, denoted d i, is generated by i and by all derivations of its elements and iterated derivation will be denoted dn of i. And I'll note that derivation decreases the order of ideal just by one. Yeah, there is at least one partial derivation which will decrease the order, it's obvious. So because of this derivation provides conceptual way to define all basic ingredients and the order is just minimal d such that this derivation of the ideal is trivial one, order zero. the capital L Job is capital K If I derive D minus one time, I kill all these parts and I only have two. So a maximal contact also is defined using this derivation ideal. And coefficient ideal, again, is just weighted sum of derivations. So more or less the same as we had before. Remark, the only serious difficulty in proving independence of choices now is independence of choice of this T. There might be few maximal contacts. One must prove independence. 
It's a headache of value, it's the most subtle point. I will not discuss it in this talk, but there is something to do. Up to choice of this maximal contact, I more or less describe all the ingredients. Okay, good. Now, complications of the classical algorithm. So it has two complications and this is related to use of usual derivations instead of logarithmic ones. So model of logarithmic derivations is spanned by logarithmic derivations mj, delta mj and delta ti for regular ti. Now, these are precisely the derivations which preserve the exception divisor. Take its maximal ideal, it's a, almost all needs, it's easy, more conceptual, easier for computation, whatever you want, to walk with logarithmic derivations once we want to keep E in the picture. But we cannot compute the other using logarithmic derivations. This is the problem, we must use all derivations. And because of this, Heronakis approach runs into two complications, two following complications. First, it says us how to choose H. This maximal contact is chosen using the derivation ideal. This derivation ideal has no idea what is exception divisor, just no relation. Because of this, it might happen that E is not transfer solved to H. In such case, I cannot restrict E on to H by getting something SNC. I can restrict as log scheme, but yeah, it won't be log smooth. And because of this, we have no control on transversality to E. So the algorithm we can run on H will not be transfer solved to E and we destroy all our inductive. So how one resolves this? It turns out that all new, except if we start to blow up H, all new components will be transfer solved to H. So the problem is only with the old boundary. So because of this, the solution is to work, to remember, to work with stratification of H by the old boundary, by the number of components of old boundaries. So we define a secondary invariant or second primary invariant, S-old. The number of old components of the boundary and the first four work where this S-old is maximal and then where it's next and so on. I'll not go to details because in our algorithm, we get rid of all this mess, but this is existing. It's a headache of usual algorithm. And this is the reason why the initial invariant is not just D, it's the order and the number of components because at this stage, the E is our enemy and we have somehow to bypass this complication. And second complication is that it can happen where the order of I is large when D, but the order of pure part is small when D because monomial coordinates contribute to the order. And in such case, we cannot proceed just by looking only at the pure part. We cannot just say, okay, let's take pure part and reduce because it's already reduced below D. In such case, we have to take into account the order of I monomial and we'll have to work with stratification where the order of monomial is large enough. Again, you'll have to stratify our picture and to run something different. And there is a solution outlined here. I will not discuss it because again, it's not essential for our new algorithm. Maybe I'll only mention that even when I pure is empty, still for inductive reasons, you have to get rid of monomial part and it's done by purely combinatorial step, but again, something should be done. Okay, in this combinatorial step, actually we have an analog, but much simpler in our new algorithm. Okay, good. So we are done with classical algorithm and now we have about 10, 15 minutes to discuss the logarithmic list. And logarithmic algorithm. 
So what is the boundary? Before we go further, let's really understand what is the boundary because so far only hinted that in Hieronakis algorithm, there are some logarithmic ingredients. Sometimes they help, sometimes they are against us, but we're a sample. Now, so let's think about boundary. Typically, and this was my first before I started, I was familiar with logarithmic geometry. I thought that this is a divisor. And I think now when this is wrong to you, boundary is a divisor. Unlike embedded scheme X, you should not think of E as a subscreen. Even because there is no map of pairs m prime, e prime to m e, when you blow up, you increase the pullback of your boundary. So in prime is not mapped to E. It may happen that E is empty and the new boundary is not empty. So it's not map of pairs of schemes. Just even by funturality, E is not a subscreen. It's not good to view it as a subscreen. But if you view this guy as a morphism of log schemes, this makes perfect sense. It's just a morphism of log schemes where we consider log structure associated to this SNC divisor. Moreover, this is excellent log scheme. It's log smooth log scheme. And yeah. And moreover, the shift of monomials, yeah, which are invertible outside of E, yeah, this log structure, is precisely what we need from E. In Heronakis's algorithm, we just factor the ideal to monomial part and non-monomial. And to factor out monomial part, we just use this shift of monomials. So in a sense, Heronakis invented in this particular case, the notion of log scheme, yeah, in very particular case. Okay. And logarithmic parameters. So we'll work with log smooth log varieties. For shortness, I'll just say toroidal varieties. And it's the same, yeah, just classical toroidal varieties are the same as log smooth log varieties. And locally, where of the form spec of K, bracket M, bracket T1, TL, where T1, TL are regular parameters, and M is just sharp Fs monoid. Okay. And the VU, TI is regular coordinates, and all elements of M will be monomial coordinates if the original T. So now we don't have good monomials and bad monomials because this M can be complicated. Also, logarithmic derivations, yeah, differential, differentials of T comma M, yeah, it means logarithmic differentials. This model is freely generated by differentials of T1, TL, and delta Mj, yeah, DMj over Mj. DMj now can be any basis of Mjp, yeah? I don't care if this is a basis of M or not, M does not have to be free, and just any basis of Mjp is good for me. Please pay attention, I'm in characteristic zero, yeah? So this is the reason I can take any basis. And even though Mjp tends to be like that. And this factor, this factor I prefer to say as a principle of monomial democracy will come to it a bit later. From now on M does not have to be free. There is no canonical basis of Mjp, and all monomials for us are equal. Yeah, like all of us monoids are equal, and all monomials inside such monoid are equal. Remark, the most interesting feature of the new algorithm is funerality with respect to kumar logital covers. I told that it's compatible with any log smooth morphism, but kumar logital is probably the most surprising, the most interesting one. Because in usual situation, they look like grimified covers, they are not smooth, so why should you expect any compatibility? So for example, if we extract roots of monomial coordinates in classical setting, our resolution is compatible with such operation. And here on echo, obviously not. 
Or in the case of semi-stable reduction, we can extract your roots of uniformizer of the base. We can consider ground field extension, which is remefined. And still this is compatible with our algorithm. And it's out of reach and also unnatural for classical algorithm, but it's very natural for algorithmic life. Well, now main results about algorithmic algorithm. So ignoring the orbital aspect, which I hinted at in the beginning and in the last slide we'll discuss it a bit. If we ignore it, when log principalization says that given a toroidal variety T and an ideal on this old T, in on T, we can find a sequence of admissible blowing up of toroidal varieties. I'll say you later which admissibility this time. Tn to T, such that the pullback of i to Tn is monomial. So it's just direct generalization to logarithmic setting of principalization. And this sequence is compatible with log smooth morphisms. Again, log smooth funturality is essential. And as this implies, within classical situation, as in the classical, given any integral logarithmic variety X where exist a modification X-rays to X, such that X-rays is log smooth. This is funtural again in a strong sense. The main novelty is strong. And also as I mentioned, both principalization and log resolution, we saw work also in relative situation for morphisms. Good. Now about the method. And please pay attention. We have something like seven minutes. We have just four slides. But after we've worked, we have done now, it's really will be very simple. So in brief, we want to log adjust all parts of classical algorithm. But this we want to put log at any place we can. Okay, so I won't, I just was confused about your, in log principalization, the ideal, maybe I didn't quite understand what the, the order of varieties. Monomial and invertible. It should be also invertible. I forgot to say invertible. Yeah, but the ideal, so the toroidal variety has a log structure in your, and the ideal is any current ideal or is related to the log structure? No, any ideal. Any ideal. But then the, if you want to, okay, you didn't explain all the, Mr. Oblombo. I did not. You'll see, you'll see, you'll see. I increase log structures, you can imagine. In brief, we want to log adjust all parts. So how we do it? Log order of I is the variable D, such that D log D of the ideal is trivial. So we just replaced D by D log. Maximum contact is any hyper surface given by vanishing of T, where it is regular coordinate. And it means it's coordinate whose log order is one. So in D log, D minus one, where are elements of order one, take any of them, it defines you a maximum contact. Such maximal contact is automatically toroidal. If I take vanishing locus of monomial coordinate, I will not get something toroidal. But here, if I take vanishing locus of such guy, it's always toroidal. Coefficient ideal, again, weighted sum of logarithmic derivations. The only new thing is what does it mean to have I, D admissible blow up? So we allow this time to blow up any J such way. First of all, I is contained in this power of J. And this is with D admissibility. If I is contained in J, D, when the public of five can be divided by this power of public of J, that is I can remove D times the exceptional device. So this is just to be able to remove D times. And second, J is generated by a few regular coordinates in human omens. And I don't care for monomens. It's democracy. You can take any set of monomens. So any monomial ideal can be blown up. 
And obviously this destroys smoothness, but this preserves log smoothness. So in log smooth context, I'm allowed to do such a thing. I have more possibilities for blow ups. And in fact, I just blow up what we call submonomial ideal. It's monomial ideal on logarithmic submanifolds given by vanishing of T1TN. And after blowing up such a thing, I add its exception divisor to the monomial structure and increase monomial structure as in the classical algorithm. Good. Now infinite log order. So a strange thing which happens, a new thing is that log order of Tis is what? But log order of monomials is infinite by this definition. Because when I take the log of monomial, I kept multiple of the same monomial where I give values of the log. I give functions. So we behave like zero where log order is infinite. And this is the main novelty. And this is the novelty which allows for reality with respect to extracting roots of monomials, kumar covers. Because on a kumar cover, my monomial which was for example, M, becomes square of something else. But its order must be the same if my algorithm is compatible with kumar covers, all invariance must be compatible. The only way to be compatible is to say that its order is infinite. Derivations are not able to treat monomials and you should give up and not insist as in Pheronakis approach. So as a prize, we have to do something special when the log order of I is infinite, but this something special is very simple. And in fact, it was discovered by colors by color a few years ago before our work. And it just says that you should consider ideal I mon, minimal ideal which contains I. For example, if I is given by Venetian of elements sum of MIs t2i, we just take the ideal generated by coefficients, monomial coefficients, blow this up and divide by this pool lake. What you get, you kill one of these coefficients. So on the pool lake, the order becomes finite. So where is very simple, completely combinatorial blowup, monomial blowup, which makes the order finite. And after that you proceed as usual, you take maximal contact and run induction on the dimension. Our algorithm is C-plug. It avoids both complications I mentioned. Maximum contact always is given by a regular coordinate. So it's always transversal to the monomial structure. It's always throw it automatically on the nose. And in a sense, we completely separate the dealing with regular coordinates, where I will order and give you these monomial coordinates, which is done by combinatorics by throw it a blowup, by monomial blowup. And the invariant now also is much simpler. It's just three of others. D1 up to Dn is DI just national numbers. And the last one is a zero infinity just like infinity or just something. Okay, and is always so elementary where is the cheating? And I said that there is a cheating and cheating is that a drawback of monomial democracy is that the algorithm has no idea when monomial is a power of another monomial. And sometimes because of weights, it insists to blow up a fractional power of monomial. We call it kumar monomial. It's monomial on kubar cover, but not on, yeah, it's monomial in kumar locker. How can we blow up such a thing? Well, we can try to walk logital kumar locker. We can pass to the Galois cover where this root exists, blow up where, and then divide by by the Galois group. Excellent idea, but it, yeah, and we did not expect complication here because of log counter reality, but it turns out that this action after blow up becomes not toroidal. 
So when we divide back, we get something which is not log smooth. Because of this, we must divide back as a stack. So called not representable modification, which we call kumar blow up. Blow up of kumar ideal, which contains, which is ideal in kumar topology. And it can be made principle invertible, but only by non-representable kumar locker. And this is okay for applications because after that we can remove stack structure by a verification algorithm. The same algorithm as used in Gaber approach and by Abramovich Deyonc and others by verification or just a verification, we can actually remove stack structure. But we lost our step with verification. We'll be compatible only with respect to smooth processes. So in order to be log factorial, we must also work with stacks with non-representable modifications. So the stage which is log smooth factorial, in fact only work in the world of stacks. So we must enlarge our context to log smooth and then also to stacks. And now last slide, there is an example where I show the difference between classical and non-classical situation and show when kumar non-representable kumar blow-up is needed. And I'll not stop here because I'm out of time. And last remark, this way to blow-ups which we discovered here. We blow up T1 up to TN and the MV is way D. We can be done more generally. For once we discovered way to blow-ups, instead of the context, we asked what can be done to class cloud? It turned out that usual way to blow-ups of TI, of coordinates on AM, these weights D1 up to DR, is in fact core space of a non-representable modification which is smooth. And even box with weighted blow-ups and consider just usual centers which are predicted by Hironaka, just maximal contact centers, these were weights, but you do the correct blow-up when you get the dream algorithm, I talked about. So actually it's also, it was always hidden in Hironaka's approach, but just people did not know what are the correct tools to work with. One have to work with correct weights, one have to work with text, and then it's possible to get a simplest algorithm one can imagine. Thank you for your attention. Thank you. Thank you. There's one question from Q&A from Darnco. So which paper is cited as T17, if any? 17 is the paper in the GIMS. It's... Oops, just a minute. Yeah. 17 is principalization of ideal sum logarithmic orbit. Just T17. Ah, no, it was, so it's just... Lodarsik is everywhere in all these walks, yeah, so... Okay. Non-intentionally. So, yeah, I think it's a good question. So, yeah, I think it's a good question. So, yeah, I think it's a good question. Okay. Non-intentionally. It was ATW 17. So, are there other answers? Sorry, just a minute, just a minute. Maybe I was just a minute. Let me see. Where is this ATW? Yeah, yeah. This was at 30 at 435... At 1135. So, do I say it again? No, I have 1035, sorry. Never saying this question was asked already on at 1035 a.m. Ah, 1035. Okay, well, okay, so... So, are we... Ask questions? Just concerning some comments in your talk related to my work. So, you mentioned the clarification. Yes. Which you said is used in my report. And in fact, as far as I remember, you mentioned in 2012 as a possible simplification. So, what I did is I used the canonical desingularization. Yes. Of some portion in the... And then you suggested to use clarification. It actually works, but it was not done in the book. It was... I don't know if this is what you meant about clarification related to my... Yeah, I meant... I meant... Yeah, offer. 
I meant that there are few algorithms for clarification. Your algorithm indeed used the resolution. Initial algorithm of... Deion Kabramovich used some other trick. But in all arguments I know, you must go to log smooth setting. You cannot do it only with smooth and S&C. And it would be possible also in your approach to use clarification of Kabramovich, but okay, it was not... But... Graphio... And now also graph works on so-called desiccification, generalization of these two stacks. It's also similar algorithm. Graphio versions. But all of them somehow must work with log smooth and not just... Okay, now concerning the classical desiccualization one. So you have... Vilamayo... Yes. And then Wilson Binman, who I think was another paper by Sinus and Vilamayo. So I was... I think I read some time ago in the master views of some of this, that the algorithm is not exactly the same. Sometimes they are different... Different all the... It's not exactly the same order of stack. Okay. So it's... Let's say so. In a talk I allowed myself to put something not important under the carpet. Just to save time. And also to make it simpler for listeners. But you are right. You can do combinatorics in a stupid way. You can do it more efficient and less efficient. And people are playing with it a bit. You have some choices with combinatorics, obviously. Moreover, difference of few algorithms is like program... In programming, AC program, when compiler... There are more effective compilers and less effective. If it's less effective, it just says to... Processor to stop and wait until it's sure that... It can do the next operation. The same happens with this algorithm. In some versions, they are not sure that they can proceed. They do some combinatorial steps much more than needed. So we just blow up divisor, for example, few times. Just idle operation. So there are some nuances, but... The main machine in gene of algorithm is the same. The choice of maximal context and so on is completely the same. Which is due to your own idea. This is in your own... Yeah, but in your own... It was implicit and when your own... Himself worked on it many years to make it simpler and so on. So it took a lot of time to... There is this patronite, the artistic exponents. We will introduce this in 1977. It produces certain things. It doesn't give the algorithm. But when then, one can... Actually get the course where he gave this that I had about this. So this is closely related to this... Algorithm. Right. Idealistic exponents are marked ideals. This is the ideal... The idea to consider marked ideals. Idealistic exponent is precisely this mark. Yeah, so you had some... There is reduction step that he mentioned. You mentioned the... And he also wrote the paper later in the early 2000s about this. Okay. The question is a stupid question. I'd like to come back to your quiz about... Making a morphism good by modifying the base. I don't remember. Come back to that result. You have an F and then the F point is used on... On F by some modification of the base and it is good. Is it this light? The new ingredient is that the reason. I'm not so sure. There was no log in the beginning. And then you... No log... Was it in altered resolution or in classical? Yes, altered resolution. In altered resolution it was. I don't remember which assumption you have on your X to Y. Or maybe no. No, it's not so technical. So my question was that you have an F from X to Y which is not good. And you make Y point to Y which is a modification. 
So how is a log goodness? Or maybe you also have to modify a little bit the source. I don't remember. Look, is it this theorem? Maybe. A morphism, yes. Yes. So it's a modification or alteration of both schemes. But you also fix a lot of questions. So you stop effort log schemes. The underlying scheme, is there any assumption there? Over a field or not? I assume that X is a finite type of a QE surface. Over QE surface. What is the excellent surface? What is the excellent surface, yes. So I was wondering whether you could use this sort of result or we get to the result to prove. For example, there is Fabrice's theorem about making generalized Rpsi of F becomes good after a modification of the base. So I wonder whether this sort of result could be. Of course you have in Fabrice, in the theorem of Fabrice, you have a shift. But maybe the constant shift is already. Yeah, I thought about it. Maybe you have this X to Y and then Rpsi of X, let's say Y is not good. But by modifying Y, you make it become good. And if the morphism you modify becomes good, then the Rpsi will become good. So I was wondering. Okay, I'm not prepared to ask that, but did I hear the talk or was it told what he thought about this? No, anyway, the reason approach in the paper uses a relative dimension one. So it is very, it is closely related to what you're doing with the fabrication in curve and things like that. So what I'm saying is that instead of you, there is some complicated induction and so vibration in nodal curves. But instead, if you can somehow prove the same thing I think by reducing to the case where you have a Fs log structure and some, some on both X and Y and some log smooth may be saturated. And of course the stratification, the shift is constructed relative to some stratification comparatively with the log structure and some tenderness, suitable tenderness condition on the sheet. And then moves that will discuss of shifts and you have uniformly that I mean that the Rpsi, this conversion of this in uniformity, you have constructivity of Rpsi and also in the stratification. Oh, in your second paper, the other paper maybe. Yes, so it's possible to sum up but of course you have to develop a lot of log things to state and then of course to actually check that Rpsi is who to compute the Rpsi in certain situations and sometimes it is very useful to use test cabinet to use already the result of. So I say, but I think in theory it's possible to do it somehow just by improving the warm of them and that it is. Can you think that the Michaels reserve might have to use. No, no, all I'm speaking about all the ideas that don't use as much as this, just uses the young approach, I mean to alter but without all this, I mean this is here you want to control. Well, I'm satisfied just with the log smooth saturated morphism and as a professor and here he wants to have better and also he wants to control the degree and he wants to have log. Well, I don't know. Okay. Stop here because. Yeah, I want to school but I also wondered about this theorem. So I asked a question about whether X prime and Y prime are integral. Now, if you have this when you a tile localized of course the being irreducible is changed so it's kind of strange that. Yeah, I see what you're saying but if structure is not the risk. Okay, yeah, maybe you're right maybe maybe one has to consider. Yeah, but maybe one has to consider components you're right yeah. I think that's all from. Okay, so. Yeah, I think we should.
|
I will tell about recent developments in resolution of singularities achieved in a series of works with Abramovich and Wlodarczyk – resolution of log varieties, resolution of morphisms and a no-history (or dream) algorithm for resolution of varieties. I will try to especially emphasize the role of logarithmic geometry in these algorithms and in the quest after them.
|
10.5446/53462 (DOI)
|
Thank you very much and thank you for the organizers for the invitation. It's a great pleasure to speak here in the conference in honor of Lucille Lusie. The first time I met Luc was in 2017. We happened to be visiting Chicago at the same time. He was there for a longer period. I was there just for a day or so for a seminar. And I remember while we happened to all go for lunch together, we had a very pleasant conversation and many, many more of those since I arrived, since I moved from the US here to France. And Luc was especially helpful when I just arrived and with making sure that my French is progressing well and that my understanding of French culture is up to date and also many interesting mathematical conversations and pleasant moments over a meal or so. So thank you very much for all that to look at. I found him extremely generous and kind and honored to speak here in this conference. The subject I'd like to discuss is a Grotten Dixier Conjecture which concerns torsors and their addictive groups. So let me just begin by recalling what the Conjecture says. The Conjecture originated around at the end of the 1950s. First there was an article of Serre which opposed the special case. The group perhaps came from the base field and Grotten Dixier and the group de Brauer posed a slightly more general version. Later Colette Lenin, Oger and Gurin popularized the general form of the Conjecture which is what I'm going to formulate. Still attributing this general form to Grotten Dixier, the first origins are perhaps in 1958 or so. It says, the Conjecture predicts that if we have a regular local ring, regular local ring R and a reductive R group scheme, well the Conjecture is already interesting when G is split, so for instance, I don't know PGLN, SON or so on, but in general a reductive group over a scheme or over a ring, a regular local ring, like so, smooth affine group scheme over that base whose fibers are connected reductive groups in the usual sense over a field in that their unibode radical is trivial. So the Conjecture predicts that non-trivial G-torsor trivializes over the fraction field of R. Or in homological terms, well really this is just a restatement, but if you want the map of pointed sets from the collection of all torsors under G over R to the corresponding set of torsors over the fraction field has trivial kernel. And in fact, I posteriori because one is also allowed to apply the original statement to inner forms of G, this map then ends up being even injective if the Conjecture holds for G and all of its, well, twists by torsors of G. So that's a question that I'll occupy myself with during this talk and let me just begin with discussing the cases, well, the main cases in which the Conjecture had been established, so we get an idea of the history of the question. So first of all, well, the simplest reductive groups are commutative ones, in other words, the tori, and in the case when G is a torus, the Conjecture was established by Koliot-Telen Hansen-Sück in 1987. They used so-called Flask resolutions of tori to analyze torsors under an arbitrary torus. Anyway, this case is not entirely evident, but somehow the use of these resolutions in terms of induced tori and so-called Flask tori, one can understand the question. Well, okay, so that's the case when G is as simple as possible. 
The case when R is as simple as possible, well, beyond the trivial case when R is a field, this one R is of dimension one, namely one R is a discrete-valuation ring, a regular local ring of dimension one, so this was settled by Nisnevich in 1984 in his Harvard PhD thesis with some help from Bruhat Titz, in fact, it uses some help from Titz, who I believe was aware perhaps of the case when R is a complete discrete-valuation ring, and so the argument proceeds by reducing to the case when R is complete. This step uses so-called harder approximation, which Nisnevich kind of extended and adapted to this setting in the case when R is complete, one uses some Bruhat Titz theory to analyze torsors in that case. All right, now the case when R is of dimension at most one implies another case when R is rather simple, well, relatively simple, namely when R is Henselian local, so this case follows from a case when R is a DVR, for instance, of course, if R is complete, regular local ring is in particular Henselian, so the complete case is known, basically because when R is Henselian, then a torsor under a smooth group is again going to be smooth, and so it will be trivial as soon as it is trivial over the residue field, because by Hensel's lemma, a point of that torsor over the residue field will always lift to an R point, granted that the finger is smooth, and so, okay, so we only need to trivialize the torsor over the residue field, and then we can sort of cut R up into a chain of DVRs, I mean, into, we can choose a chain of primes of maximal length such that the quotients are regular and the successive quotients will be DVRs, so somehow by this, by choosing such a chain and a little argument, we reduce the case when R is a DVR and then apply this Nevich's argument. Anyway, so somehow the upshot of this case free, although it's not particularly deep given the case too, is that the conjecture is simple when R is, well, when R is complete or Henselian, and so in particular, to attack the conjecture in general, we cannot reasonably hope for a strategy where we somehow reduce the Henselization or completion, and then because, I mean, that case already, that reduction must be the main difficulty, or something else that one has to come up with. All right, now from more recent, more recently, the conjecture has been established in the case when R contains a field. In fact, this was a subject of many works which I will not be exhaustive in mentioning that there really was extensive literature in this case with many contributions, but the final decisive articles that settled this case completely is a sequel characteristic case, whereby Fedorov and Pannon, first they were assuming that R contains an infinite field that made some geometric arguments, notably ones that use Bertini lemma, slightly simpler than Pannon extended the argument to the case when R contains a finite field, so arbitrary equal characteristic regular local ring. Later Fedorov simplified that roof yet again by avoiding some whole initial reduction to the case when she is simply connected and just taking up a general G right away without that initial reduction. So these are the main known cases. There are a few others, for example, when she is of special kind, I'll just say sporadic cases. Okay, many authors again I will not try to be exhaustive. 
For example, if G is PGLN, so these sporadic cases concern the cases when either R or G are specifically the one R is of low dimension or G is of, or perhaps both are G is of some special form, just for a simple example, if G is PGLN, then the conjecture is known because the torsos and the PGLN they inject into the Browel group of R and the Browel group by a result of Grotendick of a regular base, the Browel group injects into a Browel group of the fraction field. And so the case when G is PGLN is known is due to Grotendick, in fact, was one of the motivations for posing this conjecture in general or hoping that perhaps the statement could be true in general. And so, but in general, beyond the equal characteristic cases, somehow not so much been known about this conjecture, especially when R is ramified, makes characteristic regular local ring. When G is in activity, is that PGLN known? Well, so then one perhaps first needs to negotiate what classes of G1 is sort of talking about. Well, for example. G is at PGLN known, and G of course is at PGLN. Well, G is just a group scheme over R. For example, I can give some, so if G is an abelian scheme, is some kind of an orthogonal case to what we're discussing here, the statement is still true. The map on H1 is injective because one can look at the dual abelian scheme. Basically, it amounts to the fact that line bundles extend, but one uses the crucial way to dual abelian scheme and then extending torsion, properness of G. But if G is finite, just finite flat, the statement is also true. If more generally, if a finite scheme over a normal base has a section over a fraction field, and by taking the schematic closure, that section is going to extend by normality and fineness of the torsion in question. But I don't know, but I think it's not, I mean. I have some ideas, but not, okay, maybe it was not premature. Anyway, so you can look at the constant case or twist of constant groups, and then I think it holds for over a field for smooth varieties. You have a twist of a constant. This uses some analysis of pseudo reductive and quasi reductive, and using the previous arguments, but in a more slightly, using the theory of pseudo reductive groups, at least in the constant case one can give, and then I think also in the twisted case if you use there. So this is probably, well, essentially I know, in principle I think I know how to do such things for these constant groups or twists of constant groups. Okay, so you're saying G is a twist of constant group, if I understand correctly. So this means that locally for the TALTO on a smooth scheme over a field, locally for the TALTOPOLOGY, it comes from the field of constants of the scheme. So it is kind of possibly not a twist of a group, so it could be a germ, anyway, whatever. So this kind of situation, so this means that somehow the behavior is very constant. And so even there, even just to get rid of the unipotent part requires some three, okay, it's not so difficult, but the one is to use some work. So I think I have enough inputs to do it, but it uses just a variance of the previous ideas, but also other things about the two, okay. Yeah, okay, thanks. So that's a more general statement, but one, if I understand correctly, one could pretty much expect to be true if I are so equal characteristic or perhaps localization of a smooth variety over a field and she is a twist of a group that comes from a base field. 
Okay, anyway, so the result that I'd like to talk about is about this conjecture, in the case when R is a mixed characteristic, well, and un-ramified, and so this, the mixed characteristic cases is the remaining one because of this result of Fedorov and Panin. And so let me just recall for the sake of completeness what do I mean by un-ramified. So a regular local ring R with maximal ideal M is called, well, said to be un-ramified. If its residue characteristic is not in a square of its maximal ideal or more precisely, if either it is of equal characteristic, if either R contains a field, so it's either a Q-algebra or an Fp-algebra for prime P or R is a mixed characteristic, R is a mixed characteristic, so a fraction field is characteristic zero and the residue field is characteristic P and this P, this prime number P is not in a square of the maximal ideal of R. So the kinds of rings we're talking about just very, I mean, some very basic example, affine space over Z localized at the origin in characteristic P or more generally any local ring of a smooth Z, a smooth Z scheme or in fact anything in equal characteristic. So also, alright, this is just an example of a smooth, well, LCMX characteristic zero P, so Z localized at P scheme. Okay, so the result that I'd like to discuss in this talk is that the conjecture of Grotten-Dekensser in the case when R is unramified and G, the group G, is quasi-split, so it has a Borrel subgroup. Okay, for example, G could be split, that case is already, is already new in this result and is simpler, so one can think of that case, for instance, I don't know, some favorite exceptional group E67 or something, but okay, anyway, so that's the statement and a little bonus for such R, for such regular local R, namely the unramified ones, could be of equal characteristic or possibly a mixed characteristic, a reductive R group scheme H is split if and only if is base change to the fraction field of R is split. Yeah, so in fact, in general, the Grotten-Dekensser conjecture implies that two reductive group schemes over R that become isomorphic over the fraction field are isomorphic to begin with, so the Grotten-Dekensser conjecture implies in particular that if reductive R group scheme H is split, isomorphic to a split reductive group scheme over the fraction field, then it already has to be isomorphic to that over R itself and this implication of Grotten-Dekensser conjecture to this statement about reductive groups themselves, it requires the statement of the conjecture not only for G, but for also for inner forms and perhaps also for ACHO and group and inner forms of those and so it's not this quasi-splitness assumption is a little, I mean, anyway, we still get this conclusion about split groups, but we don't quite get that if you have two reductive groups over R which are isomorphic over fraction field then. So it does the second statement require more than the quasi-split case of the Grotten-Dekensser conjecture? The way I formulate it here, no, it does not require more because I restrict it to split groups and that's, I mean, the point is that if I restrict this H-split if and only if the fraction field is split then it requires, it falls from Grotten-Dekensser conjecture and a little additional argument. So you don't assume that H is general? Yes. Is it worth that it is quasi-split if and only if H-plug R is split? Aha, yes, so that I cannot quite show. 
That's where, yes, one would perhaps want to show that H is quasi-split if and only if over fraction field is quasi-split. I cannot quite do that because if I try to do a same argument then I start needing the Grotten-Dekensser conjecture for groups that need not necessarily big quasi-split. And in equal characteristics it's true because, well, then the Grotten-Dekensser is known. In fact, yes, so there's another conjecture of Kole-Telen and Panin which says that if reductive group scheme G over a regular local ring has a parabolic of a fixed type of a fraction field and a parabolic is already, I mean, not that particular part of the work, but then it also has a parabolic of a same type over the regular local ring itself. The quasi-splitness is just a case of where else. Anyway, that conjecture is kind of a bit of a story. It's not a special case of the Pugval and Dekensser. It's another type of conjecture. Yeah, it's spiritually related, but not so. Okay, so for this result. In the rest of the talk, I'll focus on this first part, just Grotten-Dekensser rather than about these forms of reductive groups. And so the proof, first of all, the proof uses known cases 1 and 2, but not 3 and 5. In other words, we use the case when G is a torus or when R is a DVR, but we're not using, for instance, the work of Federer von Pannen when R contains a field. We recover that. I mean, we approve that, although the proof is somewhat, I mean, it is related to their approach as well. So we approve that case along the way. And in fact, well, I stated this in this form for simplicity. In fact, same statement holds when R is merely semi-local and still unrampified in the sense as local rings are unrampified. Yeah, so in fact, one could generalize the Grotten-Dekensser conjecture to require that the regulating R be semi-local rather than local in some sense and a more natural starting point. And this result, if one assumes G to be quasi-split, is still okay in that case. So I suppose that by professing R is a limit of smooth things over Zp. Yeah, well. Okay. Over Zp. And so probably we'll use it. Yes. Is it then the proof works for smooth things over DVR or? Right. Yes. I have such a version as well. It still works. So in fact, I could assume that R is a traumatic irregular over DVR. It's still, the statement is still okay. I just restricted to this absolute case somehow in order not to introduce some, I mean, yeah, that case is still okay. All right. So well, let me perhaps just give a quick corollary just to illustrate somehow the arithmetic flavor in some sense of the result is just for if one applies this to orthogonal groups, one gets that if two is invertible in R so that we can comfortably talk about quadratic forms then non-isomorphic, non-degenerate quadratic forms over R do not become isomorphic over a fraction field over, yeah, unrhymified local R, do not become isomorphic over the fraction field of R. So kind of statements that one gets by specializing to particular types of groups in this statement. Okay. So I like to then proceed to discussing the steps of the proof of the main result because these steps themselves involve some self-contained statements that could be useful elsewhere beyond the proof of this particular result. So let me then fix the situation of what we have and what we want. So we have an unrhymified regular local ring R and the quasi-split reductive R group, yeah, so as in the statement of the theorem quasi-split reductive R group G. 
So it being quasi-split it has a Borrell subgroup defined over R Borrell R subgroup B inside of G and we have a torsor under G which happens to be generically trivial, generically trivial G torsor E, yeah, and we want of course to show that E is trivial in other words that it has an R point, yeah, so that is E is trivial. Okay and well I'd like to begin just by telling you right away how the Borrell is used. So this is captured by the following claim which I'll sketch a proof of which is not too long actually, so okay so the fact that we have a Borrell will give us that there is a closed sub-scheme of spec of R, closed sub-scheme Y of co-dimension at least two, so the complement contains all the height one points such that the restriction of E to the complement of Y such that away from this closed Y this torsor reduces to a torsor under the unipotent radical of the Borrell. Well okay so this is not difficult at all let me sketch the proof, of course we will use the value of criterion of properness, we'll apply it to this E modulo B, B being a Borrell G mod B is proper and any torsor E mod B inherits that properness so E mod B is a proper R-scheme well it's a priori an algebraic space, but in fact it's even a scheme because scheme of Borrell's in some in some in some inner form of of G namely the twist of G by the torsor E, so that exists by this value criterion that exists such Y such that E restricted to the complement of Y reduces to B torsor. Because E mod B well E is generic trivial so E mod B has a point over a fraction field and is being trivial that point extends to cover all the height one points of spec of R so there is such such Y such that E mod B has a point over spec R minus Y but a point over over over there is the same thing as a reduction of a structure group of E to from G to B so E restricted that complement reduces to a B torsor I'll call that B torsor E upper B and if we consider the torus which is a quotient of a Borrell by its unimportant radical then there is a purity for torsor under tori this is due to Colette-Lan and Sun-Suk which says that for torsor under tori and over regular basis removing a close subscheme of co-dimension at least two does not matter that does not affect H1 one can one can always remove a close of co-dimension at least two and so that that purity implies that the quotient of of this U upper B by the unipotent radical extends to a generically trivial T torsor over R so so this quotient is is a is a torsor under under T over spec R minus minus Y and because because this this complement covers all the height one points purity for H1 of of tori tells us that that this torsor under torus extends to a torsor defined over all of all of R and to a generically trivial torsor for that matter and so by applying by applying Grotten-Dixere for tori we get that that torsor that T torsor to which this uniquely extends is trivial because the generic trivial in other words this this quotient has a point has a section in particular where a complement of a complement of Y and and this this section then gives us gives us well this is actually what what we want because sections of this quotient are reductions of E B to torsors of the unipotent radical so this okay in in short we just we just showed show this claim that our generic trivial torsor E thanks to thanks to quasi-splitness of of G over a over a complement of a close subscheme of convention at least to reduces torsor under the unipotent radical of a borel so okay let's perhaps just a warm up now let's 
let's proceed with with kind of over viewing the proof of the main result so the main case I mean as I said it doesn't matter what are so equal characters or it's of mixed characteristic but but because equal characters case was already known I will assume for the rest that are so mixed characteristics 0p that's somehow the main case of interest in more difficult case and so the main the main difficulty is well somehow slightly philosophically speaking is that we cannot we cannot enlarge are we can somehow can only shrink it well what what do I mean by that so for instance we cannot replace arm just by its completion or so because that would the over completion the whole conjecture is known and any attempt to somehow make are larger and so the problem is that over is large a ring the torsor at your studying may become trivial so when somehow tries to go backwards and make are simpler by shrinking it and so concretely let me give an example of what I mean by this so Popesco result Popesco approximation is one such structural result for regular rings and that's where we use the assumption that R is unrhymified so it implies that our unrhymified regular local ring is a filter direct limit of localizations well of yeah of smooth Z localized at P algebras Popesco proved that any any unrhymified regular local ring is somehow I mean some geometric origin in a sense that can be obtained as a filter direct limit of smooth algebras either over a field in equal characteristic or over Z localized at P in mixed characteristic and this results somehow allows us to start using algebraic geometries for this for this for this problem so we use we use Popesco approximation and the limit argument to reduce to the case one R is just a localization of a smooth Z P algebras so without loss of generality R is a local ring well a local ring of a smooth affine Z localized at P scheme which I'll call X so incidentally in this small remark about the semi local case yeah in the semi local case can you allow residue characteristics to be different primes and in each one you will know that it is unrhymified or do what is different yeah that's like that I can allow them to be of different and and in each one of them is unrhymified okay so ours ours a local ring of a smooth smooth affine Z localized at P scheme I'll assume that the relative dimension of of X is D and I'll assume that it's at its positive because the D equals zero case is just basically the DVR case and that that is the case solid and also anyway so these these positive without loss of generality and our G E and and this closed sub scheme Y that we constructed in the claim they spread out by shrinking X we can assume that they spread out the reductive group scheme script G a torsor under its script E and a closed sub scheme Y defined defined over over all of X okay so so that's that's a setup we have we have some smooth affine Z localized at P scheme and then reductive group scheme and a torsor defined defined over it is generically trivial in fact over the complement of Y it reduces the torsor and the unipotent radical and so if that complement were affine then that then that over that complement the torsor would be trivial and we we want to show that is is trivial at the local ring that we're talking about so for this is we will outline multiple steps of how to kind of step by step simplify the situation so first of all we will we will simplify the geometry with with with some version of of of no term normalization so the first step does not use 
anything about the groups it just it just told you right geometry it's a version of no term normalization or some sort of preparation lemma which ensures that X and Y are of particularly pleasant form so what what does this notar normalization say what is well I put it in quotes this is not really no term normalization as you see but the statement is that there are an affine open U in inside X containing containing spec R so this is smaller smaller smaller affine open neighborhood of this local ring that we're really interested in such that over this you have a smooth map smooth smooth map of relative dimension one and we'll call it we'll call that map pi so this U is a relative smooth curve over an S which has an open of the affine space of dimension of dimension one lower it's an open containing the origin yeah this S is just some affine open okay and crucially so so locally at enable with of R the the axis fiber does a relative smooth curve over over over particularly simple simple base and moreover our Y our our script Y closed subscheme over a complement of which something good happens is such that its intersection with this with this U is finite over S not not not not merely quasi finite but actually literally finite so not notar normalization would would say that I mean ideally if there was no term normalization in mixed characteristic it would say that well after shrinking at this at this point X or sorry it would say that X admits perhaps a finite map to the affine space that's what neutralization overfield says a mixed characteristic that cannot be true for instance that are quasi finite schemes that are not not finite and this version says that at least after knocking dimension down by one we can find a smooth smooth smooth relative curve such that the closed subscheme of convention at least to that we're interested is actually that literally still finite and this this this so here's some ingredients sorry yeah right in fact the artens good neighborhood technique is used in proving improving this so it's it's some sort of I mean it is not really not our normalization in its classical sense it just intuitively a statement of that sort so ingredients to this is Albertini theorem applied applied to a compactification of extra projective flat scheme X bar over over Z localized at P and in fact we use a gubbers version of Bertini theorem which valid also for finite fields well it's also poonence version and in fact gubbers version is slightly more convenient for us because it allows us to control degrees of the hypersurfaces that that occur basically the idea is we take this compactification and we cut it iteratively by sufficiently transversal hypersurfaces and equations of these hypersurfaces will be the images of the D minus one standard coordinates of of a one and so the fiber over zero will be the intersection of you with these with these hypersurfaces and so if we if we if we choose them well then that zero fiber will be smooth somehow by Bertini and and then this will also will also hold and will spread out around around zero this is the kind of thing that happens that happens here and of course this is as Luke already remarked this this is similar to Arton's method so techniques from Arton's construction of good neighborhoods from SGA4 so Arton constructed such local vibrations into into curves in equal characteristic over a field perhaps a large bar closed field he used it to to show that the talcum ology agrees with some electric counterpart and this this result was was 
used I mean his technique was used for many other purposes too and let me just mention that earlier versions of this of this type of preparation lemma versions of a field of this step one are due to due to Quilin who used also such kind of newton normalizations with especially with respect to the aspect that one wants to control the closed subscheme to study algebraic theory and later refined by Garber in the context of Gersten conjecture okay so so that's step one was geometric step one let's pass to step two which in fact this is is is straightforward this is base change so what do you mean by that so let me just summarize what what we have so far so we have our U going to S and S is an open is an is an open in affine space of dimension one lower so Z localized at P and and U well let me just like that perhaps U going going to S is open and U is a relative curve is a relative smooth curve over this S and we have we have let me perhaps use color chalk so so we have this Y crucially we have we have Y here which contains well which probably contains a point we're interested in which contains the local I mean intersects spec R which is the localization of fiber above above zero in this in this in this in this vibration into curves okay so what so what do we do well spec R is a local ring of this total space of U while we take it one more time that's another copy of spec R right here and so it maps to S we just take a fiber product we get some C so what is C well C is a relative smooth relative curve over over over spec R yeah so we get C over R smooth affine R curve basically because C had the U had these properties over S we get well it comes equipped with an R point with a section delta which is which is an R point of of of C just because I mean there was also a copy of spec R here so surely we'll get we'll get a section from that just by construction Z in C is a closed so it's a closed sub scheme which is finite over R so our finite closed sub scheme which is just the base change of Y intersect or intersect with with you that was finite over over S and C so base change of of Y intersect U we're going to be finite over R E well we'll also have G over over C which is quasi split by base changing the group scheme we get a quasi split reductive reductive group over C with Borrell with Borrell script B such that delta pullback of this pair Borrell inside a reductive group scheme is our original Borrell inside inside G basically because our reductive group G need not be constant over S when we base change the reductive group to C we get a family of read I mean we get a reductive group scheme over C with the Borrell which need not come from from spec R and so but it's it's pullback along delta is going to be the original G that we that we start that we start from and by base changing the torsor we get a torsor where this relative curve a G torsor script G torsor whose pullback along delta is is E and such that the restriction of E to the complement of this are finite close subschims E reduces to a torsor under the unit potent radical of of of the Borrell okay well so from if in fact what we what managed to achieve here with this with these first two steps is that we managed manufacturer relative curve a reductive group and a torsor over that it could with a section such that the original torsor we're studying actually lifts to to this relative curve and so in later steps we use somehow the the flexibility of the setup to change C and to eventually reduce the case one C is the affine line and 
delta is the zero section and Z some close finite subscheme and then you well okay the news techniques about studying torsor under affine line so the upshot of this is that we have the flexibility of changing of changing C and the problem becomes so to more geometric when did I start okay all right so the next step the next step well now the price we paid is that our G is no longer constant we have some script G over the relative curve we don't quite like that so we would like to equate this this reductive group script G and the constant group just G base change to C and the idea for this I mean is not okay first let me give this proposition which in fact a general proposition so for for for a for a Henselian pair a but in a which is Henselian with respect to ideal I such that the quotient a mod i is normal or or not hearing on geometry in your branch should also be okay but okay a mod i is normal then reductive reductive groups over over over a up to isomorphism are the same as reduct as the base changes reductive groups over a mod i up to isomorphism in other words up to isomorphism every reductive group over i mod i mod i lifts uniquely to a reductive group over over a granted that a mod i is normal in the Paris Henselian and the case one when this is just a Henselian local ring is an sj3 and one can well we can also obtain this more general more general version so the idea is to Henselize I mean roughly speaking the idea is to Henselize Henselize Delta along sorry Henselize C along Delta to equate to equate G and and this constant group which is which is a base change now this because over Henselization because script G and and the constant group agree over of the pullback to the section then they will agree over Henselization and then after spreading out we'll have some real to the curve where they start to agreeing the problem is that we cannot we cannot do that because we need this this does not retain finiteness of of Z we have we need this close subscheme Z over the component of which something good happens and we really need it to be finite over over over Z rather than just quasi finite and if we do Henselization we will only get quasi finite keep control of the be which enters into the right okay yeah yeah yeah we also there's a finer version of the position where in fact is a couple to a braille I mean this one is quite split if and only that was was split so yeah there's a version of the bees but I just okay so in fact one proves a finer proposition where the atal neighborhood is not only at all but actually finite at all so the the proposition is as follows after shrinking after shrinking C meanings seriously locally so around around this union of of delta and and C there exists there exists a finite at all not merely at all but actually finite at all C tilde over C and and a delta tilde and our tilde and our point of C tilde lifting lifting lifting delta such that the base change of this B inside inside G to C tilde is isomorphic to to just a constant family base change from from R like so comparatively comparatively with delta tilde pullbacks I mean after delta after pulling back by delta tilde everything is identified with a constant and the identification here is compatible that's what okay so yeah so we can find a finite at all cover finite tall neighborhood of this delta union Z where where the good thing happens the B inside G becomes becomes constant and so in ingredients ingredients for this so one uses one uses to Tauric Tauric geometry Tauric geometry to build 
compactifications of torsors to build compactifications of torsors under tori and then one uses Bertini theorem roughly speaking the idea is that functors which parametrize reductive groups equipped with a borrel these are I mean relate to automorphism groups of reductive groups and by some little the massage one reduces because our G squas split when one reduces to considering torsors under tori and so the same kind of proposition where one wants to equate torsors under under tori and to do that one first compactifies the torsor over the torsor begins a constant torsor begins life or R on compactifies it using toric geometry so in fact the statement is that if one has a normal normal base normal noetherian searing for example a torus which is which is trivial torus which splits over finite talc over and a torsor under the torus then we can find a projective compactification of the torsor such that the torsor is fiber wise dense in that in that compactification and to build such a competition one uses toric subdivision so on it's anyway I will not perhaps go and go into this and proceed directly to step to step four but let me just say the upshot so the upshot of this step three what out was of generality B inside this is this this borrel is just constant and and the group is also is constant so alright and so the rest of the argument is to massage C into the affine line and then study the case of a affine line using affine grass manians and some geometric property of affine grass manians and I hope I will get anyway so so step four is to reduce the affine line so the goal is to replace replace C by the affine line this would simplify okay this would simplify the situation so so here we build a diagram we have our C and we will build a quasi finite map we build a quasi finite map to a one of r such that we have some affine open affine open of C containing containing delta and and Z containing delta the section delta and inside is affine open Z Z lies completely inside it the point is that this map is such that it maps the isomorphically onto a closed subscheme of so so these are both closed maps it isomorphically onto a closed subscheme of the affine line and moreover this right hand square is actually Cartesian in other words we realize Z as a pullback of a closed subscheme of the affine line and this quasi finite map is automatically flat because C is C is even regular and but C is conmocole and a one is regular so quasi finite gives is automatically flat so this kind of preparation lemma uses well yeah I'm out of time so let me just mention so that this yeah so use excision somehow to reduce the C being a one this is some subtleties net excision but I'll leave it there and once we have once we have the affine line we conclude by using affine gross mania and send here I mean okay the key key key statement anyway over the affine line one needs to one needs to study extensions to the project of line and the keys and this these extensions somehow given by gluing so the trivial torsor are parameterized by affine I mean related to affine grass mania and the key key geometric statement that enters is that the affine grass mania of any of any group G one takes its derived subgroup and a simply connected cover by functionality that maps to original affine grass mania and its relative identity component this map is by objective by objective on field valued points what one shows this and I recall that if if the characteristic of the field does not does not divide the pi one of the kernel T of the 
fun of pi one of the drive subgroup then this map F is even an isomorphism but in that characteristic there is there's no less is refinement that the map F is at least by objective on field valued points which helps us a lot because it tells us that points of this are invariant and the multiplication by L in points of this which lift to there and variant of the multiplication by the positive positive loop group L plus of G because that is correct anyway okay so amount of time and this is somehow the main geometric point that is used in the to finish the argument but roughly that's how I remember I think I looked one of ones at your and maybe it is this one and you refer to the paper on what my qualification with some steps right in this page in this group yes so in fact so in fact in the original version of this of this of this result I was only proving the result for split groups and I was using X bar was that compactification X bar was coin Macaulay and the geometry was kind of complicated so I was using my qualifications and later when trying to extend to cause a split groups I realized that actually by doing arguments slightly differently once you can just get rid of my qualifications and geometric power becomes simpler so anyway short answer is that yeah I don't I don't need it in the original version I used my qualifications and I thought that this was I mean turns out they're not essential one can bypass them so I don't use my qualifications enough so I don't use my qualifications enough in this in this question. Okay so thank you very much. Merci beaucoup. Alors ce qu'il y a des questions dans la assistance. Question in two bar. So r is a dimension at most one but there's no un ramified in this assumption there. Right. And in fact, the way that proof of this case two works is by first passing to completion and then using Brouhatt's hysteria and nowhere there. Somehow the proof there is a bit orthogonal to what we're doing here. It's kind of just general facts about discrete relationing. So, as what we're trying to do here is trying to use Popescu to reduce the some regularings of geometric origin and then try to do some algebraic geometry with it. So, it's also a main limitation of why one, I mean why it seems difficult to go to ramified regularings because then Popescu is not available and we just don't know what to do. But I mean, I don't know perhaps the correct approach is to try to channelize this to regular local arbitrary regular localings but it's also, I mean far from straightforward. Anyway, this current attacks on the higher dimensional case pass through Popescu anyway. Also, I think in the sporadic case there was this result in Boolean, right? Overvaluation field. Yeah, so, Ningu actually he proved a version of this conjecture over relationing. So, one has a evaluation ring not necessarily notarian by resolution of singularity. So, to be a filtered direct limit of regular local rings. So, the same conjecture should be true if one has a torsor and a reductive group over the relation ring then which is generally trivial and must be trivial and he proved that unconditionally. I mean that's somehow close to the DVR case except that the relationing is no longer notarian. Okay. So, remercie l'orateur à nouveau. Thank you.
|
The Grothendieck–Serre conjecture predicts that every generically trivial torsor under a reductive group scheme G over a regular local ring R is trivial. We settle it in the case when G is quasi-split and R is unramified. To overcome obstacles that have so far kept the mixed characteristic case out of reach, we adapt Artin's construction of "good neighborhoods" to the setting where the base is a discrete valuation ring, build equivariant compactifications of tori over higher dimensional bases, and study the geometry of the affine Grassmannian in bad characteristics.
|
10.5446/52993 (DOI)
|
Okay, good afternoon everybody. Stabilization of open source hardware. As I described, as I wrote it in my description, I won't tell you about how great open source hardware is, because I assume you all know that. But for those new in the field, open source hardware is not about giving away machines for free, but it's about the question who owns technology. And as hardware or other technology is how we produce our daily production goods or living goods and also the base of circular economy, I assume that's a super important question at least for us. So we start with the term. And the term has been defined by some guys in the US by the open source hardware association that came out with the definition that opens this hardware. This hardware whose design is made publicly available so that anyone can study, modify, distribute, make and sell the design or hardware based on that design. So basically you upload the design files, blueprints and all these calculations and stuff like that, and then anyone can build machines with that. So this is followed by a lot of license terms. So what Osper did is defining what open in open source hardware means. While hardware is rather clear, it remains open what the source actually is. So it is not enough to just share a quick sketch of your machine. Nobody can work with that. So what should actually be shared under an open license so others are enabled to modify, redistribute and whatever your hardware. So it's also a question about the purpose of the source. What are people meant to do with it? And this purpose has been already defined by Osper, open source initiative, and they boil down to these four core points. So people should be able to study, modify, make and distribute the hardware. While we put that in an official standard, which is then the frame for the technical documentation of the hardware because that's what you share in the end. That's also the first official standard for open source anywhere. So in that standard, we also acknowledge that this technical documentation depends on the life cycle phases that you are trying to cover. So not only introducing the hardware to the world, but also to maintain and operate it and to recycle, refurbish and whatever. So you need more documentation. And we also include the point that the technical documentation depends on the technology embedded in the piece of hardware and also that is used to produce the piece of hardware. You will need a different technical documentation for a 3D printed cup of tea without a tea or when you want to share a machine that produces cheese and slices and packages it. So yeah, that's all nice, but why do we actually need a standard for that? Can't people just produce and document open source hardware? So first of all, the standard defines what you should share. And apparently that's a huge issue. There's very few projects out there which are very good documented, at least for mechanical open source hardware. So you're also, despite from enabling people to make use of your invention, you make technical documentation compatible and comparable to each other. So when we have two awesome machines documented well and you can actually make use of both technical documentations, you can combine both to a better machine if you like. And second point, you can actually find it. There's no use if you just publish a lot of open source modules around the web if no one can actually find them. Then you need to reinvent the reel every time again. 
So the standard defines a common set of metadata so others can actually find your hardware. And as it's an official standard, we can put a trustee and community-based certificate on that. And this is meant to build a bridge between community industry and research institutes. As apparently industry and science needs paper and certificates. And by that process, it doesn't really matter who comes up with the invention first and who uses it in the end. So all can effectively work together. The certification process has also been defined by the standard. Here's how it works. So you start with the documentation release, then you make an application, and then people can peer review your technical documentation just as in science. And when the peer reviews state that this is complete and readable for others in the field, then you will have the certificate, which may look like that. And as the certificate is based on peer reviews, it's updatable and challengeable. So it's not a certificate just for your lifetime. Whenever someone else comes up with a negative peer review and they exceed the positive ones, then it aspires. And this process is moderated by a certification body who can just be anyone who is capable to organize a repository which keeps all these documents open and transparent. So that's how it works in practice. That's the context of the documents, the technical documentation is in the middle. Both standards define how it is made and how it is certified. And there's a third document that I didn't mention yet. There's a guideline that helps you to make this technical documentation in the end. It also helps you in other points like legal issues around open source hardware, patent law issues, and a lot of stuff like that. And on the other side, technical documentation is uploaded somewhere. So people usually upload it on GitHub, Thingiverse, Viki, Fub, whatever they're. Then in the English-speaking area, we have over 80 different platforms. And we developed a search engine that calls these platforms plus Google, plus YouTube, and then you can actually find what's out there. And the aim is to make it filterable by the certificate. So when you search something, you can filter the results for what is actual real open source hardware and what is more DIY stuff. The nice thing about this map, it's all open source, even the standard, which was a lot of lobby work. But you can participate in the standard and give feedback and then doing pull requests and other stuff. And then the National Study-Dissation Institute in Germany needs to look at it and make you that info for the next version of this standard. That was maybe a bit fast, but I wanted to keep it as rough as possible as it's a complex field. And usually, it's better to have a larger Q&A part about that. Feel free to ask me any questions about that, especially the uncomfortable ones are like those. And then we can dig into the details. Thank you. My question is regarding the repository. Is it kind of a product lifecycle management system or is it basically Git where you upload a bunch of, like you have a folder and you upload your CAD files, your calculations, everything, or is it something where you can dynamically pull out the data and say, I have an assembly, I have a bunch of standard parts which are coming from a shared library and you can all mix it up together? This repository is not meant to store the actual documentation, but just all the documents involved in the certification process. 
How people organize the technical documentation, and whether they refer to other standardized parts, is up to them for now. But as you say, it would make a lot more sense if we keep this hardware modular, so that there's a repository of standardized modules and people can look at it, yeah. So there's currently no platform that is capable of resembling an open source product lifecycle management. I mean, there is one in progress — Wikifactory is trying to do that — but it's still a research project, so it's not yet usable. Okay. Thank you. Hello, many thanks for the talk. First of all, it's very interesting that you really got it done, that DIN makes something really open source, even the specification. So congrats for this, especially. The other thing I would like to ask: in open source software you have tons of different license types — for example GPL or BSD or whatever you can think of in open source software. What's the license type when you are talking about open source hardware here? So for open source hardware, there are not that many licenses out there yet. There are three major ones: the CERN OHL, the most common one and the best maintained, the TAPR license and the Solderpad license. And they basically split up into licenses with a copyleft mechanism and without. And that's it so far, so it's a free field. But OSHWA is pretty open to that. They only define the license terms; one can come up with a new license which is conformant to that. Thanks. Hello. I have a question about this slide here. The certification body — who is this? This can be anyone. It could be you, if you're capable of hosting such a repository. I mean, the standard defines how the certification body needs to be organized and what it needs to do. But in the end, yeah, anyone can do that. You just need to state, when you give out the certificate, who you are — who issued that certificate — and then, yeah, people can trust you or not. And there's also a specification for that? Yeah, we have two documents here. I didn't mention that particularly: the standard splits up into the definition for the technical documentation and into the definition for the certification procedure. So here it's described how this procedure works, and the certification body has clear roles and tasks to ensure that it's working. But again, anyone can just declare they're working in compliance with that standard. And that's actually how certification works in industry, too. You can be an institute, someone is reviewing you and says, okay, you're compliant with that standard. The difference here is that we don't have a power concentration. So whenever the certification body becomes corrupt — for example, my organization, Open Source Ecology Germany, wants to found such a certification body; yet there's no money involved, so it's unlikely that we become corrupt in some sense, but when money becomes a thing, there is a risk for that — but as all the documentation and the whole process is open, people can just fork us. So when they find out that we accept peer reviews from our own friends, for example, that we call a few colleagues and then say this is open source, and yeah, whatever — so we have some corruption happening there — then you can just clone the whole thing and open your own certification body. Thanks for the talk. I was wondering about the specifications on how to share your product data, your technical documentation.
So for example, how is one supposed to share 3D data? Are there specific file formats? Essentially I think that the open source world is pretty lacking in that regard, for example, and I wonder if there's some direction given there. There is. Basically, regarding CAD files, we are not yet ready to only use FreeCAD or OpenSCAD, for example — these programs are not ready for industrial use. So what we say in the standard is: you need to share the original file format, but you also need to make the information available in a way that people can look into it without any licenses. So you have your CAD file, and maybe people can look into the CAD file, but you should also share a drawing as a PDF or whatever, so people can look into the geometry and tolerances and so on. So the actual information is publicly available, but the CAD model may only be available if you have a SolidWorks license or stuff like that. We give recommendations, but we cannot enforce it yet, because for most companies that's just unrealistic. Yeah, if there are no more questions yet... Yeah, there is. Sorry, what happens if somebody uses the certificate even though it is not applicable anymore, because it was challenged or something? That's a good question. I get asked a lot what the actual use of the certificate is. So we can talk a lot about how useful this can be for industry, but there's no one applying that yet. Who is applying it is science. In science, in a research project, you will usually construct some kind of prototype to test something. And as it's financed in advance, you don't need a business model behind that. You can just build your prototype, but it's usually not documented. So it stays in the laboratory and nobody will ever know about it. And plus, people usually cannot reproduce your test results or all the data that you produce with this prototype or testing environment, because nobody knows how you built the machine. But if there's a standard and if there's a certificate, it's clear what should be documented, and it can be part of your research project. So people can put this into the application for the research project, it becomes part of it, they get money and time for the documentation, and so we get all this high-tech research stuff openly documented. And we also win a revenue stream, a money flow, for this whole open source hardware field. That's the first application case. We are also working together with a bunch of medium-sized enterprises which are interested in that. They are looking more for a label or something. But they can also apply for public money, and whenever you apply for public money, your results should be publicly available — so, certificate again. It all works with paper. And it's a huge authority, as it's DIN; these three letters mean a lot. Thanks for your presentation. I'm Drew, I'm one of the board members of the Open Source Hardware Association. I was told about you by Libre Solar a few days ago. Over here. Yes. And Matthias is also here, he's also one of the board members. So I think this is really exciting what you're doing. And I'd like to talk more about how — I wasn't familiar with DIN before this, but there are other standards organizations like ANSI in the US and ISO, and I'm thinking maybe this would be applicable to those as well, potentially. Have you looked at other countries, how this process could work there? Yes. They haven't been so open to that yet.
The Austrian national standardization institute was kind of interested, but they didn't support us so well. And DIN has much more authority in the international sector. So this DIN SPEC is directly transformable into an international standard, but it's under CC BY-SA. So whoever wants to publish this standard — could be ANSI, could be ISO, whoever — they need to publish it under CC BY-SA. So something like ISO is similar to DIN then, in terms of a standards body — is ISO kind of similar to what DIN does? Yes, it's actually the same building. Inside the DIN building you have the first offices, the national standardization, and if you walk down the corridor there's the international sector. So you just pass on the paper. Thank you. Any other questions? Well, what I could do meanwhile is show you the Open Hardware Observatory which I mentioned. So... Should I ask my question now, or...? Sure, you can. If not, I'll ask a question. Already in industry, documentation is quite a problem because people are generally too lazy to do it. Do you think that in open source hardware it will find acceptance in a lot of cases? Depends on what you want to do. In the hardware sector, support is a huge problem. A few weeks ago I read an article about a garden robot, from a relatively small company that wanted to provide ten years of support for their products while coming out with a new version of their product every year. That becomes a huge cost for the company, and they simply could not handle such a load of work for the support. But what they did in the end was open sourcing the firmware and some parts of the design, so the community can take care of that. So people have the effective ability to maintain their product and also to modify it and ask for help, without it costing the company anything. So some are starting with that. Another example is Sono Motors. I don't know if you've heard of them — they're developing an electric car in Munich and they want this car to be fully maintainable by anyone. So they also want to develop instructions on how to maintain this car. They could do that themselves and invest a lot of money in the actual writing of the manual. But they could also just write the first part and leave the rest to the community, so the people that repair the car write the manual themselves. By that they get a unique thing out there in the market — there's no car that you can maintain by yourself in all the details, at least in the electric sector — and they also get free feedback. So they see the piece that always breaks when they do this and that, or that is hard to repair, so all these development cycles that cost a lot of money come for free then; you're outsourcing things. So I assume there will be few companies that open source just everything, but a few parts of it — I guess yes. And five years ago it was kind of an impossible thing to pitch to companies to open source their development. But right now — I don't know what changed — everyone I'm talking to is like, oh, let's make a pilot about that, how can I collaborate with you? So there's a mind shift, and I didn't question that too deeply, as I didn't want these people to change their mind. Same for DIN, because what we're doing is undermining DIN's business model, their core business model, and I don't know if all of them understood that, but they're super motivated about it. They're actually looking for an open source software project with standardization motivations that they could support.
So if there's any out there, come to me and I can link you — but I'm talking too much. Any more questions? If not, I could show you this. So that's the web page. You can just type in any kind of technology, any buzzword out there, anything you want to do. Take "wind turbine", and then it shows you what's out there. And then you click on it and you come to the actual design files. So there's no need to invent anything from scratch. For almost all technologies there's always some dude out there who did it and made at least a video about it. Is my time up yet? Okay. So yeah, thanks for listening.
|
Compared to software, the open source approach is relatively new to most actors in the field of (mechanical) hardware. Plus Open Source Hardware faces some special issues. A yet missing definition of its "source code" is one of them (+ patent law, liability, engineers that do not know how to work with git, costly prototyping…). DIN SPEC 3105 will be/is the first official standard for Open Source Hardware and also the first official standard ever published under a free license (CC-BY-SA 4.0; that was a lot of lobby work ;) ). It defines the technology-specific "source" of Open Source Hardware and aims to build a bridge between research institutes, public authority, industry and the worldwide open source community. In this talk I won't explain why Open Source (Hardware) is great. I assume you all know that (if not, still feel free to ask me in the Q&A part or after the talk). I'll describe what the standard is for, how it works and why it's great _for_ Open Source Hardware.
|
10.5446/52700 (DOI)
|
Hello, everybody. Thanks all so much for joining this session today. I am blessed to virtually be here. I know this is the last talk of the day, so hang in there. I am Marcella, a QA with more than 11 years of experience, currently working at Cognizant Softvision. I'm an open source fan; I've contributed to a lot of open source projects, so we could talk about community and contributions all day long, but that's not the point for today. My expertise is around web applications, but I've been involved in a lot of projects, so I like to say that I can easily adapt. For today's presentation, I will try to point out how the QA role has changed in the last couple of years, let's say. We see a real evolution in the role of QA, which becomes more strategic within the team and closer to business and product issues, let's say, while still running the tests. Programming skills for creating automated tests and shift-left testing approaches also enter into this evolution package. So quality expectations are increasing day by day. Until a few years ago, if you remember, the QA was the one who just planned and performed manual tests based on documentation and requirements, and that only happened in the testing phase. I like to say that we were just like some monkeys: we were testing, we were clicking here and there, and that was it. We are now releasing products either, let's say, every week or two, and in some cases we are doing this every two hours as part of a continuous delivery process. And the next logical question is: what triggered this evolution? I do have a few answers here — of course, this is my opinion. I think that manual tests focusing on validating requirements, finding bugs, and ensuring that nothing has been broken became more expensive and demotivating. More expensive because of that monkey work I mentioned before: the regression tests that are performed repeatedly for each delivery to ensure that nothing breaks the existing system. This kind of test is usually time consuming, and we all know that time is money, right? And I say that it might be demotivating because doing the same things all the time, always in the same phase, can generate some sort of discomfort. Another thing that I can think of is the next question: what does every business need? I mean, well, it needs innovation, right, with more and more experimentation, which assures a higher return on investment and profitability. And I say that by adopting new tools and testing platforms or methodologies, QA is able to evolve, right, and keep reinventing its strategy. QA must also bring some sort of scalability to its processes, test differently, and bring sustainability that is powered by innovation. And how do we adapt is the question that I'm asking right now. And I do believe that apart from the technical knowledge, which I think is really important, soft skills also play an important role in a QA career, and adaptability is one of those. I'm pretty sure there is no definitive way in which QA teams can assure success. It's obvious that only with constant growth and evolution of processes can QA steadily bring conviction to its approach and extend the same to businesses as they go out and serve their end users. QA is now a mindset. I know this is a big word, maybe a buzzword these days, but I do believe it is the key to the digital transformation that we are all talking about these days.
And more, I dare say it's a culture that your team, and maybe your entire company, should be involved in. Being involved in the early stages of development or design of the product, with activities varying from, let's say, involvement in estimates, technical discussions, or requirements reviews, and so on, QA definitely plays a strategic role. It is a connection between development and operations. In my experience, product teams are often pushed to build and release faster. This is nothing new, I think I already mentioned it before, but sometimes accelerated software delivery comes at the expense of quality. Sacrificing quality is just a short-sighted delivery strategy that almost always ends up causing more damage in the long run. DevOps can speed up development, but what about the QA strategy? And I think that this brings a new flavor to DevOps, known as the QAOps framework. I dare say that the main focus of QA these days is not finding bugs, but preventing them from happening — and if they still do happen, catching them in a controlled time and environment, before they cause a negative impact. So, let's do an exercise and just think about eight or maybe more years back, when there was no mention of DevOps; no one knew what DevOps meant, right? And I highlighted here the definition of DevOps, saying that it is the delivery of application changes at the speed of the business. And QAOps takes the core ideas from continuous testing in DevOps, such as CI/CD, and brings together the teams to work on this pipeline. So yes, QAOps is something similar to DevOps, like I said before, because this framework uses the basic approach and mindset already defined in DevOps. But I say it's more complex than that, and I will try to explain in the following minutes why I think so. This QAOps role is already present in the IT industry, and it will be more and more present and required by companies in the years that will follow. Let's clarify first what CI/CD means; I will just do a quick description here. CI stands for continuous integration and CD stands for continuous delivery / continuous deployment. Continuous integration is a development practice that requires developers to merge their changes back to the main branch as often as possible. Each check-in is then verified by an automated build, allowing teams to detect problems early. Continuous delivery is an extension of continuous integration to make sure that we can release new changes to our customers quickly and in a sustainable way. In theory, with continuous delivery, you can decide to release daily, weekly or whatever suits your business requirements. However, I'd say that if you truly want to get the benefits of continuous delivery, you should deploy to production as early as possible, to make sure that you release small batches that are easy to troubleshoot in case of a problem. We don't like bugs, right? And for sure we don't like bugs in production. Continuous deployment goes one step further than continuous delivery. With this practice, every change that passes all stages of your production pipeline is released to your customers. There is no human intervention here, and only a failed test will prevent a new change from being deployed to production. So, after this CI/CD introduction, going forward, let's see how we can implement the QAOps framework. There are four main ways to approach the implementation of QAOps. First is automation, by adding automated tests in between continuous integration and continuous delivery.
This is one way to approach QAOPS. And then there are three ways here before building an automation framework. QMS study the product in detail to better understand its goals, specifications and functionality, right? And once this analysis has been performed, QA can decide which tests can be automated first, depending on the stage of the product they're working on. The second way is parallel testing. This is used to test multiple parts or components of the application at the same time. The testing process for each component should be independent of the others. You can use different tools that provides parallel testing features or you can use a concept of that to achieve parallel testing where you will write your tests in your favorite programming language so that they run multiple processes at the same time. And of course here parallel testing has hardware dependencies. You will need a high performance system with fast CPUs to implement it, but there is always an alternative, which might be cloud, right? Another way is scalability testing. You will know and I believe that we agree that business growth is good for the company. When you develop a product and people's style start liking it, you have to scale it. Scaling the product includes adding more features and making maybe the existing features better. Last but not least, the integration with DevOps and QA. And here we are talking about a strong collaboration between those teams. You would probably wonder why this QAOps matters and where it can be used. This is especially useful when specific types of testing are needed. Let's say for localization QAOps is almost non-negotiable. QAOps can be extremely useful in regression testing. If you have some previously developed software and you need to quickly release a software enhancement, a patch or a configuration change, QAOps can help in this manner. There are a lot of companies that use QAOps framework. And I want to refer to Facebook for example. I know it's a giant company. I don't work for Facebook. This is something I read for. But Facebook login features enable users to login into millions of apps and websites with their already created Facebook identity and privacy controls. A few years ago, Facebook decided to migrate to Facebook Graph API version 2, ending for login review for all the apps. To ensure that this migration went smoothly, Facebook wanted to test out the new version on the 5,000 larger apps. Unfortunately in the house, they could not do that. I mean, it was basically impossible to do that. But they chose to outsource. And by outsourcing testing, they had all 5,000 apps tested in a month and were able to identify and address critical problems with more than 900 apps. That's why I said that it wouldn't be possible to do that in-house. So QAOps can scale up or down to fit any business size. And next, I want to share with you, let's say one success story where I've been involved. In the project a few years ago, it was a web-based application. And of course, we created an automation framework. We had around 500 tests, if I remember well, written in QCombo. So we've used the BDD approach. If you are wondering why BDD, well, it was because they need this approach for business analysts. And yes, gherkin syntax was the right approach for them at that time. Anyway, we all know that when it comes to BDD, parallel testing is a challenge. So what we did, we used Selenium. For those of you who are not familiar with this tool, Selenium is a Docker-based Selenium grid. 
Maybe you are wondering and thinking that setting up your own Selenium grid might not be hard. But the challenge comes when you start using it to run a lot of tests, and this sometimes causes environment and stability issues, right? And here is where this tool called Selenoid comes in. The interesting part is that we can scale on demand. So we were able to create several Selenium nodes, and the test running time was reduced from eight hours to two hours. This was a really nice achievement for us at that time. Just to draw a brief conclusion here: the QAOps framework is a process, right, that the QA team should be practicing on a daily basis. Of course, not every QA team can do QAOps, but we try to train our QAs to achieve this. And saying this, I do believe that the implementation of the QAOps framework is an infinite process of learning and putting to use the best QA practices throughout the following phases. And I want to highlight those phases here. First is plan. All the initial things start with the plan, right? At this point, we come up with a document known as the test plan. The test plan will include and describe the strategy and the objectives that we want to achieve. The second phase is test development. Our main activity and our biggest challenge as QA engineers is test development. In this phase, we perform most of the activities that were defined in the previous phase of planning. And here our testing activities are divided into, let's say, automation testing, which means the writing and execution of automated test scripts according to the test plan, like I said before, in order to meet the client's requirements. For this, as you know, we are using software tools like, let's say, Selenium with all the programming languages like Java, JavaScript, Python, and so on, plus Robot Framework, Cucumber, TestNG and so on. For performance testing, we can use JMeter, LoadUI or any other tools; for mobile testing, there are plenty of frameworks like Appium; and for API testing, SoapUI or Postman can be the right tools. But of course, there are a lot more. The next phase is automate. In order to meet the criteria of higher efficiency, we need to find a way to automate all the previously created QA-related jobs. Also, this is the first phase that makes the QAOps framework different from the other traditional approaches in software testing. So we are using tools like Jenkins, Azure DevOps, Maven, and so on. The next phase is trigger. Automating the job with the previously mentioned tools is not the only important thing, and I say that triggering the right tests at the right time is also super important. In this phase of the QAOps framework, from a business aspect, it is important to mark and select the right tests that will be part of the execution in the current CI/CD pipeline. The next phase is execute. The objective of this phase is to perform a real-time validation before the code goes to production. In this phase, the automated test scripts from different types of testing are executed. In order to increase the efficiency and, let's say, to reduce the cost, we have combined different ways of testing — for example, execution of automated tests combined with manual testing for verifying the final results in end-to-end testing. And here we have tools like Jenkins and Azure to make this possible from anywhere and from any device. The next phase is release. It's important to keep different versions of software, and it's the same with keeping versions of automated tests.
For this purpose, we are using tools like SVN or Git in order to have better management and control of the automated test scripts. Last but not least is report. This step happens in all previous phases somehow, because, for example, when we open a bug ticket because we have found an issue during our testing, what we do is actually report some anomaly in the system. However, from the client's perspective, it's very, very important to have an overall picture of everything that happened previously. For this purpose, of course, we have tools like Jira to generate and create reports, and they are really, really useful. Here I have another example, let's say another success story that we had a while ago, where by adopting automation we managed to reduce time. Our success story here is that after some time, we reduced the sprint from four weeks to two weeks. And again, this might not be a super impressive success story, but for us at that time, it was a nice achievement. So, I say that by implementing the QAOps framework as one of our main strategies, we continue to seek out challenges. This is really important for improving our knowledge and work in order to bring the best value for our clients on a daily basis. The main purpose of DevOps is to ensure the software is deployable at any point in time with new features in place, and the collaboration is mainly between development and operations. In QAOps, the main purpose is to ensure the quality of the application in terms of its performance, scalability, functionality, security, or even usability, among others. In QAOps, the operations team mainly communicates and collaborates with the QA team to ensure the continuous delivery of products. And I highlighted here why it matters and what the advantages of using the QAOps framework are. And I say that the most important thing is that using the QAOps framework, we are delivering a higher level of quality and reliability. Thank you. That was all. If you do have questions, I'm pleased to answer you. Okay. Hello, everyone. We are live for questions and answers. There were quite a lot of questions and discussions during your presentation, Marcella, and this is exactly why we chose it — we wanted to generate a discussion. So let me start; I will start randomly. Anna is interested whether people were using the QAOps term before, and so far the answer seems to be no. I think the buzzword part is for sure. I think it's for sure. Thank you.
|
Quality expectations are increasing day by day, market demand is changing rapidly and digital technologies are influencing QA practices. How do we adapt? QA plays a strategic role, it is a connection point between development and operations. DevOps can speed up the development, but what can you expect without a robust QA strategy? Continuous development and continuous delivery are impossible without a comprehensive QA strategy. How can we accelerate software delivery without sacrificing quality? Join this presentation and you will find out why QA and Ops have a complementary mindset, how you can implement a QAOps framework and why it matters.
|
10.5446/52703 (DOI)
|
Hi and welcome to my talk. I'm David Georg Reichelt from the University of Leipzig and I'm going to show you how to identify performance changes at code level in CI. Often, if you think about performance, you think about scaling microservices or finding the bottleneck in your database. This time it's about the internal performance of your components, so the performance at code level, since your software will only perform well if your components internally perform well. To measure the performance at code level, I'm developing the tool Peass, which stands for Performance Analysis of Software Systems and which finds a piece of all problems — the problems at code level, not the problems at architecture or deployment level. Currently Peass is a research prototype and a project funded by the German Federal Ministry for Research and Education. We are working together with industry partners to make Peass usable for everyday software development. Now some people may say: hey, I'm a confident developer, I know what my software does at code level and therefore I do not need to measure the performance in every version. While in many cases this might be true — in many cases you know what performance impact your code change has — often you do a lot of changes, like in this commit, and if you make a lot of changes in one commit, it might happen that you overlook a change. Therefore: better safe than sorry. If you measure the performance at code level, you can assure, or at least be more confident, that you do not overlook a performance regression. The second reason why you should measure the performance at code level lies in the nature of modern languages like Java — Peass measures Java. Here are two versions of the same functionality. You might now guess which is the faster version. In fact, it's the right-hand side in most current OpenJDK versions, and this is the case due to internal optimizations of the JVM. So the JVM does internal optimizations which slow down the left-hand side. If you want to know in depth why this happens, have a look at this paper, but all in all, it's hard to know what the best implementation at code level is, and therefore, to have the best performance at code level in your application, you need to measure the performance. Now to measure the performance of our software, we need a specification of the workload to measure and a tool to execute the specification. This can be done using benchmarks, which test a component or part of a component, or load tests, which send requests to a system and thereby measure how the system performs under the load. In an ideal world, we would maintain such benchmark or load test specifications with our repository, but since this is time consuming, in the real world less than a percent of all projects maintain such load test scripts or benchmark specifications, according to a study on open source projects. But according to the same study, about a third of all projects maintain unit tests. Additionally, if unit tests are present, they often have a big coverage of the program's code. Therefore, we make the unit test assumption: we assume that the performance of the unit tests can be used as a proxy for the performance of the software, or at least that the performance of a part of the unit tests can be used as a proxy for the performance of a part of the software.
This might not work in every case, because the performance of the software might be mainly driven by parallel usage, or unit tests may mainly test corner cases or make use of functional utilities which change the performance. But still, in many cases unit tests specify use cases of the API, and therefore they specify calls which are executed in production, and therefore they specify workloads which are executed in production. Therefore, we believe that in many cases they can be used as a proxy for the performance. This will not work in every case — if you want to measure the parallel performance, you'll still need a load test. But measuring the performance of unit tests can be done without an additional workload definition, and it will give you additional information about where your performance at code level changed, so it will be able to identify a piece of all performance changes. Now to measure the performance of our software, we need a measurement process. We get started with a repository, with its versions and the tests in each version. To measure the performance, it's not sufficient to just execute it once and measure the performance once, since the warm-up changes the performance and also other non-deterministic effects change the performance. Therefore we get started with one VM, doing some warm-up measurements, and then we take measurement iterations where we measure the performance of the repetition of the workload. But these optimizations may end up in different steady states, therefore we need to repeat the measurement — repeat the VM starts and the warm-up and measurement iterations — and in many cases you need at least about 30 VMs to reliably identify a performance change. So after this measurement, we get a distribution like this for the current version and for the old version, and then Peass identifies performance changes using a two-sided t-test. This measurement process is very time-consuming, since we need to repeat the workload that often. Therefore, Peass uses a regression test selection: we get started with our tests, and then Peass identifies, using a mixture of static and dynamic code analysis, which classes are called by the tests. If we have a change where we can tell, using the git diff, which method has changed, then we look at which tests call the changed source code, and only these tests need to be measured — in this case only test one. Now Peass takes a repository, executes a regression test selection and then measures the selected tests. Thereby Peass can tell the developer: these tests have changed performance. But since the developer may change many, many methods, and these methods may all be in the call tree of this test, the developer needs to know what the root cause of the performance change is. Therefore, Peass executes a root cause analysis where it measures the call tree — each individual node of the call tree — and tells the developer where a performance change did happen. In this case, we see that foo, do, and fem have performance changes, and since fem has a performance change and its children do not, it's likely that fem is the root cause of the performance change. Yeah, this is the overall approach of Peass, and now I'd like to show you a demonstration of how this currently works in practice. To use Peass, you can use a Jenkins plugin, and you can also use the command line interface, so if you're not using Jenkins, this is also possible — but now, for the demonstration, I'm using the Jenkins plugin.
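Before the demo, here is a rough illustration of the statistical decision described above — not the actual Java implementation inside Peass — comparing the per-VM measurements of two versions with a two-sided t-test. The numbers and the significance level are made up.

# Sketch of the change-detection idea: one averaged measurement per VM start,
# for the old and the new version, compared with a two-sided t-test.
# Illustration only; Peass itself implements this in Java.
from scipy import stats

def performance_changed(old_runs, new_runs, alpha=0.01):
    """old_runs / new_runs: one averaged duration per VM start (made-up unit)."""
    t_value, p_value = stats.ttest_ind(old_runs, new_runs, equal_var=False)
    return p_value < alpha, t_value

old = [12.1, 12.3, 11.9, 12.2, 12.0, 12.4]   # made-up numbers
new = [13.4, 13.1, 13.6, 13.2, 13.5, 13.3]
changed, t = performance_changed(old, new)
print(f"performance change detected: {changed} (t = {t:.2f})")

In practice, the hard part is getting the 30-plus VM repetitions and the warm-up handling right, which is exactly what the tool automates around this test.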
Here I've checked out Apache Commons FileUpload, which is a Java library for file upload. And now, to measure the performance, we need to configure the measurement process: we need to specify how many VMs are executed, how many iterations, how many warm-up iterations, and so on. And since the root cause analysis with the individual node measurement is time consuming, we can also disable it here. After we specify this, we execute the process — this takes some time — and afterwards we get, in the build, a performance measurement overview where we see which tests have performance changes. This is the concrete commit which we've seen in the beginning, where we had those 15 files changed. Here, four test cases contain a performance change, and we see the measurements — the individual histograms of the performance measurements, for example here — and no performance change did take place. Now we can go through the measurements and the performance changes. For example, here the test for file upload has a performance change, which we also clearly see in the histogram. And now, to find out why this happened, we can have a look at the call tree of this test case. Here we see the structure of the call tree — and this is shown for a little bit bigger resolution. We can now go through the call tree and see which method has gotten slower: red means slower, green means faster. And yeah, we can have a look at the individual histograms and also at the source code. Here the stripes indicate the source code change, and we see that in the old version a buffer was introduced, which is not introduced in the newer version anymore. And then we see later on that a loop starts, and then Streams.copy is called with this buffer in the old version, while in the newer version the buffer is not passed anymore, because it does not exist anymore. This is the first thing we can observe; then we can have a look at the children, which in this case is copy. In the copy method, in the old version, the copy method is called with the existing buffer, and in the newer version, the method which takes a buffer is called with a new buffer. So in the end, in the newer version the buffer is created in every loop iteration, and in the old version the buffer is just created once before the loop. Therefore, the old version was faster. In this concrete case, I also committed a patch, so this is fixed. Now, sometimes, because of warm-up or other effects, we do not see the histogram so clearly. Therefore, we can further inspect the node. In this node view, at the beginning, we see the histogram like we've seen it before, also with the t-value and the statistics of this measurement. Now we could select, for example, only three VMs and have a look at the trend in these three VMs after the warm-up. And then we see the trend kind of looks okay, so there are no spikes and no big performance changes. And if there were spikes, we could zoom in and see how the performance measurements change, and also how the statistics change. And thereby we could also identify, for example, that our warm-up is too short in our measurement process. This demonstration has been created with these components: Peass core is a Java project which contains the dependency module, which does the regression test selection, the measurement module, which does the measurement, and the analysis module. And yes, this can be used directly using the command line interface or using Peass-CI, which is the Jenkins plugin.
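As a side note, the pattern behind the regression found in the demo — a buffer that used to be allocated once before the loop and is now allocated on every iteration — is easy to reproduce in a few lines. The following is just an illustrative timing sketch in Python, not the actual Commons FileUpload code.

# Illustrative only: allocating a work buffer once versus on every iteration.
import timeit

N = 10_000

def buffer_outside_loop():
    buf = bytearray(8192)       # allocated once, reused for every chunk
    for _ in range(N):
        buf[0] = 1              # stand-in for the actual copy work

def buffer_inside_loop():
    for _ in range(N):
        buf = bytearray(8192)   # a fresh buffer on every iteration
        buf[0] = 1

print("buffer outside loop:", timeit.timeit(buffer_outside_loop, number=100))
print("buffer inside loop: ", timeit.timeit(buffer_inside_loop, number=100))

The absolute numbers differ between runtimes and machines, which is exactly why the talk argues for measuring instead of guessing.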
Internally, Peass-CI uses KoPeMe for the performance measurement, and for the measurement of the individual nodes, the monitoring framework Kieker is used. All these tools are open source and you can just check them out if you're interested. Now, this basically works — what are our next steps? We are trying to make the measurement more reliable and fast. We try to speed up the measurement by parallel execution: since the unit tests mostly contain sequential workload, it could be possible to execute the measurements in parallel without losing precision. We are also experimenting with isolating the measurements with cgroups. And for the root cause analysis, currently each level of the call tree is measured individually, which takes some time. We are experimenting with different selection modes for the call tree, for example selecting two levels at one time or doing a more intelligent node selection. Then we're trying to optimize the measurement probes to reduce the overhead, and some other methods to make the measurements more reliable and fast. Currently it takes some hours to get performance measurements, and we hope to make this faster. Then, as I said in the beginning, we're also trying to use these tools experimentally for open source projects and for the projects of our industry partners, to find problems in everyday use and, in the end, to make it usable for software developers. If you want to get involved and bring Peass to the world, then try it out and tell us about your experience. And if you want to implement new features or fix bugs, then we are also open for pull requests. That's the end of my presentation. Thanks for your attention and I'm looking forward to the discussion.
|
Performance is a crucial property of software for both closed and open source software. Assuring that performance requirements are met in the CI process using benchmarks or load tests requires heavy manual effort for benchmark and load test specification. Unit tests often cover a big share of the use cases of a software and are maintained anyway. While they have some downsides for measuring the performance, e.g. since they test corner cases or use functional utilities like mocks, they still are a way of measuring realistic use cases with nearly no manual effort. Therefore, we develop the tool Peass (https://github.com/DaGeRe/peass), which transforms unit tests into performance unit tests and measures their performance. The stand-alone tool Peass can be integrated into the CI-process using Peass-CI, which makes it possible to run performance tests with every build in Jenkins. The talk starts by introducing the basic idea of Peass. Then, the steps of the current prototype of Peass are presented: - a regression test selection, which prunes all unit tests that can not have a performance change by comparison of the execution traces of the same unit tests in two software versions and the source code of the called methods, - a measurement method, which repeats VM starts and measurement iterations inside the VM until the performance change can be statistically reliably detected and - a root cause analysis, which identifies the node of the call tree which causes a performance change by measurement of individual nodes. Finally, the talk demonstrates the usage of Peass in a running Jenkins instance.
|
10.5446/52707 (DOI)
|
My name is Vlad Bogolin and today I am going to talk about the new MariaDB Buildbot developed by the MariaDB Foundation. In the first part of the presentation I will give an overview of our new continuous integration framework, which uses Buildbot, and then I will talk about the main challenges that occurred during the setup. Let's first define some keywords that I will use in the presentation. By changes in the repository I will refer to the actual source code changes, commits that are pushed to the MariaDB Server GitHub repository. The Buildbot master refers to the main Buildbot process. In our case this process runs on a dedicated physical machine. Its job is to look for changes in the repository and schedule builds. A build defines the actual tested configuration. It consists of a sequence of steps that defines a particular configuration. One simple example would be to get the source code, then compile the MariaDB Server and finally run all the MariaDB Server's tests. The Buildbot worker refers to the process that handles the builds. It usually runs on a dedicated worker machine and waits for commands from the Buildbot master process. Now let's take a look at an overall schematic of our framework. When it comes to the Buildbot master, we use a multi-master configuration. This means that we have two running master processes: a dedicated master for the user interface and one that deals with looking for changes and scheduling builds. I will give more details behind this decision later. Each time a push is made to the MariaDB Server repository, it is detected by the Buildbot master, which schedules all the builds. Each build defines a different test configuration. We use Docker latent workers, which means that for each build the master starts a Docker container on a remote machine. The container is configured to run the Buildbot worker process on startup. This process can now receive instructions from the master. In this way, by using latent workers, there isn't a Buildbot worker process continuously running on the worker machine. Instead, for each build a separate container is started. Now let's talk in more detail about each component. We use a multi-master setup where we have a dedicated user interface master. This process only queries the Buildbot database and shows the appropriate information in the Buildbot web page. The second master handles all the builds. This means that it looks for changes in the MariaDB repository and schedules the appropriate builds for each detected change. Both processes run on the same machine. We have chosen this setup to ensure that the user interface is responsive independent of the number of running builds; we have experienced some interface delays while using a single master. I will give more details about this later. As for the Buildbot worker, instead of the classical approach where a Buildbot worker is always running on a dedicated physical machine, we use Docker latent workers. In this way, the actual Buildbot worker process runs inside a Docker container. So the Buildbot master, which has remote access to various Docker instances running on different machines, builds a Docker image which is specified by a Dockerfile. Then a Docker container is started. This process is repeated for each build.
When the build process is finished, the Docker container is stopped. In this way the Buildbot worker processes are only running when they are needed. Now let's discuss what the advantages of such an approach are. Firstly, we can define the whole test environment in a Dockerfile. This makes it very easy to view, deploy or recreate a particular build environment on any machine. This can help the debugging process a lot, especially when tests fail on a particular environment. By using Dockerfiles, we were able to easily create around 50 different environments for different platforms and operating systems. Secondly, by using this approach, it is very easy to add new physical machines to the infrastructure. Only Docker needs to be installed and made remotely accessible; no other configuration on a worker machine is needed. Moreover, since each build starts a new container, you get the immediate advantage of always having a clean environment. Now let's talk about the Dockerfiles. Each Dockerfile can be split into four parts. First, we select the operating system. This simply involves pulling the correct Docker image from Docker Hub. Next, we install the MariaDB Server build dependencies. This is usually the most time-consuming part, since there can always be various repository problems or missing packages. This may require a bit of web search in order to solve all the issues. Thirdly, we need to install the Buildbot worker Python package. Finally, we simply run the Buildbot worker process to ensure the container connects to the Buildbot master and is ready to start the build. In this way, we can easily define multiple build environments for different platforms and operating systems. You can see here the list of currently supported ones. By also including all the MariaDB Server configurations — which include compile options, enabling or disabling particular test suites and others — we obtain around 100 different configurations that we are currently testing. When it comes to the operating systems, please note that it is possible to also run enterprise versions in Docker containers; however, an additional subscription management step is needed, which varies from one distribution to another. Also, I would like to highlight that it is possible to run Windows inside the container — the only limitation is that the host also needs to run Windows. Moreover, we are also running ecosystem tests, where we run, for example, PHP tests against the latest built MariaDB Server. We are in the process of adding more ecosystem tests, but you can see here the list of currently supported ones. Now, let's talk about the whole MariaDB Buildbot workflow. In the first step, when a new change is detected in the MariaDB Server repository, a source tarball creation build is triggered. When this build runs, the code is cloned and the source tarball is created. This tarball will be used by all subsequent builds. Also, now it is time to trigger the bintar builds. Step 3 involves creating bintars. The process begins by fetching the source tarball, compiling the code and running the tests. In the end, the bintar is saved and now we can trigger package creation builds and ecosystem builds. In the fourth step, we have two types of builds that may run in parallel: package creation — where we fetch the source, create the packages, save them and trigger installation builds — and ecosystem tests.
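To make this chaining of workflow steps a bit more concrete, here is a minimal sketch of how one build can trigger the next one in a Buildbot master.cfg using a Triggerable scheduler. The builder, scheduler names and commands are made up for illustration and are not the actual MariaDB configuration.

# Minimal sketch (not the real buildbot.mariadb.org configuration):
# the tarball build finishes its own steps and then hands over to the
# bintar builders through a Triggerable scheduler.
from buildbot.plugins import schedulers, steps, util

bintar_scheduler = schedulers.Triggerable(
    name="trigger-bintar",                 # assumed scheduler name
    builderNames=["bintar-debian10"],      # assumed builder name
)

tarball_factory = util.BuildFactory()
tarball_factory.addStep(steps.ShellCommand(
    name="create source tarball",
    command=["bash", "-c", "make dist"],   # placeholder command
))
tarball_factory.addStep(steps.Trigger(
    schedulerNames=["trigger-bintar"],     # hand over to the next stage
    waitForFinish=False,                   # let the stages run asynchronously
))

The same mechanism applies one level down, where a bintar build triggers the package-creation and ecosystem builds described next.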
For the ecosystem tests, we fetch the latest version of each particular tested framework, configure it to use the current MariaDB version and run its test suite. In the fifth step, we test to see if the previously created packages can be successfully installed. If the installation is successful, we trigger upgrade tests, where we test to see if the latest released version of MariaDB Server can be successfully upgraded to the current development version. Now let's talk about some of the challenges that we encountered. One of the first issues was a weird case of file mix-up. More exactly, we had missing files or the wrong version of a file during compilation, even though the source tarball was correct. To make matters even worse, this only happened from time to time. After various attempts at debugging, we found out that the issue was caused by a Buildbot misconfiguration. The latent workers have a build_wait_timeout flag, which defines the amount of time a Buildbot worker remains active, waiting for new jobs, after a build is finished. So, instead of stopping the container when the build finishes, it remains active for build_wait_timeout minutes, waiting for potential new builds. However, as you may have guessed by now, if this happens, then the second job will not start with a clean environment. So the solution to this issue is to either set the flag so that the container is always stopped after each build, or to ensure a proper manual cleanup. Another issue, maybe the trickiest one in terms of debugging, was caused by having the MySQL test run process killed by SIGPIPE. To give a bit more context: in order to run the MariaDB tests, MySQL test run is used. This is a Perl process that runs the MariaDB tests, and it is a required step of almost all our builds. To make matters worse, the issue was not reproducible on consecutive runs of the same build. In the end, after a lot of debugging, we found a Buildbot worker-side exception while debugging another problem related to some UTF-8 character parsing error. So, the issue was caused by another Buildbot misconfiguration. After specifying UTF-8 as the encoding for both the master and worker processes, the issue was solved. While we are not entirely sure why this translated into sending a SIGPIPE to MySQL test run, it solved the issue and it hasn't reproduced since deploying the fix. One of the most requested features from the developers was the ability to see the last X runs of a given branch in the grid view. However, the default behavior of the grid view fetches the last builds independent of the branch and then allows branch filtering only for those builds. Even though X is configurable, this means that if for some branch there were no pushes lately, but there were many pushes to other branches, you have no guarantee that a fixed number of builds will be shown for the desired branch. In order to solve this, we have rewritten most parts of the grid view plugin and made some changes to the data API to ensure that the required data is properly fetched. Another issue was the fact that the interface was starting to be unresponsive, especially after we increased the number of builds. Depending on the number of running builds, most of the interface pages were quite slow. In order to solve this issue, we switched to a multi-master configuration. Now we have a dedicated master process that only handles interface requests. In this way, the response time is constant, independent of the number of running builds.
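For readers who want to see what the fix for the clean-environment problem looks like in configuration terms, here is a minimal sketch of a Docker latent worker declaration in a Buildbot master.cfg with build_wait_timeout set to zero, so the container is stopped after every build. The host, image and credentials are placeholders, not the actual MariaDB setup.

# Minimal sketch of a Docker latent worker (placeholder names and hosts):
# build_wait_timeout=0 stops the container right after each build, so the
# next build always starts from a clean environment.
from buildbot.plugins import worker

docker_worker = worker.DockerLatentWorker(
    "debian10-worker",                     # assumed worker name
    "worker-password",                     # assumed password
    docker_host="tcp://worker-host.example.com:2375",  # assumed Docker daemon
    image="mariadb-ci/debian10:latest",    # assumed image built from a Dockerfile
    followStartupLogs=True,
    build_wait_timeout=0,                  # do not reuse the container between builds
)

Setting the timeout to a positive number keeps the container alive between builds, which is exactly the reuse that caused the file mix-up described above.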
We started noticing that from time to time the build time for multiple runs of the same build varies drastically. We have seen differences of even one hour for a build that normally finishes in 20 minutes. We found out that even though the actual step has finished, the master keeps communicating with the worker to get all the build logs. In our case, the log size, especially for MySQL test run, can be quite large — 250k plus lines of logs. When multiple builds are running, this can translate into quite large quantities of logs in a short amount of time. Since Buildbot can only handle around 10k lines of logs per second, we had to limit the amount of logs. Since the logs are split into several files, we keep only the test results as a Buildbot log and then transfer the other log files manually. We have also customized the build step to show a link to the file location, as can be seen in this slide. Lastly, we found out that in some cases there is a huge file transfer time between master and worker. Some of the builds create packages — debs or RPMs or bintars — which are aggregated centrally on the master machine. So they need to be transferred from the worker to the master. After doing some research, we found out that it is not recommended to use the Buildbot data API to transfer large files. So we switched to SSHFS and use remote volumes to do the file transfer. This change has provided a huge decrease in transfer time. This concludes my talk. These are some of the challenges that we encountered, but the complete list is way larger. So, if you have any questions, do not hesitate to ask me. Thank you very much. Thank you, Vlad, for this nice talk. We have three minutes for questions. You mentioned the ecosystem tests, for example. How did you select which parts of the ecosystem you test? Well, hi, thanks for having me. For the ecosystem part, we are in an ongoing process of selecting more and more systems. So it is not a definite list. We started with the ones we had more knowledge of and more connections with, but it is not limited to any type of ecosystem, and we are planning on adding more and more. Also, from what you told us about all the problems, it looked like it was kind of an interesting journey. How long did it take until you were at a state where you said, okay, it's kind of... The project started before I joined, so it was quite a long journey. I think the beginning of setting up the new Buildbot was around two years ago or something like that. And is it just that you... When did you finish it? Was it already a long time ago, or was it just, let's say, in the last few months? It's almost finished, not 100%, but we're getting there, 90-something percent. Okay, good to hear — it's a really interesting project.
|
Recently, the MariaDB Foundation has been developing a new continuous integration framework for the MariaDB Server. The goal of buildbot.mariadb.org is to ensure that each change is properly tested on all supported platforms and operating systems. Our new CI uses almost exclusively latent workers, more exactly Docker latent workers. In this talk, I will present an overview of the CI infrastructure and the advantages of using latent workers, and talk about the challenges that we encountered along the way. This includes a broad range of aspects, ranging from misconfigurations to Buildbot code changes to ensure that everything runs smoothly. In order to ensure that MariaDB runs smoothly, it needs to be tested on multiple platforms and configurations. In order to obtain a gain both in terms of speed and flexibility, we have decided to use Docker latent workers. In this way, each different environment is defined in a separate Dockerfile. Besides having a clean and concise environment definition, using latent workers has the advantage of requiring minimal configuration on the worker machines. This makes the process of adding new hardware very easy, mainly involving installing Docker and configuring Docker remote access. Now that we have the build environments defined, we can start testing. The process starts by cloning the MariaDB Server repo and creating a source tarball. Then, all other configurations are triggered, all using the same source tarball. While it seemed quite straightforward, the whole process turned out to be more challenging than we expected. This includes a quite long list of issues ranging from file mixups, weird sporadic failures where the main testing process was killed, and grid view customizations to multi-master configuration and master-worker file transfer problems. In this talk, I will talk in more detail about these issues and tell you more about our experience and how we managed to overcome them.
|
10.5446/52730 (DOI)
|
Αβά,"Βειον-μία σας. Ζε hai あνα Qurikop, πραγματικά σε Ιά pled, Που όλα". Εσύ, πήρχα. Είμαι ο Χαράλανβος και μπορώ να Awak χρησιμοποιηθούνται με το VXL. Τώρα, τι συμβαίνει, είναι ότι πολλές εξοπλικές για τη μομπα-κομπιουτιντή για τη δημοσύνη, είναι σύφτητες σε ένας δημοσύνης μοδελμός για να είναι σύφτητες σε ένας δημοσύνης μοδελμός και να είναι εξεκουσιασμένη σε διάφορες αρχιτεξές. Αλλά τι συμβαίνει με εξοπλικές που ανοίγουν με αγγελματικά εξοπλικές, για παράδειγμα μοσινίαστικόν ή εξοπλικές, τα Milkoon δεν θα earnest με έναν B Thank You και θα χρυματοσυ downtime στο δεν-δύρι함 Korean vs Improvisation και αυτό τοühlאשμα αντικορ distraction. JB Colombia δυο αξιωρήτων. Και υπάρχει και η σοσοδοσία της API που δημιουργείται οι εξορήτες, και την αξιωρήτων. Αυτό σκοτήρει κάποια κοινωνία για την εξορήτηση και αν είναι μια πολύ καλή σοσοδοσία να χρησιμοποιήσει ένα τεχνολογικό εξορήτησης, που δεν είναι πολύ καλύτερη σε δυο χρησιμοποιήσεις. Και ευρώ, υπάρχει και η παραδοσία, ένα πολύ καλύτερο γνωρίζειο, αλλά όχι και η χρησιμοποιήτηση να προσπαθεί το δυο αξιωρήτων. Υπάρχει και η VXL, η VXL είναι ένας εξορήτητος, που μπορεί να υποχωρήσει δυο αξιωρήτων, να προσπαθεί για σημαντικές δυο αξιωρήτων. Η προγραμμυλική εξορήτηση είναι η T, να χρησιμοποιήσει η VXL να είναι εύκολη και η σύμβολη. Η προσπαθήτηση, η VXL δεν θα υποχωρήσει ένας εξορήτητος. Η πρωταμυλική, η ίδια εξορήτηση να είναι εύκολη σε δυο τεχνικές πλατφόρεις, πανταστά ή πριτοσύντας. Και, ευκοληθείς, θα θέλουμε να έχουμε μια προσπαθήτηση. Έτσι, θα έχουμε ένα αυτοκρότημα της δυο αξιωρήτων. Η πρωταμυλική εξορήτηση της VXL είναι η δυο αξιωρήτηση της δυο τεχνικές πλατφόρεις, με τις δυο αξιωρήτων που χρησιμοποιήσει VXL. Και μπορούμε να πούμε να έχει δυο λογικές πλατφόρεις. Και το μ encaπquirα boşάζοντας σιτ來到 το front end και το back end. Το front end δοξ counties δησοχήρητον ασάγιστον Porterange για seine ακient lethal εργαάδια και�� που Έβ μετά τις άστασεις καταλαμ που συμβουλείτε波 και στο μ πιuratλι για χρησιμοποιήσεις. Και μπορεί να είναι χρησιμοποιημένος για να χρησιμοποιήσεις σε κάθε τρανσπροκλαίωση σε κάθε τρανσπροκλαίωση. Τι έρχεται όταν χρησιμοποιήσουμε το VXL σε έναν βρισσόμα. Θα χρησιμοποιήσουμε ένα παράδειγμα για αυτό. Θα χρησιμοποιήσουμε ένας εμπλικασιακότητας να δούμε το εξεκουσία στο βιντεί. Σε πρώτος, η εμπλικασία χρησιμοποιεί την εμπλικασιακότητα από το βιξ-αξ-ραν-τύμ-σύστημα, το σχέδιο της χρησιμοποιής. Το VXL-RT δεδομιούν ότι η μόνη που υπάρχει στο βιντεί είναι το βιντεί. Οι εξεκουσία είναι το βιντεί, ο οποίος θα σημανήσει το βιντεί, ο οποίος θα σημανήσει το βιντεί στο βιντεί, και το βιντεί το VXL-RT είναι πρόσφυγος να αφήρξει το βιντεί στον διεδομιού. Αυτό είναι αυτοί. Αυτό σημανίζει για το VXL, αλλά βέβαια, ξέρουμε ότι το VXL δεν είναι μια καλή σχέση για εξεκουσία. Και υπάρχουν, εξεκουσία που μπορεί να είναι χρησιμοποιημένοι για αυτό το βιντεί, αλλά το χρησιμοποιημένοι μπορεί να υπάρχει κάποια σχέση, ειδικά για την εξεκουσία σε κάποιες εξεκουσίες. Και σε αυτό το κοντεξί, ειδικά, νομίζουμε ότι οι γυναίκες μπορούν να βοηθούν πολύ, και οτι η reason είναι ότι οι γυναίκες είναι ένα καλό πόλη για πολλές αρχιτεξές, γιατί οι γυναίκες μπορούν να έχουν σκέψεις πιο σκέψεις, μπορούν να καταγνήσουν πολύ καλή στιγμή, έχουν ένα μικρό μορφόρφι, έχουν ένα πολύ μικρό κόδο, οπότε είναι δύσκολο να αφήθουν σε αυτό, και έχουν also benefitted από την εξεκουσία που οι υπροβάσεις προβάζουν. Αλλά, όχι, δεν έχουμε αφήθει το πρόβλημα για τα εξεκουσία για τα εξεκουσία που έχουν σκέψεις πιο σκέψεις. 
And that is exactly what we want to do: bring hardware acceleration to unikernels. In our view, vAccel is a good fit for this purpose, because it gives us portability: the same application can run in a unikernel, in a container, in a virtual machine, or on bare metal. It is easy to port, since not much code is needed on the guest side, and we will see how we ported it to two unikernel frameworks; it is also easy to use, and the API is small. So let's see what running vAccel in a unikernel requires. On the guest side, vAccel has two components: the runtime system and the virtio frontend driver. The runtime system is plain library code, so it is easy to port. The virtio driver, on the other hand, needs more work, and that is what we will look at next. We did the port for two unikernel frameworks, Unikraft and rumprun, and we hope to see more unikernel frameworks supported. Let's see how the frontend driver is structured. We can think of it as three parts: the character-device interface, which is the communication point between the vAccel runtime and the driver; the virtio-accel driver itself, which is responsible for talking to the virtqueues, passing requests to the backend and handling the responses; and the virtio transport underneath. The reason we chose this kind of design is that we can be more flexible in the future in case we want to use a different transport technology, vsock for example. So, as said before, we ported vAccel to two unikernel frameworks, Unikraft and rumprun. It was not a very difficult task. Unikraft has a virtio infrastructure quite similar to Linux, so that port was not very hard. Rumprun, on the other hand, was a bit trickier, but still, adding the new character device and the new virtio driver was not too involved. The tricky point was the interaction with virtio: as we know, rumprun is based on NetBSD, and NetBSD uses the bus_dma abstraction to hand data to virtio's backend, so we had to create DMA maps for the data we send to the backend. That is the frontend side of vAccel; the other part is how the requests are actually executed on the host. Before the demo, a few prerequisites: on the host we need vAccelRT installed, so that the operations can be executed on the actual hardware, and we need a QEMU version that supports the virtio-accel backend. On the unikernel side, we use the ported frontend driver together with the vAccel runtime linked into the application, and that is all we need to run vAccel in a unikernel.
Using vAccel from the application side is not difficult either: the application is built against the vAccel runtime and calls its API, and for the unikernel image we only have to enable the corresponding build options. Let's do a short demo of how we build and run vAccel on the two unikernels and see how it works. We are on a machine with an NVIDIA Tesla T4 GPU, inside the jetson-inference environment provided by NVIDIA. Here we have the rumprun and Unikraft trees, some small scripts that help us launch the images, and the jetson-inference backend, which uses GoogleNet to perform the classification. Let's start with rumprun. To build the image we enable a target called HW_VIRTIO_ACCEL; as you can see here, this is the option that pulls in the virtio-accel frontend for the unikernel, while everything else is the standard virtio support, such as virtio-net and virtio-block. So we have built the image-classification binary, classify.bin, and we are ready to run it. Here is the script we use to launch the unikernel with QEMU; as you can see, we attach the virtio-accel device to the virtual machine so that the guest can reach the backend. We will also open a window on the backend side, because the inference output is quite verbose, and another window for the guest that runs the classification. And here is the virtio-accel backend for QEMU: it has two options, a crypto backend and the generic backend. We will use the generic one and not cover crypto in this demo. As for the input, rumprun does not have a separate filesystem here; the image we classify is bundled with the unikernel, and as you can see we will use dog_0. Let's see what happens. This is the output we get from the generic backend, and this is the guest booting. As you can see, vAccel initialized successfully, and the program starts the inference. And here we have the result of the classification: it is an Alaskan malamute — I am not sure I pronounce that correctly, but it is something like a husky. The timing numbers do not mean much, because rumprun only has a coarse-grained timer for now. And yes, that is the whole flow.
So that is how it is used on rumprun. Now let's look at Unikraft. You can see the setup here: we start the application, the same application, and watch the backend output in another window. Not many extra options are needed on the Unikraft side; we did enable the 9p virtio driver, because this time we read the input from a filesystem: we simply share the data directory that contains dog_0 with the guest over 9p. The command line is very similar, pointing at dog_0.jpg. Let's see what happens. The guest boots, a new device is found, the new virtio-accel device shows up, and here is the classification result on Unikraft: yes, it says the same thing. Now let's see what happens with a different image. We take another picture from Wikipedia, run again, and the classification comes back correctly as well. That concludes the demo. Thank you for your time, and I would be happy to answer any questions you may have. Thank you. Thank you for the nice talk.
|
Applications demand fast and secure execution in diverse environments (Cloud data centers, Edge Nodes, mobile platforms etc.). Execution efficiency has been facilitated by the introduction of specialized compute elements (e.g. GPUs), in order to accelerate specific parts of tasks/workloads (such as image processing). At the same time, to abstract deployment and management burdens, service providers use virtualization and container technologies. Eliminating the software overheads of these abstractions, especially in the context of hardware off-load/acceleration, is a challenge and requires a number of factors to be taken into consideration: (a) portability, (b) performance, and (c) security. In this talk, we attack the first two factors and examine the option of unikernels and their surrounding ecosystem (application porting frameworks, orchestration frameworks, lightweight virtualization backends) in the context of hardware acceleration. We present our efforts in porting a novel hardware acceleration framework, vAccel, to the rumprun unikernel, digging into the internals of semantic abstraction for ML inference, as well as its implementation on rumprun and QEMU/KVM. We describe the frontend/backend driver port, the runtime needed to support the actual execution on the hardware and showcase our results in a brief demo of two unikernel frameworks performing ML inference on images.
|
10.5446/52731 (DOI)
|
Good morning. My name is Renzo Davoli; my affiliation is the University of Bologna. This is joint work with the Virtual Square team, which is an international group working on virtualization, and my co-author is Michael Goldweber, who is at Xavier University in Cincinnati, Ohio, USA. It's a pleasure to be here at FOSDEM again, also in this online format, a new format due to the pandemic. This talk is about libioth, a library providing, I would say, a definitive API for the Internet of Threads. The attempt of this talk is to bring some concepts closer together, to cross-pollinate knowledge coming from different groups, from different experiences, like those working on virtualization, namespaces, microkernels, networking, and so on. The concept of the Internet of Threads is not a novel one. I introduced the term Internet of Threads back in 2012, here at FOSDEM, and I used this kind of metaphor: a telephone line, if it's a fixed telephone line, is related to a place, to a room. So it is common to dial a number and ask, is Mario at home? Because we have not dialed Mario's number, but Mario's house number. Instead, nowadays, using portable mobile phones, we can reach people directly, because it's common to have phone sets associated one to one with people, so one person, one portable phone, one number. So it's easier to get in touch with a specific person. Relating this concept to the Internet, we have to consider what the endpoint of an Internet connection is, so what is addressed by an IP address. In the original design of the Internet, the endpoints were hardware controllers, so each hardware controller had its own IP address. Now maybe this constraint has become a bit lighter, because we have namespaces and virtual machines, but we are still addressing virtual interfaces, the counterpart of the ancient hosts. It would be more consistent, and more efficient, to have IP addresses assigned to processes and threads, because the idea is that, as with portable phones where we wanted to talk to a specific person, now we want to reach a specific service. So the idea is that if you have an IP address assigned to a process, it's easier to get in touch with that specific service. Actually, as you can see in the previous slide, there is a difference in the idea of IP address, because IPv4 provides a very limited number of addresses, so it's unrealistic to think of giving IP addresses to each process or to each thread. But talking about IPv6 addresses, the range, the address space is so large that it is a viable solution. So we can talk on one side about the Internet of Threads, IoTh; on the other side there is a funny acronym, the Internet of Legacy Devices, IoLD, which gives the idea of what is old and what is new. Let me tell you a story about my childhood, about a juvenile stress I had. It happened that I had two different building sets when I was a child, and they were incompatible. And it was stressful, because it was hard to build constructions including building blocks from both sets. So maybe from this experience, now that I am a computer scientist, I want to have everything compatible, so that each block may be integrated with the others. So the idea is: partial virtual machines, the Internet of Threads, microkernels are building blocks, like the ones of the different building sets. The idea of this talk is to provide means to make them compatible and integrable. What do we have in common? For sure, we are all against monolithic implementations; we want to create blocks.
The common goal is to create independent code implementing services, so the idea is the idea of these blocks. The basic block we are talking about in this talk is a TCP/IP stack. It's a block; in the OSI model it is composed of two layers, network and transport. So it can be seen as a block which has two interfaces to the outside world: one on the upper side, the API to the application layer, and one to the data link layer. The idea of this talk was to see if there is a well-defined set of APIs to the upper layer and to the lower layer that makes this block a very useful block, exploitable in many situations. So what does this have to do with microkernels? This block could be used inside user processes, so we can piggyback the TCP/IP stack into the user process, and then we can deal with a server, the data-link network server, in the sense of a microkernel server; so we can have the TCP/IP stack as a library inside the user process. This is one way in which we can use this element in the idea of microkernels, but we can also think of a TCP/IP process embedding this TCP/IP stack. And what is new is that this concept behind libioth makes this standard, so that different implementations can be loaded inside this block, which in this way becomes a general-purpose TCP/IP stack block. So, given that the problem is to provide an API, or rather two APIs, in such a way that the API is complete, usable and minimal, how can we design this API to the application layer? Just to resume what the goals are: it must be complete, so it must provide all the operations currently available, and I add that these services should be provided in an already known manner, so that existing code can be easily ported to the new environment. Usable, so the syntax and function signatures are consistent with well-known standards. And minimal, so we need to avoid duplicate operations; we need to have this interface as simple as possible. Given this API, what are the requirements? On one side we have to provide functions for configuration, more specifically to create or delete stack instances and to configure parameters, so we need entries to create a stack, to provide IP addresses, parameters for the interfaces and routing definitions before using the stack for communicating. At the end, or if you want to change the stack you use, we need a call to delete the stack. When the stack is configured, the stack can be used for communicating, so we have to open and close communication endpoints, send and receive packets, set options and so on. Let us have a closer look at the libioth design. As is a very common pattern, we decided to use a structure, a C structure, an opaque structure. At the user level it is just a pointer, like FILE in all capital letters in the standard C library when we use fopen. We use this as the identifier of a stack, so the call to create a stack is ioth_newstack. The first parameter is the implementation, so the actual stack implementation will be loaded as a plugin, so we can use this API for several implementations.
The second parameter is the virtual network locator, because for the lower API, the API to the data link layer, we use VDE, Virtual Distributed Ethernet. Given that the most common case is to have one stack with one interface, provided we are talking about processes using direct networking, the simplest call is ioth_newstack; but in some cases you may need several interfaces, and for those there are the calls ioth_newstackl and ioth_newstackv, whose syntax is similar to execl and execv, and which provide the programmer with means to create stacks with several interfaces. The counterpart of ioth_newstack is ioth_delstack. Then, for communication, there is a well-known standard. The difference between libioth and standard networking is that in monolithic kernels it is common to have the networking provided by the kernel, so the configuration is carried out by commands; there is no API for configuration, while it is common to have the communication API. Berkeley sockets is a very common API for communication with TCP/IP stacks, so the only difference between libioth and Berkeley sockets is msocket, because when one creates a communication endpoint using socket, one is implicitly talking about the kernel stack. Now we have several stacks, so socket is not enough; we need msocket, with an extra leading argument, which is the struct ioth pointer identifying the stack. Apart from this, all the other Berkeley socket entries are implemented keeping the same signature and functionality, so the semantics of all these entries is the same. What is missing? We have talked about creating and deleting stacks and how to communicate; now the problem is how to configure the stack, I mean add IP addresses, bring an interface up and down, set the routes. That's the point: instead of creating another part of the API for configuration, we decided to exploit the definition of RFC 3549, using netlink sockets. This is very common, and it is the way the Linux kernel configures the physical interfaces, so the most natural way to implement an API for network configuration is to add nothing: given that there is msocket, which can be used to create netlink sockets, all the configuration can be carried out using this kind of socket. The RFC defines a set of packet formats that provide all the services for configuring IP addresses, routes and interfaces, so we decided to go this way. The problem is that there are commands providing this kind of service, client commands like ip, ip route, ip address, or the ancient ifconfig, but there was no library providing these services as programming resources, as functions. We created these libraries, so the interface does not provide specific entries, because we are using netlink, but we provide libraries that can be used to configure the stack via netlink. Let us examine together a simple example to make clear what I am talking about. I think the simplest example of a program using networking via Berkeley sockets is a program which opens a socket, a datagram socket, UDP, and sends a message to a destination address. This is how the program appears using just plain Berkeley sockets and the kernel implementation of the stack. In this slide I am showing, highlighted in yellow, the changes I have implemented to obtain an Internet of Threads implementation of the same program. Given that we are not using the kernel stack, we have to create the
stack, so there is ioth_newstack; the implementation is vdestack, and it is connected to the network data link layer identified by the virtual network locator, vde:// followed by the path, so /tmp/hub. Prior to communicating we have to provide the stack with a suitable configuration: we have to set an IP address on the only interface, and we have to bring the interface up. ioth_if_nametoindex is the counterpart of if_nametoindex; it is not an entry of the API but an entry provided by a library, nlinline, which is able to translate that request into netlink messages, and the same holds for adding the IP address and setting the link up or down. Once the stack has been configured we can do the actual communication, so we can use msocket to create the socket, sendto to send a message, and close. Apart from the prefix, and msocket instead of socket, the actual problem-solving implementation of the program does not change. So the final part of this example shows how applications can be ported to the Internet of Threads just by changing the prefix of the communication functions and using msocket instead of socket. But in order to reach these results there has been a long way, because we had to develop a number of concepts, tools and libraries to provide these abstractions. The idea in Virtual Square, also due to my juvenile stress with the building sets, is that each tool must do one thing, and we tried to create tools doing it well, and all the tools and libraries have been designed to interoperate. I think I missed one point about the API, and it's a very important point: all the file descriptors created by msocket are real file descriptors. That's important because we can use ioth_bind and ioth_connect, saying this file descriptor is a specific file descriptor for the Internet of Threads; but if we want to use poll or select, they work on a set of file descriptors that may come from different implementations: one can be a real file descriptor of a device, one can be an Internet of Threads socket, and maybe in the future we can have file descriptors coming from other virtualizations, other implementations. So the idea is that these file descriptors can be used in poll and select. Even if this result is not costless, we had to design libraries and tools to provide this kind of feature. Virtual Distributed Ethernet is the support we are using for the data link layer, and then there are many others: fduserdata, vpoll, nlinline, nlq, vuos. In the following slides we will pass very quickly through these tools that have been created to build, as the final point, libioth; many others are under development.
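As a rough reconstruction of the example just described, the sketch below shows the Internet-of-Threads version of the UDP sender. The calls follow the names given in the talk (ioth_newstack, ioth_if_nametoindex, ioth_ipaddr_add, ioth_linksetupdown, ioth_msocket, ioth_sendto, ioth_close, ioth_delstack); the exact prototypes should be checked against the libioth header, the interface name and the addresses are only examples, and the VNL vde:///tmp/hub is the one mentioned in the talk.

/*
 * Sketch of the UDP example from the talk: create a user-space stack
 * attached to a VDE network, configure it through the nlinline-style
 * helpers (which are translated into netlink messages, RFC 3549), then
 * use Berkeley-socket calls with the ioth_ prefix and msocket.
 */
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <ioth.h>

int main(void)
{
        /* Load the "vdestack" implementation as a plugin and attach it to
         * the data-link layer identified by the virtual network locator. */
        struct ioth *stack = ioth_newstack("vdestack", "vde:///tmp/hub");
        if (!stack)
                return 1;

        /* Configuration: set an address on the only interface and bring
         * the link up; under the hood these are netlink requests. */
        uint8_t addr[] = {10, 0, 0, 2};            /* example: 10.0.0.2/24 */
        int ifindex = ioth_if_nametoindex(stack, "vde0");
        ioth_ipaddr_add(stack, AF_INET, addr, 24, ifindex);
        ioth_linksetupdown(stack, ifindex, 1);

        /* From here on it is plain Berkeley sockets, except that msocket
         * names the stack the socket belongs to. */
        int fd = ioth_msocket(stack, AF_INET, SOCK_DGRAM, 0);

        struct sockaddr_in dst;
        memset(&dst, 0, sizeof(dst));
        dst.sin_family = AF_INET;
        dst.sin_port = htons(5000);
        inet_pton(AF_INET, "10.0.0.1", &dst.sin_addr);

        const char msg[] = "hello from an internet-of-threads process\n";
        ioth_sendto(fd, msg, sizeof(msg) - 1, 0,
                    (struct sockaddr *)&dst, sizeof(dst));

        ioth_close(fd);
        ioth_delstack(stack);
        return 0;
}

As the talk points out, the only differences from the plain Berkeley-sockets program are the stack creation and configuration at the top and the ioth_/msocket prefix on the communication calls.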
Next, Virtual Distributed Ethernet. Virtual Distributed Ethernet dates back to, I think, 2006; it's a general support for virtual distributed networking at the data link layer, originally designed for virtual machines. VDE has evolved in two different directions: one is to provide support for namespaces, for user-space-implemented stacks, for partial virtual machines; the other development of Virtual Distributed Ethernet is about modularity, so instead of having one way to implement a virtual Ethernet, now the actual implementation can be loaded as a plugin. Again, we can see the building-block metaphor. The API of VDE is straightforward; this is the API used at the lowest layer of the TCP/IP stack block provided by libioth. We have an opaque type, VDECONN, as the identifier, and then we have open, receive, send, close, and a way to get a file descriptor to check for new messages arriving. Plugins: there are several plugins, and these are just some examples. We can connect virtual machines and user-space-implemented stacks to switches using one kind of plugin, we can create a point-to-point link using another, we can use slirp to create an emulation providing client-side access to the Internet using a simple process, we can connect to a tap of the hosting machine itself, and we can use vxvde to create a local cloud using IP multicast: this is an implementation of a distributed switch, so you can start virtual machines on several hosts of the local network just providing the same address, and they automagically create a virtual Ethernet among them. Now, in this talk I cannot go through all the details, but there is a tutorial section in the Virtual Square wiki, so you can try the examples and see them at work. This is an example of a topology you can create using the new vdeplug4: you can have virtual machines connected to a switch, and you can have the switch connected to a tap interface of the hosting machine. As you can see, all the plugins have an identifier that is similar to a URL: the VNL is a way to identify a virtual network, and the part that would be the protocol part in a URL is the implementation, so here the switch is loaded as a plugin, as a shared library, for this application. We have vdens, which can create namespaces connected to the virtual distributed Ethernet. In this picture you can see what the Internet of Threads is and how it can be implemented: on the left-hand side we have the legacy implementation of the TCP/IP stack in the kernel, so here there is the line of the boundary between user and kernel space; instead, using the Internet of Threads, we can load the stack into the application and have an interface to VDE, so we can create the application connected to the virtual network — here the plug symbol stands for this standard pattern of connecting to a VDE implementation. And with libioth, the metaphor of building blocks is at its maximum: we have the application which, instead of having the stack library linked inside, has just libioth linked inside, and then we can load the actual implementation of the stack — the kernel stack if you want, vdestack, picoxnet, or in the future other implementations we can adapt — and these stack implementations are connected to VDE. So there are several building blocks, one inside the other. Now let us pass very quickly through the other tools we have created. fduserdata: because we wanted to have the Berkeley socket functions with the same signature, we had to piggyback information onto the file descriptor; fduserdata is able
to provide a way to write libraries that can retrieve information using the file descriptor as the key. vpoll is the library that allows one to create file descriptors whose events can be emulated, generated as needed. Actually, vpoll cannot be implemented completely as a library, because there is no system call in Linux providing this kind of feature, so there is an emulation that can be implemented on any system, or there is a kernel module providing a device; in this way it is possible to have the complete emulation of poll events. So it is a general-purpose feature: poll, ppoll, select, pselect, epoll, and even if in the future new kinds of system calls to wait for events are created, it will be compatible. Then nlinline is a very light library, made of inline functions, that provides programmers with tools to configure the network: the most common configuration requests, like add an address, delete an address, add a route, delete a route, set the MAC address, and so on, are provided as functions. This was missing because, given the idea that the kernel provides the single stack you can have, there was no requirement for programs to configure the network interfaces themselves. nlq is a library providing forging and decoding of netlink messages. There are several such libraries, but the new point of this one is that it has been designed to also support the server side of netlink communication. Again, it is not common to have programs that need to understand and process requests for stack configuration, because that is a service provided by the kernel implementation of the stack; instead, if we want to create stacks able to receive requests via netlink and perform the requested operations, we need a library able to decode the request, call the specific function of the stack to perform the requested operation, and forge the reply message. VUOS is a part of this; in some sense we can say that it gives an abstraction of a namespace that is entirely implemented in user space. So this is the idea: if this is the Linux kernel API, usually we have processes using the Linux kernel for its services, so Linux processes and libraries use system calls to get services from the Linux kernel. VUOS provides the same interface to user processes, but the requests pass through a hypervisor that can decide whether to pass the request to the kernel or to process the request using modules that can provide different answers to the system call; so we can provide the processes with a different view of the execution environment. It is a partial virtual machine: not all the system calls are processed by the virtual machine; some are processed by the kernel, others are processed by the modules. The actual implementation works in this way: the partially virtualized processes actually do the system call, but the kernel can decide to divert the call to the hypervisor. There are different modules: a module to virtualize the file system — the counterpart of FUSE, file system in user space, providing the same services but entirely in user space — virtual devices, virtual networking, virtualization of names and time, but the most important one now is virtual networking. Just to give a taste of how VUOS works, the idea is to add the module for the virtual file system and mount the entire file system on /mnt, so the entire contents of the file system can be seen both as the real content and, replicated because of this, in /mnt; so it's something like a bind mount, but it's entirely
implemented in user space. Actually, libioth can be used as a module in vunet, so we can add a network, say using mount, stating that we want this stack to be mounted as /dev/net/mystack, and then, using the command vustack, we can start a process using this stack implementation and see the interfaces provided, in this case, by libvdestack. vdestack is a trick, actually: it can be used as an example of a user-space-implemented stack, but it's not really a user-implemented stack, because it uses a namespace in the kernel as if it were a box: all the requests are forwarded to the namespace in the kernel, and using a tap interface the traffic to the data link layer is captured and diverted to VDE. Instead, picoxnet is a stack based on picoTCP-NG designed for the Internet of Threads: support for multiple stacks is implemented through the newstack and msocket API, it supports configuration via netlink, and it uses vpoll to provide real file descriptors. Now let's finish this talk with the future. We are working on iothconf, a support for auto-configuration and configuration, so that it is not up to the libioth user to configure the network by hand. Returning to the example, instead of using all the functions from nlinline to configure the IP address, the routes and so on, we can have the configuration as a string, and it's quite simple; but here, instead of this string, I could have written something like dhcp, meaning that we want to get the IP address using DHCP auto-configuration, or IPv6 rd, which means router discovery, and so on. These are examples: this is the way to configure using a static configuration for IPv4 or IPv6, via DHCP, via DHCP using a fully qualified domain name, or using a hash-based assignment of the address. This library also provides a way to configure the name resolution functions, and another project we're working on is name resolution support for libioth, so a way to provide name resolution. All this effort is on its way to enter Debian: vdeplug, fduserdata and nlinline are already in Debian, so there are packages to install these supports, and we are working to package and add all the other parts of this big project of Virtual Square tools. So, I thank you for your attention, and now there should be a session for questions and answers. Thank you very much. Okay, I hope you can hear me. Let's give it a few seconds for the stream to switch to us. Yeah, I think we are live and being broadcast. So, thanks very much for the talk, it was very interesting. There were some questions and I'll repeat them here for the people who cannot join the Matrix chat. So, one question was: can IoTh work with virtio-net? Could you please elaborate on that? Yes, not yet, also because virtio is for the integration with actual interfaces and real networking. So it's a good idea, but VDE would benefit from this feature only at the boundary between virtual and real networking, because in all other cases — directly communicating virtual machines, namespaces or Internet of Threads processes — there would be no benefit, because they don't need to access the real networking directly. They use many other ways: Unix sockets for example, or they embed the packets into TCP/IP unicast and multicast messages, like for example vxvde. So it's just for the tap implementation of VDE that there would be a benefit, but it's a good idea. We are planning to work on it too. Okay, thank you very much.
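Returning briefly to the iothconf configuration strings mentioned just before the questions, a minimal sketch of the idea could look as follows. The function name ioth_config and the option syntax are assumptions used for illustration, based on the examples given in the talk (static addresses, DHCP, router discovery); they are not the literal iothconf API.

/*
 * Sketch of the iothconf idea: one configuration string instead of a
 * series of nlinline calls.  ioth_config and the option syntax below are
 * assumptions, for illustration only.
 */
#include <ioth.h>

int main(void)
{
        struct ioth *stack = ioth_newstack("vdestack", "vde:///tmp/hub");

        /* static configuration ... */
        ioth_config(stack, "eth,ip=10.0.0.2/24,gw=10.0.0.1");

        /* ... or auto-configuration, e.g. "eth,dhcp" for DHCP or
         * "eth,rd" for IPv6 router discovery. */

        ioth_delstack(stack);
        return 0;
}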
Just as a side note, I'm wondering why the stream is showing primarily me and not you. I think something is still not working quite well, but, I mean, minor problems probably. So, I would have another question. Since we are in the microkernel devroom, do you have any experience of using IoTh on a microkernel-based system? Not yet. The idea is theoretical for now, but these black boxes that can embed an implementation are stacks, and moreover the possibility to inherit implementations from other sources can be used in the general idea of microkernels. I've not tried it at the moment. Okay, thank you. And maybe another question. If I understood correctly from your talk, your implementation is sort of monolithic, in the sense that the whole networking stack is one component — or did I miss something? The two layers are usually implemented as one block, but nothing prevents a more modular implementation in the future. The whole idea of Virtual Square is a bunch of blocks that can be plugged together. So for now the block is just one, but we can define internal APIs and create plugins for each layer. In one of the last slides there was a complete implementation of IoTh using VDE and so on, so you could see that it's really like a building set of tools that are loaded as plugins or shared libraries, one with the other. So you can have the actual implementation you want, you need, just by combining these. Nothing stops us from having them communicate by messages instead of function calls, so as to switch to an even more microkernel-like way of implementing things. Wonderful, that's great. I'm looking at the chat to see whether there are some additional questions. I don't see any yet, but let's wait for a few minutes, or maybe my connection is not working, I don't know. Do you see any other questions? I have the problem that the only way to keep up with both is that I need two different windows, and I have to switch from one to the other, one on which I'm transmitting and the other on which I'm receiving.
I think today for the whole day and maybe part of the time tomorrow, and my email address and contacts are available, so feel free to ask me questions anytime. I see that somebody is typing. Let's see if some... Yeah, and I'm on two different windows; I can see that there is a delay between one and the other. Yes, there is definitely some latency, both in the stream and in the chat. The infrastructure is really struggling, but at least we have the first live Q&A, which is good. I was really anxious that we wouldn't have anything live this year. We have to take into account that the FOSDEM organization has done great work to have all this infrastructure working. It's not a toy. Yes, definitely. I mean, it's a huge effort, done mostly by volunteers, so we are very grateful to everyone who contributed to this. Okay, I agree. I cannot agree more with Baudon: these components look great for network experimentation. Yes, it's wonderful. I see that I can combine all the components I need just by adding these elements one to the other. Somebody is typing again. Just to be clear, not IoTh yet, but VDE has been used by a project named Marionnet, by two Italian professors in Paris, Loddo and — I can't remember the other name. They have created a graphical user interface where you can place virtual machines and switches, people can draw the infrastructure, and thanks to VDE the drawing becomes an actual virtual infrastructure just by clicking start. So it's an example of experimenting. I've taught internetworking quite a lot, but working only with Linux; this works on Linux, but it's another dimension of networking that you cannot get from the kernel networking support you actually have. Okay, thank you very much. I think we will wrap up this Q&A session. Thanks for all the questions. Thanks for your answers as well. People can surely reach you by multiple means. So thanks again, and in about 10 minutes, at five past one, we will have another talk, from Norman Feske from Genode Labs. Thank you. Thank you so far, and lots of good work. Bye. Bye.
|
Microkernels, partial virtual machines and internet of threads are not unrelated. The challenge of this talk is to show that the new libioth providing an effective and flexible support for the internet of threads can open interesting perspectives for a wider range of applications. A network protocol stack can be implemented as a library. There are several examples: lwip/lwipv6, picoxnet, lkl. These libraries can be used to implement processes connected as network nodes (the so called "Internet of Thread" processes) or to implement network protocol stack servers for microkernels. The main goal of libioth is to provide a convenient API to interoperate with different network stack implementations. Libioth is also an infrastructure where the actual implementations can be loaded as plug-ins. Libioth's API is minimal: it includes the complete set of Berkeley Sockets functions, some functions to add or delete a stack and 'msocket', an extended version of 'socket' providing one more leading argument to select which stack should manage the communication. Libioth does not provide in its API any specific function to set up the network configuration, e.g. to configure the IP addresses/routes etc. These features are provided through netlink (see RFC3549). "nlinline" is a simple and effective set of inline functions to manage the network configuration. The data link layer infrastructure used by libioth is VDE, Virtual Distributed Ethernet. Although libioth has been primarily designed for the Internet of Threads, the way it is used in the vunetioth module of vuos has many similarities with the network protocol stack servers for the microkernels. Several concepts and many building blocks of libioth can be useful in microkernel development. The design of the minimal API itself can be used to reuse existing stack implementations in network protocol stack servers. Libnlq (a sibling project of libioth) is a library able to process netlink requests, and can be used to add the netlink support to those stack implementations providing configuration through a custom specific API.
|
10.5446/52732 (DOI)
|
Welcome to my contribution to this year's microkernel developer room at FOSDEM. Thanks a lot to Martin Děcký for organizing the room this year. My talk is about a topic that has been in the works for several years now, and it is concerned with pluggable device drivers for Genode. The talk is structured in four sections. I will first give you a bit of background on where we come from and what we want to achieve. The biggest part of this talk will be about the work of restacking the GUI stack of Genode. Then we will come to a third point, where the lessons learned from the GUI stack are applied to the network domain. Finally, the talk will wrap up with the bottom line and give an outlook on the future using the new features. To give you a bit of background on where we come from: if you observed Genode at last year's FOSDEM, you may know that Genode is a component framework that allows one to use several different kernels — microkernels most prominently, but also Linux — to create interesting operating-system scenarios, and to combine these kernels with components like device drivers, depicted here, protocol stacks over here, or resource multiplexers over here. So Genode is a big toolkit of components where each component usually lives in a dedicated sandbox. One system scenario that we are most proud of is the so-called Sculpt operating system. This is a general-purpose OS that we developers use on our laptops day to day, and it also illustrates the fine granularity of the sandboxing that Genode provides. Right when booting up the system, before even running any application, there are already about 60 sandboxes started. Each individual driver, for example, lives in a dedicated address space and is protected from the other components of the system. So the encapsulation that the microkernel community advertises is actually living on our laptops. But we don't want to stop here; there are further ambitions for where we want Genode to go. The first ambition is targeting long-running systems. Here we have systems that we basically don't want to reboot, but we don't want to freeze them either: we don't want to burn one software stack completely into a device; we want instead to enable the user or owner of the device to update parts of the system at runtime. And, going in the same direction, such devices would need a way to heal themselves if errors occur, similar to what Minix 3 has been advertising for several years. The second direction is the direction of assurance and fail-safety. If you look, for example, at the medical space, the trusted computing base of the device is always a big concern for assessing the assurance of the device, so the low complexity of Genode's trusted computing base is a big asset. On the other hand, drivers are known to be flaky, and unfortunately drivers often play a really fundamental role in such systems, so they stand in the way, basically. And the third direction is adaptive systems that can change at runtime really flexibly: for example, systems where you can connect and disconnect devices like monitors, screens and input devices on the fly, or, if you think about mobile phones, where we want to power-gate peripherals when they are not needed to save energy. To understand the remainder of this talk, I first have to introduce a bit of terminology in the Genode context.
So Genode is a component framework, and in component frameworks we usually talk about clients and servers; that's the same for Genode. We have servers and we have clients, and servers provide services to clients. In Genode, however, the relationships between clients and servers are really clear-cut, so I will give the following five characteristics. First, clients and servers usually live in dedicated address spaces and are sandboxed independently. Clients can lend resources, for example their own memory, to servers, so that servers do not have to allocate resources on behalf of their clients, which eliminates a big denial-of-service attack surface from servers that are shared by different clients of different trust levels. Usually we assume that there is mutual distrust between clients and servers when it comes to the confidentiality and integrity of information: a server cannot look into the address space of the client, and the client cannot look into the address space of the server. But when it comes to the liveliness of a client, we have to be aware that this liveliness depends on the server. For example, if the client performs an RPC call to a server but the server never responds to this RPC call, the client gets stuck in this call. It's similar to a situation where a program calls a library function: if the library never returns from the call, the program gets stuck. On the other hand, servers never depend on clients, so the liveliness of servers does not depend on the behavior of their clients. If this raises questions, I recommend this book; you can get it for free at the genode.org website. It explains the whole architecture and the ideas in more detail. For now, I want to look at the topic of layered architectures in this context of strict client-server dependencies. Here you see a part of the Genode architecture for the graphics stack, in particular the input handling of the graphics stack. The text is a bit small, but I will briefly go through it. At the bottom you have the kernel and the low-level services, which are called core and init in the context of Genode. These are low-complexity system components; they provide really low-level services like access to interrupts or to memory-mapped I/O registers. These services in turn are used by a platform driver that turns those things into a higher-level interface and provides a so-called platform service to clients. The platform service is something like a virtual bus, you can think of it that way. The drivers are clients of this platform session interface and in turn provide higher-level interfaces; for example, the Intel framebuffer driver provides a framebuffer service here. And this interface in turn is picked up by the GUI server that sits over here, painted in red. The GUI server in turn provides another GUI service to the applications on the right, and, to the left, for example, another GUI server that sits on top and adds some notion of windows to the GUI. So this is a really beautiful picture, I think, because it's so nicely layered and nicely stacked, but in the real world it is not perfect. If you look into the drivers here, you have to admit that those drivers are hugely complex. For example, the Intel driver is ported from the Linux kernel, and I think it comprises around 100,000 lines of code. Also, some of these drivers, for example USB HID, are exposed to potentially flaky devices, so you can never be sure that the driver will survive when you plug in a strange USB device.
The other parts of this picture, like for example the nitpicker GUI server, are really nice and clean and only consist of around 4,000 lines of code, for example in this instance, but the liveliness of these still depends on this hugely complex code that lives down there, and this really taints the whole picture, I think. Okay, we observed this of course early on, but we had not found a solution for a long time. Eventually we came up with the idea: well, we have to change this dependency between the low-level GUI server and the drivers. So how would the system look if we just turned this client-server relationship around? And this picture emerged. Here you see that the nitpicker GUI server does not sit on top of the drivers, but lives beside the whole driver stack. On the left side you see the platform driver, and the platform driver is used by the other drivers, but the right side of the picture is completely untainted by this: the nitpicker GUI server is just a service used by the drivers you see here. So the 4,000 lines of code of nitpicker just sit on top of this minimal-complexity TCB over here, and the complexity of the drivers is no longer critical for the operation of the nitpicker GUI server. From the application's perspective, nothing really changed; you still have these dependencies via the GUI session here, but the important part is the rearrangement of this part. When looking at this picture, a few questions arise. For example, there are these new bubbles here, these new services, the so-called event service and capture service, and I will go into detail about what these services look like. First, looking at the framebuffer: the framebuffer service now has a counterpart called the capture service, and it basically does the opposite of the framebuffer service. You see here on the left the original framebuffer server, which would be the driver: it would receive pixels from the client — the pixels would go in this direction — the client would sometimes call the driver to refresh certain parts of the screen, and the driver in turn would provide some kind of sync signals to the client. That was basically the original state. The capture interface is reversed: here the driver is the client, and it receives pixels from the server; additionally, the driver asks for changes at a regular interval, so this comes down to a kind of polling, but this call basically returns the changes, so that the driver knows which pixels have to be updated on the device. And the picture looks very similar when you look at the input handling. The traditional scheme was that the input driver would provide a service, an input service, to the input client, for example the GUI server, so the events would flow from the driver to the client, and the client would request those events from time to time, or when notified by the driver. In the new interface, the so-called event interface, it's much simpler: the event client, which is the driver, pushes events to the server, which is the GUI server, so the events flow from the client to the server.
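The following toy program is only a conceptual model of this inversion, written in plain C rather than Genode's actual C++ session interfaces: the GUI server becomes a passive service, an input driver pushes events into it (event session), and a display driver pulls the pixels to show out of it (capture session), so the GUI server never has to call into a driver.

/*
 * Toy model of the reversed GUI-stack roles; this is not Genode code.
 * The GUI server only reacts to its clients: event clients push input,
 * capture clients pull pixels.
 */
#include <stdio.h>

struct gui_server {
        int      pointer_x, pointer_y;  /* updated by event clients      */
        unsigned frame[4];              /* tiny stand-in for the picture */
};

/* Event session: the driver is the client, events flow client -> server. */
static void gui_submit_event(struct gui_server *gui, int dx, int dy)
{
        gui->pointer_x += dx;
        gui->pointer_y += dy;
}

/* Capture session: the driver periodically asks for the current pixels
 * (the real interface additionally reports the changed areas). */
static void gui_capture(const struct gui_server *gui, unsigned *dst, unsigned n)
{
        for (unsigned i = 0; i < n; i++)
                dst[i] = gui->frame[i];
}

int main(void)
{
        struct gui_server gui = { 0, 0, { 1, 2, 3, 4 } };

        gui_submit_event(&gui, 5, -2);   /* input driver pushes a motion event      */

        unsigned shown[4];
        gui_capture(&gui, shown, 4);     /* display driver pulls on its own schedule */

        printf("pointer at (%d,%d), first captured pixel %u\n",
               gui.pointer_x, gui.pointer_y, shown[0]);

        /* Because the server never calls into a driver, a crashing or
         * restarting driver cannot block it; only that driver's session
         * disappears and later reappears. */
        return 0;
}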
Okay, that sounds really cool, but the problem in practice is this big and frightening question of how to get there. There are about 50 different scenarios in our code base — one of them, for example, is Sculpt OS — so all these scenarios needed to be turned upside down, basically. We have plenty of device drivers for different devices; all of these must of course be touched. And there are of course the low-level GUI server components and the companion components, like the input filter, that need adaptation while preserving their feature set, for example preserving the ability to nest different nitpicker instances. So this seemed like a monumental effort, and we had to break it down into multiple steps. In this part of the talk I will go into the methodology that we applied, here for the use case of the input; for the graphics it looks very similar. The starting point was basically: we have this nitpicker GUI server here, the nitpicker GUI server connects to the input filter over there, and the input filter in turn connects to the drivers — this was the original layered architecture. The first thing we do is, of course, change the nitpicker GUI server to also accept event clients and capture clients — this is basically the old version of nitpicker and this is the new version — and we also changed nitpicker to make input and framebuffer optional, so that it basically becomes a freestanding component. Now, when using this component, you see that we have cut the dependency on the input filter here, so nitpicker now operates independently from the input filter, which is really nice. From the perspective of nitpicker, the input filter appears through this kind of intermediate component here; this is an adapter component acting as an event client, but from the perspective of the input filter this input client here looks exactly like nitpicker. So this new adapter component does the trick for us: it converts between those two interfaces, the event interface and the input interface.
So the next interesting point are these drivers. Look at the HID driver for example: right now it is providing this input interface here, and we have to change this. What we do is take this input driver and change it to replace the input service by an event-session client role, and use another adapter component — it's called the input-event bridge — to represent the result as an input service to the outside. So this is also such a transitional component, it is basically just meant for this intermediate phase of going from one architecture to the other: it is a drop-in replacement for the left side. This is nice because we could now address each driver individually, and each time we finished a driver we had a consistent state, so we could process the drivers piece by piece. With all drivers reversed, the picture becomes like this. It gets more and more complicated, and you see in this picture that you now have a conversion from this event service over here to the input service over here, and then a conversion back to the event service. The nasty guy in the middle is of course to blame for this, so we have to reverse the role of this input filter, and you may have guessed it: the input filter has to be replaced by a new component called the event filter that serves the same purpose but operates in the opposite direction. In the input-filter case, the input streams came from the servers and were propagated to the input client; in the new case, the streams of multiple event clients are merged together into one stream that goes to the nitpicker GUI server over here. And then suddenly you see that the whole complexity that we have seen in the previous picture collapses to this simple and beautiful picture — now we have reversed this part of the GUI stack. So this is one piece of the puzzle, and there are three other pieces that came right at the right time. One part is the so-called platform driver, which is a component that arbitrates the access to devices; it also manages the I/O address spaces for limiting DMA per device, and it manages the power gating of PCI devices. The second puzzle piece is the ability of Genode to monitor the liveliness of components using a heartbeat monitoring mechanism. And the third piece of the puzzle is a fault-injection mechanism that we can build with Genode's existing tracing mechanism: in Genode it is possible for a privileged component to inject tracing-policy code into remote components, and there are several default trace points — for example, each time an IPC is performed, a trace point is hit — and all we have to do is create a policy that divides by zero, and we can install this policy into any remote component to make it crash. So let's give this a try in practice. What you see here is basically Sculpt running on this machine, and over there you see a small application — this is the special privileged component that has access to the trace service. This component allows us to look at all the different threads of the system and the different components. You see here the list of components, and we can pick a component that is of interest for us. For example, here we see this Intel framebuffer driver component, and you see the list of threads, and down here you see that there is this main
thread, and we can just install this policy here to make it crash. You see now the screen has frozen, I cannot move the mouse any longer, so the framebuffer driver has crashed. But the heartbeat mechanism kicks in and restarts a new instance of the framebuffer driver, and we can continue working — the system has basically restored itself. As another, very similar topic, we raised the question: can this beautiful mechanism that we applied to the graphics and GUI stack be applied to other driver categories as well? The most promising category was of course the networking drivers. Here you see, similar to the GUI-stack picture, the layered architecture of the original scenario: you have a platform driver here, then there is a NIC driver that talks to the network adapter and provides a NIC session interface, and the network application talks through the TCP/IP stack to the network service over here. In real-world scenarios there is always an indirection, namely this NIC router component over here. The NIC router multiplexes the physical network device into multiple virtual network devices using network address translation and implementing basic protocols over here, and this way multiple applications can use the same network connection at the same time. You see that each network application has a dedicated TCP/IP stack, so the NIC router is a really small component — it's about 8000 lines of code, I think — and the complexity of the network protocols lives over here. But the bad part of the picture is, again, that in the real world this NIC driver is risky. For example, if you look at the wireless stack of Genode, it's ported from Linux, it's again in the order of 100,000 lines of code ported from the Linux kernel. We cannot trust that this code will never fail — it can of course fail, and it does fail in practice — and when it fails, the NIC router will get stuck because of this dependency here, and transitively all the clients will also get stuck, and this is not a good situation. The solution looks similar to what we did for the GUI stack. The idea is to introduce a new session interface for modeling the relationship between a network driver and the NIC router, and we came up with this new uplink session interface over here, which basically corresponds to a NIC session interface but is also a bit reversed. In the NIC session interface, the client can request the MAC address and the link state from the server, send packets to the server and receive packets from the server, and the server also informs clients about link-state changes using asynchronous notifications. For an uplink client, the situation is that the client is the driver — this one — and the driver connects to the uplink server, which is the NIC router. And there is some simplification: for example, there is no need to communicate a MAC address because the MAC address can be imprinted into the session statically, and the link state also does not need to be modeled because the existence of a session means that the link is up — it's as simple as that. So with this new interface we of course have to jump through all the hoops to come from point A to point B, but the final picture looks so beautiful.
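Two ideas from this part lend themselves to small sketches. First, the uplink-versus-NIC-session reversal: below is a framework-free C++ outline of the two interfaces as I understand them from the talk. The names, method signatures and the simple byte-vector packets are my own simplifications, not Genode's actual packet-stream API; the point is only to show what falls away on the uplink side (no MAC query, no link-state signalling) because the driver, now a client, establishes the session only while the link is up.

```cpp
// Illustrative only -- not Genode's real NIC/Uplink session API.

#include <cstdint>
#include <array>
#include <vector>

using Packet      = std::vector<std::uint8_t>;
using Mac_address = std::array<std::uint8_t, 6>;

// Traditional NIC session: the *driver* is the server, the application
// (or NIC router) is the client and has to query device properties.
struct Nic_session
{
    virtual ~Nic_session() { }
    virtual Mac_address mac_address() = 0;          // queried from the device
    virtual bool        link_state()  = 0;          // plus async link-state signals
    virtual void        transmit(Packet const &) = 0;
    virtual bool        receive(Packet &)        = 0;
};

// Reversed uplink session: the *driver* is the client of the NIC router.
// The MAC address is passed once when the session is created, and the mere
// existence of the session means "link is up", so both queries disappear.
struct Uplink_session
{
    virtual ~Uplink_session() { }
    virtual void transmit(Packet const &) = 0;      // driver -> router (ingress)
    virtual bool receive(Packet &)        = 0;      // router -> driver (egress)
};

struct Uplink_connection_args { Mac_address mac; }; // supplied at session creation
```

Second, the fault-injection trick demonstrated just before: conceptually, a privileged component installs a snippet of trace-policy code that runs at an existing trace point (for example on each IPC) inside the target component. A policy that misbehaves — the talk uses a division by zero — brings the target down at its next trace-point hit, which is exactly what the heartbeat/restart machinery then recovers from. A stripped-down model of that idea:

```cpp
#include <cstdlib>
#include <functional>

// Model of a trace point with an installable policy. In the talk, the policy
// is real code injected by a privileged component via Genode's trace service;
// here it is just a std::function for illustration.
struct Trace_point
{
    std::function<void()> policy = [] { };   // default policy: do nothing

    void hit() { policy(); }                 // called e.g. on every IPC
};

// Installing a crashing policy (the talk's version divides by zero; calling
// abort() models the same effect without relying on undefined behaviour).
void inject_fault(Trace_point &tp) { tp.policy = [] { std::abort(); }; }
```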
You see here that the platform driver and the drivers now live independently from the NIC router. There are more clients of the NIC router, and the NIC router's clients are still clients of the NIC service over here, so they are not affected by this change. But now, if this driver fails, for example, the NIC router will just see that a client has disconnected, and we can also, for example, add multiple NIC drivers here — they can all connect to the same NIC router and the NIC router's policies can be applied to them. To try this out, I can give you a demonstration of this feature as well. For this, let's start a network application — I will just start a web browser over here, it will come up in a few seconds... here you have it. Okay, this is from the previous instance, I have just killed it. I will just open a website, for example genode.org, and you can see that the browser can be used to browse this website, and we can go to the documentation section. So now let's do the same thing that we did for the graphics, but this time for the network driver. The network driver lives inside this runtime over here — you see it here — and I have to scroll down this list quite a bit to get to the NIC driver... here we have it, that's the NIC driver. So I click on it, and again I will just install a division-by-zero tracing policy into the main thread of this component, and now the network driver should not be able to run anymore. Let's try what happens when we, for example, go to the download section: you see that the browser now has real trouble fetching any network packets, and normally after a few seconds the timeout triggers and the web browser will just give up. Let's wait a few seconds to see what happens... yeah, here we see it: connection refused. What you as a user can do now is go to this component graph and basically use this convenient restart button to restart the driver, and you can also see in the log what happens now. Here you see that the new driver came up, the NIC router has seen a new NIC client — let's try out the refresh button — yeah, now the network is up again and we can continue browsing. So this little demonstration shows that the mechanism actually works in practice. Now, for the bottom line of this line of work, we of course have to mention a few limitations, so this is not the solution for everything. For example, we identified that this idea cannot easily be transferred to block devices, because block devices are very stateful — that's the point of block devices — and when thinking of killing a block driver and restarting it, questions about the consistency of the data arise immediately, and also the question of what happens with the caching, which also comes down to the consistency question. So this cannot be handled in the straightforward manner we used for the network or for the GUI stack. Maybe we can address this separately — it's not planned today, but maybe we will get a good idea later. Another consideration is bus drivers. For example, if you think of a USB host-controller driver, such a driver really must be hardened — there is no way around it, it must be long-living, because it is a dependency for most other drivers and has such a far reach. If you think of a USB host-controller driver and you are attaching a USB stick, which is a storage device, and you have a whole software stack that depends on this USB stick, then of course if the USB host-controller driver crashes, this whole storage path will die
and there is no real way around this. Consequently, we think that bus drivers simply must be hardened. Now, coming towards the end of my talk, I want to mention a few prospects that have become possible thanks to this work. First, a lot of things that we thought of as really complicated for a long time have now become quite simple. For example, dual-head scenarios, even using multiple graphics cards or multiple graphics devices at the same time: with this reversal of the driver role, this now becomes really easy from the architectural point of view. Of course there are still many open questions, but architecturally it is a big advantage. Furthermore, this work also clears the way towards virtual devices, for example for capturing the screen, taking screenshots, or for implementing something like a virtual on-screen keyboard for a mobile phone, because these applications can also just use the capture session or the event interface of the nitpicker GUI server and thereby appear as a physical device — once the security policy allows this, of course. Another prospect is that this will ultimately allow us to swap out different kinds of drivers, from one to the other. For example, imagine you have one driver ported from the Linux kernel and one driver written from scratch: when one fails, you can try the other, which is quite nice. It is also possible to update or downgrade drivers using Genode's package management just like any regular application, which will contribute to the longevity of Genode systems. And another advantage comes from the fact that we can now kick device drivers out of the system at any time and save power: since the platform driver can gate the power of devices automatically if no driver connects to them, we have a really good power-management scheme, and this in turn also simplifies things like the driver lifecycle management for suspend and resume, because drivers no longer need to cooperate. Okay, this brings me to the end of the talk, thank you for your attention, and let's see if there are any open questions. Thank you. Yeah, let's give it a few seconds because there is usually a lag of the stream... I think we are slowly getting there, yes. All right, so Norman, thank you very much for the very nice talk. There are some questions in the chat; I'll read the first one — I mean, not the first one, but probably the one that has not been answered yet: does it mean that at suspend all clients disconnect from their drivers? Strictly speaking yes, but if you think of applications, the applications remain connected to the server, or to the NIC router. There is this intermediate component that stays in the system, that is not restarted, and this decouples the applications from the drivers. In the one case that's the NIC router for the networking, and for the graphics stack it's the nitpicker GUI server. These multiplexing components live for a long time, so it's transparent for the applications — well, not entirely transparent, because for example in the graphics case, if no driver connects to the nitpicker GUI server, then nitpicker will just switch the resolution of the screen to something like one by one pixel, and so clients will actually see that there is no reasonable screen resolution and can adapt to this situation. But as soon as you, for
example, connect a new driver, then a new screen resolution will be set up and the applications can adapt to this. Okay, thank you. Then there was a question which you have probably already partially answered in your talk: how does this map to stateful drivers like GPU drivers and stuff like that? Do you want to elaborate more on it? I can give a few thoughts, maybe. I think that's a very good question, and it should be considered for each class of drivers. For block devices I would conveniently skip this question because I have no clever answer — I think in principle one could go for something like a RAID kind of scheme or some sophisticated mechanism, but I always love simplicity, and this is the opposite of that. But for GPUs we actually have a plan: we think of GPU drivers as basically resource multiplexers for the GPU. During this year we will actually design a GPU architecture for Genode, and here the GPU driver plays the role of a kind of hypervisor for the GPU, and we will apply the same principles that we applied to a microkernel to this component. In our previous experience we already did this for the Intel GPU, and we came up with a reasonably small component — I think it was about 6000 lines of code — that was able to multiplex the GPU among multiple distrusting clients, and I think that's the way to go. But this GPU multiplexer will be a long-living component; it will not be hot-swappable. Okay, makes sense to me. I have an open-ended question regarding the whole architecture: do you think that some of the root causes — or the reasons why you have to somehow, you know, reshuffle the components and their connections — might be caused, and maybe "caused" is not the right word, but might there be some way this could be completely avoided by using asynchronous communication, or by not having this really strict parent-child relationship? Maybe, maybe — I'm not sure about this. I think of this strict relationship as a very strong asset of our architecture, so it really sharpens the role of the components. When you design a system, you always keep in mind which are the clients, which are the servers, which component depends on another. I think the kind of trouble that we had to go through lies more in our former, academic kind of thinking in these layered architectures. You have this academic view of beautiful generalized principles, and you see a driver as a resource — the driver provides a resource, so it's a server — so everything automatically goes to the next step, and finally you get these layered architectures, and then you look at it and you see: oh wow, that's not working so well, because in reality not everything has such a nice academic kind of simplicity to it. So I think that was misdirecting us at the beginning. The strict rules are not to blame — they are really a good point about Genode — but I think we just took a wrong turn at the beginning of the project regarding the modeling of drivers as servers. Understood, and the reason why I'm asking is because we obviously see some of the problems that you see, but not all of them, and I think that's because our architecture is more like a generic graph — there is not always this
strict relationship, so the trust works in a different way. This is maybe why I'm asking whether you see this strict, you know, parent-child relationship as something that is somehow at the root of the problem. Yeah, there is certainly some truth to it, but you have to consider the upsides of this tree as well, and the biggest upside is that it implements a kind of strict management scheme for resources and for implementing and enforcing policies, and that's the biggest advantage — we wouldn't like to miss this. Yes, I agree completely. I think the resource management that you have is a very, you know, strong point. Yeah, and it's tied to these roles, server and client roles. Okay, I don't see any other questions in the chat, so of course everyone will be able to reach Norman via the Matrix room — this talk room will be opened for everyone soon — so you can talk to Norman directly, either via text or even via video conferencing. Thank you very much for your talk, thank you very much for your answers, and I will wrap up this Q&A session. Yeah, thank you Martin and Jakub for organizing, thank you.
|
Resilience is often touted as the biggest advantage of component-based systems over monolithic architectures. The catchy part of the story often told is the containment of faults via sandboxing. However, the story has another inconvenient side that often remains untold. Components are interdependent. Whenever a central low-level component fails, dependent software stacks suffer under the outage. The talk presents Genode's recent breakthroughs to address this second part of the story, in particular making the system resilient against flaky device drivers. Component-based operating systems promise the containment of software faults and vulnerabilities by separating functionality into sandboxed components. In practice however, a contained fault is still a fault. Whenever a fault happens in a central server component, clients have to suffer under the outage of the server. Device drivers are especially problematic because they tend to be fragile while being a hard dependency for critical software stacks running on top. Even though a bug in the driver cannot subvert the information security of the dependent components, it cuts the lifelines of those components. This fundamental problem calls for an architectural solution. We found the key in the reversal of the dependency relationships for several classes of device drivers. During this line of work, we re-stacked Genode's low-level GUI stack and turned network device drivers into disposable components. Thanks to these changes, drivers for framebuffer, input, network, and wireless devices can now be started, killed, updated, and restarted at any time without disrupting applications. The talk provides a holistic view of Genode's recent architectural changes, gives insights into the thought process, outlines the methodology applied for turning big parts of the system upside down, presents limitations, and gives an outlook to the future of Genode and Sculpt OS.
|
10.5446/52733 (DOI)
|
Hi everyone, my name is June Andronick and I'm part of the Trustworthy Systems group in Australia, and I'm very pleased to be here at FOSDEM in the microkernel devroom to talk to you about the seL4 Foundation. You should have heard about seL4 from Gernot Heiser in the talk just before mine in this microkernel devroom, so by now you should know everything about seL4: you should know that it's the most trustworthy foundation for safety- and security-critical systems. And why is it the most trustworthy foundation, why is it the obvious choice if you're building safety- and security-critical systems? The reason is that it's not only the world's fastest operating system kernel, but also that it has the most comprehensive mathematical proofs, so it has very high assurance about its correctness and about the fact that it enforces security, in particular the isolation of applications running on top of it — their integrity and their confidentiality. Importantly for FOSDEM, seL4 is open source, and we've seen a growing community and adoption around seL4. In particular, seL4 is already in use across many domains: automotive, aviation, space, defence, critical infrastructure, cyber-physical systems, security, Industry 4.0, certified security and so on — pretty much everywhere where you can't afford not to demand very strong security and safety for the software that you're building. So this is my first message: if you are in one of these areas, you should definitely look into seL4 if you're not already doing it. The second message is that, with all this growing interest and growing adoption, seL4 needed a home to host this community and ecosystem towards a bright and long-term future, and this is what the seL4 Foundation is all about. It is about building a strong ecosystem of software, developers and adopters of safety- and security-critical systems based on seL4. You can learn more from the sel4.systems website, where there is a part about the foundation, but this presentation is all about what the seL4 Foundation is and in particular why you should join — there is an easy three-step process that I'll mention in a second. In particular, I want to spend some time in this presentation explaining the benefits of joining the seL4 Foundation, especially if you're in the area of safety- and security-critical software systems, and secondly, very quickly, how precisely you join — what are the exact steps. In addition to that, I'd like to give an update from the board and from the technical steering committee since the creation of the foundation itself. So firstly, let's look at the benefits of joining the foundation. I mentioned that it's about building a strong ecosystem. In particular there are four main purposes, assuming that you're already in these kinds of areas where seL4 is really the key choice for building a high-assurance software system. The first one is to increase participation and adoption, so that you really have an entire community supporting the technology and the associated tools. The second one is to ensure the long-term, independent support of seL4 and the ecosystem. That means, if you are a company or organization that is betting on seL4 for your next product, you want to know that it will be supported longer term; or if you are a service provider and you want to provide services around supporting seL4, you want to know that there's a long-term plan.
And therefore you want to know that seL4 is a technology that does not depend on a single individual or a single organization but is actually supported by an entire community and ecosystem with a long-term future. This is the second big goal of the foundation. The third one is to make sure we can continue to accelerate the development of seL4 and in particular its associated proofs — to have enough funding and support to be able to do that kind of development, to make sure that it is supported on various platforms and for various features. And finally, a lot of people betting on seL4 will have similar issues and develop similar things, so the fourth goal is to consolidate and facilitate interoperability, standardization, and the sharing of the costs of developing that technology. Altogether these four goals are really there to show that seL4 is not just the obvious choice, the best solution technologically, but that it is also readily deployable — that it is well supported, that it has a diverse and stable ecosystem of service providers and products, which makes it also the choice that is easy to make. So once you are convinced about the purpose of the foundation, what specifically makes it a benefit for you to join as a member? These are all the benefits; I'll go through them one by one. The first three are generic and the other four are specific to your organization's profile, to what you're doing with seL4. So let's start with the first three. The first one is to be associated with the most advanced, most highly assured OS technology — promoting the fact that you're taking security seriously, that you're looking at the most advanced technology that really guarantees the highest assurance about correctness and security. Have your name out there: I'm part of the seL4 Foundation, I'm serious about security. The second one is that, once you've chosen seL4 as part of your journey, as part of your products, the membership gives you easy access to expertise across the foundation. This is an entire community of people that are facing similar issues and targeting the same proposition, that you can share with. And finally, you can influence the direction of the growth of seL4. If you join as a member, you can participate to some extent in the board, depending on your membership level — I'll mention that in a minute — and in its committees as well, and through that you influence what the next big steps of seL4 are, where we should allocate the funds, what the priorities are for the next one to five years, approve standards, approve branding or trademark use, and also come up with certification and accreditation schemes. So these are the three main benefits: one is really being part of this world-changing community and saying loud and clear that you're part of it; the second is to connect with the others in this community that are aiming at changing the world; and the third is to contribute yourself to driving that world-changing mindset. Finally, there are benefits that are specific to your profile. For instance, you might be an seL4 adopter, meaning that you're building solutions and products around seL4. The benefit of joining as a member is that you can support the creation of artifacts that a number of people in the community need, and you can eliminate the replication of effort and reduce the cost for everyone by contributing to that funding.
If you're a processor manufacturer and you want to be competitive in the market of critical systems, you want to make sure that your hardware is well supported by seL4, so that people making the change to seL4 can use your platform. I'll come back to which platforms are supported, but basically you can support that by joining as a member. Finally, you might be a service provider; in particular, you might want to provide services around seL4 support. By joining the foundation you can become a certified trusted provider. What that means is that you have a special connection with the core technical team of the seL4 Foundation, you have the opportunity to advertise as such on the foundation website, and with that trusted relationship — working closely with the core developers of seL4 — you build the trust that allows the foundation to redirect inquiries coming to it, or to facilitate putting together common bids for funding. And finally, if you're a university or in the public sector, you might do research or teaching or training around seL4, and the foundation allows you to advertise your teaching or your research and establish collaboration with other universities doing research, training or teaching. So these are all the benefits of becoming a member, and once you've done that, what are you going to help achieve? One of the big things we want to achieve with the foundation is to make sure that the kernel, and in particular the kernel verification — the assurance — is there for a number of architectures, platforms and features. I've talked about the fact that seL4 has very strong mathematical proofs; well, it depends on the platform and configuration. The current status is the following, and probably Gernot has mentioned it in his talk just prior to mine. In terms of the code itself, the implementation, it's available on the main three architectures — x86, Arm and RISC-V — and this is for unicore platforms. There is also some support for multicore, but this doesn't have the assurance, the verification, at this stage, and this is exactly what we want to tackle through the foundation. As for the proofs, they are available to different degrees. For instance, the most verified platform at this stage is 32-bit Arm on unicore, and for this you have proofs of correctness and security. For x86 and RISC-V, both 64-bit, you have the proof of functional correctness on unicore. You also have an extension, a new version of the kernel, that adds support for mixed-criticality systems, which is basically going to become the future default for the seL4 microkernel, and here the proofs are also nearly done. You can hear more about the exact status on the sel4.systems website, and there is also a white paper that I can highly recommend, which gives an overview of all of seL4's design principles, supported platforms and so on. But basically, through the foundation — what I want to highlight here is that if you are an adopter or a service provider or a platform provider, you might not be on a specific platform that is readily available right now, and so we want to make sure that you will be in the near future. So in particular, this is another way of seeing the future — the wish list, if you may.
On the left-hand side you have everything before the extension of the kernel to support mixed-criticality systems, and on the right-hand side you have what will become the default once the proofs are finished, which is with the mixed-criticality-system support. On the left-hand side you can see that the most verified platform at this stage is 32-bit Arm, where we have proofs of functional correctness, binary correctness and security. What that means is that you have the strongest possible assurance that your binary — the thing that is executing on your system — is doing exactly what the spec says and that it enforces security, that is, the integrity and confidentiality of the applications running on top. All the other columns show the status for different platforms or different features. As you can see, there are a lot of cells here that need funding, in particular having the security proofs for platforms other than 32-bit Arm, or the much-needed multicore support, which we have the code for but do not yet have the verification for. So through the foundation, with the funding of the foundation, pooling together organizations that all need that kind of support, we can get funding to extend this kind of assurance to multiple architectures, platforms and features. Through that, we want to provide a platform that is mature and readily deployable. By that I mean that the kernel itself — this being the microkernel devroom, I don't need to justify much — the kernel is only one part of the operating-system platform, and therefore we want, through the foundation, to have as many operating-system components as possible, to have a mature, readily deployable platform that people can use from scratch. And finally, we want to extend the assurance story beyond the kernel. In particular, you might have drivers that are in the trusted computing base, or you might have specific applications — let's say a filter, for instance — that might be in the critical trusted computing base as well, and therefore you might want high assurance for those components too, and we want to extend the story there. More generally, we want the expertise around seL4 to keep increasing and to make seL4-based systems ubiquitous — that's the general big aim, having verified software, software that you can truly trust, becoming really ubiquitous.
So I hope that by now I've convinced you about the benefits of joining, so now let's see how you concretely join the seL4 Foundation. It's a really easy process — there is a link there, but if you don't want to remember that link, you just google seL4, you click I'm Feeling Lucky, you immediately get to the sel4.systems website, you find the seL4 Foundation logo, you click on that, and then you look for "join now". This will guide you through the process of joining, and through that you'll have to choose your membership level. The membership structure of the foundation is the following: there are three levels. On the right-hand side you see the associate level, which is free — this is mainly for non-profits, open-source projects and government entities. Otherwise it's either premium or general. If you are an organization betting on seL4, building your products on it, what you want to look at is the premium membership, because this gives you a guaranteed board seat — this is where you can drive where the development of seL4 will go and how those decisions are made. The general membership fee depends on the size of your organization, and general members vote for one seat as an entire class. Another thing to note is that the seL4 Foundation has been established under the umbrella of the Linux Foundation, which means that you need to be a member of the Linux Foundation. So if you're not already a member, you need to add that membership on top of the seL4 membership, but the process itself is the same, which means that the three-click process I mentioned on the previous slide includes the process of joining the Linux Foundation if you're not already a member. So this is the easy way to join, and now I want to give you an update on both the governance side of things — the board — and the technical steering committee. This is everything that has happened on the governance side; I'll go through it one by one. Firstly, the foundation launched in April 2020. This was quite a nice event, with a number of press articles. It was just at the beginning of the time when the world went into lockdown, so we had a nice celebration, but everything went virtual of course — we are kind of used to that now, but it was just the first time back then. We have a governing board, and let me present the governing board to you. On this board you have Gernot Heiser, who you've heard about — he's been driving the seL4 agenda for the last two decades and more. Next to him you have Dan Potts, who is from Ghost Locomotion, a California-based company that is working on self-driving cars and using seL4 to keep them safe. Next to him you have Sascha Kegreiß from Hensoldt Cyber, a Munich-based company working on embedded IT products and using an seL4-based operating system combined with a RISC-V processor to meet the highest security standards. Next to him you have Gerwin Klein, who has been driving the verification of the seL4 microkernel from the start. And finally you have John Launchbury, who I hope doesn't need an introduction: he was a founder of Galois and of Tangram Flex, but he has also been a director at DARPA and has been a strong supporter of the seL4 microkernel and the seL4 vision for a long time, including through DARPA funding programs like the HACMS program, which has seen seL4 embedded on unmanned helicopters and robots to secure them against cyber attacks.
This governing board is the starting board, and after one year of running, the foundation will host a vote according to the governing rules, which will give the general members the opportunity to have a seat on the board, and any new member joining as premium will be on the board as well. The foundation has a website that you can find through the sel4.systems website, and there you find all the information that I mention here — the governing board, how you join, the technical steering committee and so on. We have a growing set of members. We have the founding premium members, which are CSIRO, Hensoldt Cyber and UNSW. CSIRO and UNSW are premium members in recognition of having created seL4 and its verification; Hensoldt Cyber has heavily funded the verification on the RISC-V platform. And then you have a number of general members — let me just go through them quickly. Ghost, on the bottom line, is the one I mentioned that is doing self-driving cars using seL4. Next to it you have DornerWorks, who are doing software services and in particular providing support services for seL4-based systems for companies in the US and elsewhere in the world. On the first line you have Adventium Labs, who have also been working, like DornerWorks, with our team — the seL4 team — for a long time, including through DARPA-funded projects, and Adventium is doing R&D around high-assurance software, in particular in the medical space. Then you have Breakaway Consulting, who are Sydney-based in Australia and have been doing a lot of embedded medical device support, but have now also invested in looking into seL4 and will become a key partner in seL4 support. Finally you have Cog Systems, who also have a long history of seL4 support, starting first with OKL4 through the General Dynamics company, but now switching to seL4 as well. I'm also very excited to say that there is another name here, but at the time of recording the announcement is not yet official, so I'm pretty sure that by the time this is shown at the conference the name will be there — just go and have a look: a very exciting partner that we've been working with from the very beginning on seL4, a big name being there, so go and have a look. Of course we're really keen to have your name added there, so once you have seen these slides and you're looking at seL4 as a solution, please join and add your name there, for the reasons I gave previously. The other thing I wanted to mention is that we're working on endorsement and certification schemes related to services, training and products. We have already endorsed three service providers — DornerWorks, Breakaway and Cog — all of whom I mentioned before, and this is really about establishing a trust relationship between the core seL4 development team and providers that we know and work closely with, that understand the whole process and the implications of the standards and the verification and so on. This is really a way to demonstrate that trust relationship, so that if you are, say, an adopter, you know where to get support that is endorsed by the foundation. Finally, the board has been working on other things like trademark use — in which cases you can use the name and promote it — and this is all available on the website, as well as the white paper that can serve as an overview and an introduction to seL4's context, design principles, supported platforms and so on.
So this was the update from the board. Separately from the board there is also a technical steering committee, which, as its name says, looks at the technical aspects of seL4. These are the people involved in the technical steering committee of the seL4 Foundation. Very quickly: Matthew Brecknell is a verification engineer in the core seL4 team, very involved in the big projects for the verification of seL4 on new platforms, as well as making sure that the verification is future-proof, in the sense that it can sustain changes and is robust to changes. Ihor Kuz is also part of the core seL4 development, on the systems side of things, and is leading our projects in the Trustworthy Systems group. Rafal Kolanski is the team leader of all the proof engineering, in particular all the verification of seL4 on different platforms. Corey Lewis is also a verification engineer, part of that team. On the next line, Anna Lyons and Kent McLeod are both on the systems side of things; they used to be part of our team and are now at companies also supporting seL4, and they remain key members of the seL4 systems team. And then you have myself, Gernot, who I already introduced, and Kevin Elphinstone, who is basically the original designer of the seL4 kernel itself and is still involved in all of the design discussions around seL4's development. Okay, so after introducing the technical steering committee — what are we doing? Our goal is really to increase community involvement and contribution. We really want to have more people involved, and that's the priority, because we're just launching this foundation and we want to be sure that we can get help from the entire community to support the development of seL4. This means that we want to have more open, more transparent ways of doing things. We've opened the issue tracker, for instance, and the repositories, so that everything that is going on, both in the development of the code and the proofs, is visible to the external world. We also introduced a reviewer role, which means that people can help on pull requests and issues, and this is the stepping stone to becoming a committer and joining the technical steering committee. Currently we have a few people in that role from the various companies and organizations in the community, the members of the foundation. There is also strong ongoing work to make it easier to contribute to the repositories, in particular regarding processes and documentation, as well as making sure that everything that can trigger actions on GitHub is accessible to people who are not necessarily on the core team in the Trustworthy Systems group. This has a number of challenges; in particular, one of the big targets is to be able to trigger the verification tests — to make sure that if you change something in the code, the impact it has on the proofs is visible — but this involves technical challenges that we're working on. On top of that, we really want people to contribute to the roadmap as well. We have members that are working on specific support for tools or user-level technologies, and we want to make sure that we can include them in the roadmap, so we have a process for that now, and the process is described on the website.
We also want to have a notion of platform owners, which means that across the main three architectures there are a number of specific platforms, and we want the community to help support those platforms and become owners of a platform, so that it is supported by the entire community and not just one single team. That's all I can mention from the board and the technical steering committee, and so I hope that by now I've convinced you of the two main messages of my presentation. The first one is: if you are in the area of safety- and security-critical systems, you want to bet on the most trustworthy foundation, which is the seL4 microkernel. And once you've done that, you want to join the seL4 Foundation, because by making that choice you join the foundation to be part of this world-changing adventure, you contribute to a community aiming at changing the world, and you contribute to driving this change towards software that we can all truly trust. Thank you very much for your attention, and I'll be happy to take questions in the Q&A session just after this. Thank you very much.
|
The seL4 Foundation was created in April 2020 as a project of the Linux Foundation. Its aim is to provide an open and neutral framework for developing seL4 and its ecosystem and to promote adoption. The talk will give an overview of the seL4 Foundation, its goals and activities, and the benefits of joining.
|
10.5446/52735 (DOI)
|
Okay, so for the people outside our group who are watching the stream: this is a panel session, but it is meant as an open discussion between representatives of some of the open-source microkernel-based projects. We are still waiting for Udo Steinberg from the NOVA team — or, I should better say, from Bedrock Systems — to join us, so let's give him a minute or so. Okay, anyway, the intention is also to have this as a sort of replacement for the traditional microkernel dinner, which is obviously quite informal, so I suggest we start with our refreshments as well — don't hesitate. Wonderful. You should have said that in the invitation email, Martin, so just water for me. Cheers. Water looks like vodka. So, that's a lot of vodka. Then we will see how this discussion will go. Okay, and before Udo is able to join us, I would suggest we do a short introductory round for those who don't know our faces. I will probably start with Norman — again, the order is just the order of acceptance to this devroom, or to this panel, so it's not seniority or anything like that. So, Norman, why don't you start? Okay, yeah, I'm Norman Feske, co-founder of the Genode project. We have been doing this project as an open-source project for about 12 years now; before that time, the groundwork was done at the University of Dresden two years in advance. I'm still doing the architectural side of this project, and in my day-to-day life I'm also doing a lot of development work. And I think, along with you, Martin and Jakub, I'm one of the most regular attendees of the microkernel devroom, so many regular visitors of the devroom may know my face already. Thank you, thank you, Norman. Julian, why don't you continue? Thanks, Martin. I'm Julian Stecklina. I'm also an outcome of the TU Dresden operating systems group, like multiple people here. I have been working on some form of microkernel-based systems since around 2009, I think, with a quick detour to working on the cloud at AWS, where there was a Linux-based system that's also pretty component-based. At the moment I'm maintaining Hedron, which is a NOVA fork at Cyberus Technology, and the main reason I'm here, I think, is that I've also been a bit active in organizing this devroom. I'm also trying my hand at getting the people in Dresden, where a lot of us are based, to meet each other, so we have a very infrequent meeting, and we are also trying our hands at podcasting. Thank you. So Udo is joining us at the right time for his introduction, so Udo, why don't you say a few words about yourself — if you can hear us. We probably cannot hear you. Okay, before Udo manages to iron out his setup, Matthias, why don't you continue with the introductions. Yeah, thank you very much, Martin. So my name is Matthias Lange. Currently I'm working for Kernkonzept, which is a spin-off from TU Dresden as well, also from the operating systems group. I've been working on microkernels for 15 years now, which is a pretty long time, I guess. Here I'm responsible for developing stuff, but also for some of our key customers — that's the sort of stuff I'm doing. Thank you very much. Gernot, please continue. I think we can keep this short: I am Gernot. I can beat Julian's microkernel affiliation — I've been doing microkernel research for a quarter century, or a bit more. I've been the leader of the team that developed and verified the seL4 microkernel. In my day job, I'm a professor at UNSW Sydney.
And one of my many part-time hats: I'm the chairman of the seL4 Foundation. Thank you very much. Then there is Jakub, Jan Maaf and me; we will serve mostly as the moderators. But still, Jakub, why don't you introduce yourself as well? Yeah, okay. So I'm Jakub. I also work for Kernkonzept, like Matthias. I usually have presentations on HelenOS or on L4Re in the microkernel developer room; this year I didn't have any — in fact, I didn't do much on microkernels except for my day job. And I'm trying to help Martin with running this room, but he's doing a wonderful job, so I'm not really necessary. Thank you. Anyway, Jakub, you are great support, even if it's somehow manageable. So then there's me. I'm Martin Děcký. I've been around microkernels for many, many years. I'm one of the co-authors, with Jakub, of HelenOS, but I'm also working on microkernels in my day job: I have been working for Huawei Technologies since 2017, and since 2018, at least, I'm working on our in-house microkernel-based system, which you might have heard of as HongMeng. And actually, now I'm working on a second microkernel at Huawei, which I cannot mention. You just did. I cannot mention the name; I can say that it exists. Yeah, that's it. Okay, Udo will probably join us later and we will obviously give him the chance to introduce himself. But, you know, for the sake of time, let me start with a very simple question to all of you — and again, we can probably keep the same order as before. When you look back on 2020 regarding microkernels, what do you think was the most interesting thing that happened — in your projects, in other projects, a success story or anything like that? Yeah, so speaking just for the Genode project, I think the most exciting event was in the summer, when we got the Chromium engine running natively on Genode. As you may know, with Genode we try to really build an operating system, instead of — like many other projects — focusing on the microkernel as a hypervisor platform or as an isolation platform. We try to have a first-class operating system, which is a different ambition, and on this way there are many, many challenges, like for example making the POSIX compatibility solid enough to host everyday applications. I think the most challenging application so far is the Chromium web engine, which is just a giant piece of software coupled together — it takes about a day to build it — and we managed to bring this to life natively on the Genode system. I think this was a really great moment for us, to see this workload coming alive on our system while still providing the benefits of Genode — the low-complexity components that are not tainted by POSIX, all living in one integrated system. I think that was a very fulfilling moment for me. Thank you. Thank you. So Julian, why don't you continue? There were two things. On the work side, we managed to actually ship a product to the German public sector that includes our NOVA fork called Hedron. So that means that lots of people working at government agencies are actually going to use an open-source microkernel on their daily driver, which is an interesting experience.
And on the community side, I think the most important thing for me was that, together with Florian Pester, one of my friends and colleagues, we tried our hands at Zuslock, which is a systems podcast where we try to interview all the interesting people that we have in our extended network, and this has been our way of dealing with the pandemic, in some sense. Wonderful. Thank you. So, I'm not sure if Udo can say something now — I mean, I don't see a mic symbol at his tile — so Matthias, why don't you continue still. I think I can chime in in the same direction Julian just did. We actually had two customer successes. As you are aware, we don't have our own products, so to say, but we help our customers to build products with L4Re inside. We had one customer actually achieving, after a very long time, a security certification which rated this product for processing data up to NATO-secret level — so that's for the public sector. And the other big achievement, which we achieved in late summer, is that L4Re got literally on the road. We partner with a major automotive supplier here in Germany, Elektrobit, and they build a product with L4Re inside for so-called high-performance computing in automotive — so we use the hypervisor side of L4Re here — and this product actually got shipped onto the road. Unfortunately, I cannot tell you more details about it, but if you are able to use Google and put in the right terms, then you probably get an idea of where you can actually find L4Re driving around. Thank you. Thank you very much. Let me try Udo once again. Can you hear me? We can hear you. We can hear you. So, first, please introduce yourself in a few sentences, and then the question was: what was the most interesting or noteworthy thing in 2020 regarding microkernels, according to you? Okay. My name is Udo Steinberg. A lot of the people in this room know me. I studied at TU Dresden and was part of the L4 community for a long time, and about 10 years ago I started a project called the NOVA microhypervisor. I work for Bedrock and I'm heading the kernel architecture and development team, and we continue working on NOVA and an entire system around it. The scope has expanded quite a bit compared to when we started — it was originally just an x86 hypervisor; it now also runs on ARMv8 — and there's also a significant effort underway to formally verify it. So that's, in a nutshell, what I do at Bedrock. The second part of the question was what was the most exciting thing in 2020. Correct. So the most exciting part was to put all the different efforts together, making the kernel and the whole stack portable from, let's say, x86-only to Arm — I talked about this a lot at last year's FOSDEM. The question this year was how much of the x86 code and the Arm code could actually be unified, because they are somewhat generic, and it turns out that in NOVA about 30 to 40% can actually be unified, and that's something that we're still working on — we haven't released the unified version yet. The other part was how to specify, how to verify and how to automate the formal verification process of our software as much as possible.
And that was and is quite an iterative process, to the extent that whenever somebody changes the code, somebody changes a spec or somebody changes a proof, all that stuff gets automatically rerun to tell us if anything broke, and we consider it very important in the development process that we don't do this manually, and not just every now and then, but basically in a continuous-integration fashion. Okay, thank you very much. And Gernot, how about you? As I said in my talk, the main event was setting up the seL4 Foundation, which I think is a real game changer. This is an effort to get a large number of industry players behind the seL4 platform and work on broadening the ecosystem and general participation and deployment. And as I also said in my talk, the second major event was verifying the seL4 kernel on RISC-V, so we now have a functional correctness proof for RISC-V. And that's the main event. To Udo, on continuous integration: this is what we have been doing on seL4 for 10 years — nothing gets committed to mainline until the proofs have gone through. I think it's a very important property to have. Thank you. I have one question to Gernot regarding the seL4 Foundation and your relationship with the Linux Foundation — so, a very straightforward question: how do Linux people look at you? You need to understand that the Linux Foundation is not Linux. The Linux Foundation started as a foundation to back Linux, but it has grown beyond that — it's literally thousands of projects from across the whole spectrum of open source. So we don't really specifically interact with Linux people, we interact with Linux Foundation people. The Linux Foundation is basically a framework that has been set up for easily starting new foundations, and the seL4 Foundation is legally part of the Linux Foundation — blah blah, something, I'm not a lawyer — but it's basically a subsidiary of the Linux Foundation, like literally thousands of other foundations. Okay, okay, thanks for qualifying. Yeah, Jakub, do you have anything to add from your side? Yeah, I would just like to say that I enjoyed seeing that most of the projects kept going despite the current crisis, even though we will probably be talking about it, and that you are still making releases. But I wanted to especially highlight Julian's and Florian's podcast, which I enjoyed listening to, and I think that's something which connects our communities, so I liked it very much. On my project front, the HelenOS project front, I enjoyed seeing still one developer actively working on it and improving the graphics stack, so that it now looks more like Windows. That was not the goal, but I enjoyed seeing that it's being worked on. Yeah, I have to quite second Jakub: in my personal life, 2020 can really be summarized as, you know, all work and no play makes Jack a dull boy, but their podcast — and personally for me, the privilege of being invited to one of the episodes — was really among the highlights, so thank you Julian and thanks to Flo for making this. Thank you for the warm words, I will relay them. I would just like to encourage everyone who is listening to our stream — I mean, I hope there are at least a few people doing so, but I can't tell for sure — to send us any questions to ask. And in the meantime, let me ask you another canned question. We have already touched on the COVID-19 pandemic.
So how did this affect your work, or your communities, or whatever you do? Please, start.

So, the relationship to our customers, the commercial users of Genode — I think the pandemic actually strengthened it. For example, before the pandemic we never did video chat and things like that; we had avoided it, but now it has become an everyday thing. So we got closer to several people with whom we only had phone calls or emails before, and this was actually a positive outcome. On the other hand, on the community side of things it is a bit sad, of course — the fact that we cannot get together in a restaurant with each other. In the Genode community there are two events that we had to cancel. In springtime we have this Hack'n'Hike event, which is basically a gathering of about 20 people at some nice spot in nature, hiking in the daytime and hacking at nighttime. We had to skip this last year, which was really sad. And in summer we usually turn our office into a kind of community co-working space for two weeks, which is also one of the highlights of the year — we had to skip this as well. So this was of course saddening, and we look forward to next year, or the year after at the latest, when hopefully things get better.

Thank you. I can second what Norman said. From the business side, I think there was no impact at all. But from the community side and the rest, it's really difficult, because even before the pandemic it was already hard to find spaces for all the different people to interact. There were a few events like the Genode barbecue, which is always highly regarded by everyone. We also used to organize pub trips, and the plan for 2020 was to be a bit more inviting to outsiders, to open this up to more people that are not in this in-group and have some public talks — and this sadly hasn't happened. So we're hoping that in 2021 we can actually start having semi-regular talks about systems topics, where we invite all the people from the companies around to join and maybe have a refreshment afterwards. But yeah, mostly the community life has paused.

But actually, since you mentioned this, I remember this one online talk that was organized — I believe it was in May or so. There were actually many people from the West Coast of the US and so on, so I wouldn't call it a complete failure.

This is true. For the people who don't know, we organized one virtual talk with Nils, a colleague from the university, who presented a very interesting project — a very creative microkernel-like system that runs on heterogeneous systems. This talk was actually super nice. As Martin said, we had people from the West Coast, there were surprisingly many people from Apple in this talk, and also a lot of well-known faces that we usually only see at the devroom at FOSDEM. My takeaway from that one was that there's definitely demand, but it was a bit difficult to actually get speakers and get everything lined up. And of course, everyone on the organizing side had their personal life to take care of, which was not necessarily easy — so that meant this only happened once. But we definitely want to continue doing that.

Wonderful. So, Udo, you and COVID-19?

That's an interesting story. When we founded Bedrock,
we didn't found it as a company located in Europe or in the US or in any particular place. We basically hired a few experts around the world, wherever we found them, and it's in the DNA of the company to be, I would say, distributed. It was that way before the pandemic struck. So we always knew how to work remotely and distributed, how to do conferencing over video, how to collaborate with distributed tools and things like that. The only thing that really changed for us is that we used to do regular face-to-face meetings every two months or so, somewhere around the world. Some of you have seen that, because wherever we were, we usually met people from the community for a joint dinner or drinks. That has gone away, because nobody can really fly anymore, and we're looking forward to resuming it when it's possible. Other than that, I don't think it has impacted us much — except that obviously everybody would love to have more of a social life and less of a work life, but when you get locked in, you get more things done. That's, I would say, a sad side effect of the situation.

Thank you, thanks for the perspective. Gernot, how about you?

The biggest letdown was that we missed out on a big party on the launch day of the foundation — we had been looking forward to that one for months. As for outside engagements, I don't think there was a big change. There are some interesting developments I can't talk about yet, but things have been moving on there; maybe there was a slowdown for a short time, but not too long. The bigger impact for us was really in research, as well as the more difficult engineering projects, particularly some of the verification work. Communication has definitely suffered. Our team was always built on very close interaction, people sitting together in the same room — there's a buzz when you come in and you notice it immediately; you walk through the lab and there are never fewer than two groups of people chatting. Missing out on that was really very difficult for us. It definitely affected some of the engineering and verification projects, because there's just not enough interaction, and it definitely affected research, because a lot of research ideas get generated from informal conversations. So for us it was — even though you may not notice it from the outside — a tough year, and personally I found it a very tough year; I really suffered. I'm not trying to make anything look better than it was; I wouldn't want to do the same thing again. Fortunately, things are improving in Sydney and are in many respects close to normal. But there was an impact.

Thank you. I don't have anything to add from my side, actually. I don't know if anyone else wants to add something.

Maybe I would just like to agree with Gernot on the collaboration between people working together — I saw a lot of faces nodding when Gernot mentioned this. I think that was, for us here at Kernkonzept, also the biggest impact, or where we had a steep learning curve: we had to somehow change our way of communicating with each other and of actually collaborating, because just walking into a colleague's office, asking a question, starting to discuss and getting into the topic is not really possible anymore. This randomness has gone away, as have the interesting discussions sparked by break-room meetings and things like that.
So I think that was the biggest impact we had here at the company. On the economic side, I think we're pretty good, because a lot of our customers are from the public sector, and that's where money is currently being spent on IT projects and on improving security and everything, so the outlook is quite good here. But yeah, it basically changed the work culture here at Kernkonzept a bit, and that was sometimes hard.

Understandable. And sorry for messing up the order. No problem. You were there and I just missed you, sorry.

The positive news is that we actually have some questions from the audience, which I'm very happy about, so let me start with the first one: if you were going to hazard a guess, for those of us interested in microkernel plus object-capability approaches on our desktops, how far away do you think easily running such a system on our desktop computers might be? I think this is a perfect question for Norman to start with, right?

Yeah, I'm using Genode on my laptop, day to day. I have been doing this for, I think, five years now — when I once visited you, I also showed you the system back then. Since that time it has progressed really well. Now we have Sculpt OS, which everyone can download from our website: you put this small image on a USB stick, boot from it, and you are ready to go. It works on most commodity Intel-based laptops; we usually use Lenovo ThinkPads. All the things that cannot be covered directly in the microkernel world can be run in a virtual machine — we have VirtualBox running on top of Genode, so you always have that as a backup, basically. In fact, most of my development work still happens inside a Linux-based virtual machine, but nowadays we have a web browser as well, and we have a quite comprehensive Unix-like environment where you can use the Vim text editor — at least my favorite text editor. There's a custom GUI stack, and there are many drivers, from wireless to NVMe drives. So many things are already working; it's already feasible today.

That's great news. Julian, what do you think?

I think Norman and the whole Genode Labs crowd are our best bet. The only thing that comes close is the Google Fuchsia thing — I can't pronounce it the American way. They have their microkernel-based stack, which I think at the moment is targeted at mobile devices, but I have no personal first-hand experience.

What do you think?

I think the standard answer of microkernel people to reusing legacy software is the answer that Gernot gave earlier: simply put it in a virtual machine. Most of the microkernels that we all use can run virtual machines with good enough performance, with pass-through devices and whatnot, so you could easily run a microkernel underneath your legacy stack. This does not necessarily reduce the trusted-computing-base complexity, because you still have the Linux kernel in it. So one then progressively hoists security-sensitive applications out of the guest into the host. And I think Genode has driven that to the extreme: they basically put as much as they could into the host — not just hoisted out of the guest, but actually implemented from scratch in the host.
The problem with this is that reusing most of the legacy applications that were written for POSIX on top of a microkernel-based API is not very easy, because microkernel APIs are not POSIX — and they are not POSIX for a very good reason. Capability-based access control is one example, and the bloat that comes with POSIX is another. So the difficulty is in porting POSIX-based applications directly to a microkernel API if you don't want to run them in a virtual machine.

Yeah, but that's a problem that we actually solved.

But you have to do that for all the applications that you care about.

No, I don't have to touch them, because we have implemented a C runtime that maps onto the clean-slate microkernel APIs. So we have POSIX: we have a libc — the FreeBSD libc — and underneath it uses the clean-slate microkernel API, but it basically emulates a POSIX API on top, and the applications don't feel the difference. They do their select, they can even do their fork and exec, and they are just happy — in reality they are living inside our components and don't know the difference. That's pretty cool. This is also the key for running complex software like the Chromium web browser on Genode. We have not touched a line of code in there — or maybe we have touched 20 or so. It's not feasible to bring complete software stacks to a microkernel system by changing them; you just introduce bugs. You have to take them as they are, and you have to come up with a reasonably complete POSIX environment, and then everything basically works.

For this reason, I don't think that Gernot's remark in his talk is so forward-thinking, because of course it's easy to dismiss POSIX and say that's from the 90s and we don't touch this old legacy stuff. But what you have to realize is that this is a stable API, and that is an immense value. For the same reason, people love virtual machines: the machine has a stable interface, so people love to jump on it. But now you can decide which stable API sucks more — is it POSIX, or is it a virtual machine? Neither API is ideal as an interface, I think, but both are stable, and there is the value. So I won't dismiss POSIX, because it has this huge value of being a stable API, and it enables this rich workload that we all enjoy in our day-to-day lives.

So how much of the POSIX API did you actually have to implement to, say, cover 90-plus percent of the applications you want to use? Was it all couple hundred system calls, or was it just, I don't know, three dozen?

Basically, we started with the complete C library from FreeBSD and replaced the system call layer with stubs, and then successively replaced those stubs with our own inner workings. The pthread API we took on later and mapped it to Genode's native synchronization primitives. But we took a lot of freedom. For example, the Genode API works — similar to what Gernot explained in his talk — on a state-machine-based idea: a component gets constructed first, so you first run some code that sets things up, and then the component only becomes active as soon as it's triggered from the outside, like a notification or an RPC that comes in.
This is of course the complete opposite of a POSIX program, where execution starts at main, executes step by step, and has a main loop or a select loop and things like that — in POSIX everything is a sequential, procedural approach, not a state-machine-driven approach. So there's obviously a gap between these two APIs, and we bridge this gap in the C runtime. The C runtime underneath uses the asynchronous API — which is a pretty powerful approach — and on top it pretends to have a synchronous flow of control. This is a powerful technique because we can host all kinds of cool things inside the C runtime. For example, each application has its custom VFS, which is just a library, and this VFS is extensible — it has a plugin interface. So we can mount a TCP/IP stack local to an application inside /net or /socket or whatever, and then just point the C library to this as its TCP/IP stack, similar to what Plan 9 is doing, and the application doesn't know the difference. It just uses the BSD socket API and is happy. I think that's really a cool design: it combines both worlds — it keeps the POSIX advantage of a stable API — but it does not sacrifice the clarity of the system underneath.

Norman, can you maybe elaborate a little more on why you are using the BSD-licensed libc?

That was, I don't know, quite a number of years ago — we had to decide which libc to take as a basis. We wanted to make a clean cut from the TU Dresden code base: there was uClibc, popular back then at TU Dresden, and we did not want to take that, just to differentiate ourselves — basically a kind of silly reason. Then we also looked at dietlibc, but we found the license to be too copyleft for us, because we wanted to offer Genode as a commercial product, so that license stood in the way. Maybe today I would think differently and get in touch with those developers. We wanted to have a complete libc. There was also newlib, for example, which was popular in the embedded space, but it was known for not being so compatible with complex workloads. We wanted one implementation that was time-tested. There is glibc, and then there are the other mature open-source operating systems, of course the BSDs. So we took the FreeBSD libc and went with it — that's basically the story. Nowadays you also have musl libc and other alternatives.
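To make this concrete, here is a minimal sketch of the kind of unmodified POSIX code being described above — plain BSD sockets and file descriptors, nothing Genode-specific. The address, port and request are placeholders; the point is simply that source like this can be compiled and linked unchanged against a ported libc whose socket calls are backed by a VFS plugin, such as a TCP/IP stack mounted at /socket, instead of by a monolithic kernel.

    /* Minimal POSIX TCP client: only standard libc and BSD socket calls.
     * Address and port are placeholders for illustration. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port   = htons(80) };
        inet_pton(AF_INET, "10.0.2.2", &addr.sin_addr);  /* placeholder address */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            const char *req = "GET / HTTP/1.0\r\n\r\n";
            char buf[512];
            write(fd, req, strlen(req));                 /* plain POSIX I/O */
            ssize_t n = read(fd, buf, sizeof(buf) - 1);
            if (n > 0) { buf[n] = '\0'; printf("%s", buf); }
        } else {
            perror("connect");
        }
        close(fd);
        return 0;
    }

Whether the socket ends up being served by a kernel network stack or by a user-level stack plugged into the application's own VFS is invisible at this level, which is exactly the property being argued for.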
Sorry — Martin, Matthias, you wanted to say something.

Maybe I just want to take a step back and take a position somewhere between your standpoint and Udo's. Martin, in the preparation for this panel, asked how we — or how the community — can drive further adoption of microkernel technology, and I see different streams here. What you are doing, Norman, with Genode and Sculpt, is trying to bring a broader user base onto this topic by also providing a rich software ecosystem around it — there's wonderful software out there that you want to use as an everyday user — and I think that's a really interesting approach. But I can also agree with Udo's standpoint, and that's actually what we are seeing too: we have to take the business side of things into account. We have to somehow earn some money, and we have to convince customers to step onto microkernel-based systems and see the value in them. And usually you cannot argue for just porting their whole business-logic software stack over to a microkernel-based system. So the natural first step is usually to take virtual machines and configure and set them up in the right way, so that they get nice new security properties they weren't able to have before. Now that we have done that for a couple of customers, what we are seeing is that they're starting to ask: what can we do now to improve on this? They're thinking about getting the security-sensitive components out of the virtual machines and putting them into native, microkernel-based components using the actual microkernel API — because now we are able to convince them and argue that they get nice new properties out of this microkernel API instead of using the whole Linux and POSIX APIs.

Thank you, thanks for the discussion. I would just like to give the word to Julian, because he raised his hand.

Yeah, my thoughts have evolved now that I've listened to you. I think it's a spectrum. The one common thing is that people using the system are very reluctant to learn something new — this fits with all of your observations. If they can just take a Linux VM and run it, that is something people understand. But at the same time, programming against the POSIX API and just using a different tool to put everything together is also something that people understand. The moment you force people into an API that works fundamentally differently, there's a large hurdle for adoption.

Thank you. I think now Gernot should add his opinion.

Yeah, so in terms of the original question, that's the easy answer: Genode — that is the answer. It's a great system, it really is usable, and it is improving in terms of being able to use a microkernel-based system on a standard platform as a working environment. So that's really awesome. It depends, though, on what you want. Part of the microkernel motivation is a really highly dependable system from the security, safety and reliability point of view, and in that sense Linux is like the bottom layer of dependability: it's got a huge trusted computing base that no one can ever get correct, and it's literally full of tens of thousands of bugs and thousands of potential exploits. Compared to that, something like Genode is, I'd say, a medium-level security thing. It gives you much better security simply because it minimizes the trusted computing base and componentizes the system, so if there is a fault somewhere, its effects are bounded — that's the classical microkernel story. If you strive for really high security — provable enforcement of general security policies, whether military-style hierarchical policies or commercial-style policies like Chinese wall, what's used in banks, for example — then you need to go further, and something like a POSIX model just cannot do that. It does not meet the requirements for a truly secure system. For example, POSIX is inherently not least-privilege; that's just at odds with such a model. So if you're after building something that's usable, sure, POSIX support is the practical way to do it, and that's what the Genode people are doing. And of course their internal structure is much better than that: they have a capability system that gives you certain nice properties.
But it's a far cry from being able to prove that you have a really secure system. And of course that truly secure system is not going to happen on the desktop anytime soon — that will take decades. So what we are working on is the fundamentals of how to design a system like that, coming up with a prototype, and then eventually letting other people actually build something. It's not something we will be able to do; that will be a large community effort, and it will take a long time. And as discussed earlier in the chat, hopefully we can integrate something like Genode and leverage the great work that's been done in the past. So when asking questions like that, you really need to understand: what is the purpose? What are your objectives? What do you want to achieve? Depending on your objectives, the answer is different.

Norman wants to react.

Yeah. I just want to be careful about the notion of Genode and POSIX. I want to clarify that Genode is not a POSIX-based system, not at all. POSIX is just an option — it's a library that a component can link to, and then it is a compatibility layer for existing applications. But Genode, the Genode API, Genode components are clean slate. They are only C++, no C runtime whatsoever; it's 15,000 lines of code sitting at the root of the TCB, and that's it. So I think it would do Genode a bit of injustice to dismiss it on the grounds that it also supports POSIX — because it sounds like that. I also noticed in your talk that you mentioned you are designing a new multi-server operating system yourself. On the other hand, you are speaking about community and bringing things together, so I'm confused: what do you think is the positioning between this multi-server system you are going to build and Genode? Is it a competition? What do you think about it?

As I said, this is research, as opposed to deployment or engineering. Genode does an excellent job of engineering a system based on well-understood principles. No one has been able to build a general-purpose operating system that is provably secure in the sense that it is provably able to enforce general security policies — and as I said, policies ranging from military-style to commercial-style to whatever else you want. This is our aim, and it's very much a research project. I do not expect any practically useful system to come out of it; I expect a design to come out of it that can guide future engineering work. As I mentioned, there is a white paper which has been submitted to a workshop — I'm waiting for the reviews and then plan to publish it — where we actually go in detail through the requirements that you need to satisfy for a system like that. There is no system that meets these requirements right now. So I hope this answers your question: it's not a competition. It's the next step in the development of operating systems as a discipline, giving you a design, maybe a blueprint, for building a truly secure system of a sort that has never existed before.

Yeah, but that's the problem that I have. If you say something like that, it always has this kind of vibe to it that everything out there, including the stuff that we are doing or others are doing, is insecure. You write something in your white paper like: there is no assurance story for Genode, for example. And this dismisses other people's work, because of course there is an assurance story — that's the whole point of building the system the way we do.
And you're basically partitioning the world into the people who have grasped this formal verification thing and have seen the light, and then the Muggle people who are just developing things and can never see the light, because only the formal verification guys can see the light. I think this divisive tone is always present when I read your white paper or when I see the talk, and it is putting me off, really. I can't take it.

I'm sorry about that, Norman, and I don't mean to insult you. But there's a difference with things that are traditionally engineered: as Dijkstra said over 50 years ago, testing — and by implication other things like code inspection — can only ever show the presence and not the absence of bugs. Only formal proof can show the absence of bugs, and similarly, only formal proof can show that a system is truly secure. I think Genode is the best that present technology can give you, but that's not where we should end. We should strive for something that is better — this is what research is all about — and in particular for something that is able to satisfy really high-grade security requirements, which can only be delivered by a formal proof. And this is what we're working on.

Okay, thanks. Thanks for all the opinions; I think this was a very nice discussion. I would just like to say that this really shows that there are no simple answers to simple questions: how you define your goal affects your path and your approach. Just to give you maybe a slightly different perspective: we agree that Genode is a very nicely built system, and on this front of providing a traditional multi-server microkernel OS built from scratch, it is probably the most advanced for general-purpose use. But like Gernot said, it unfortunately lacks the formal proofs, which might be important for some people and some use cases, and for others not so much — it's always a question of cost. From another perspective, for many people a desktop OS means something that runs a browser, and in that respect even HelenOS, despite not being as advanced as Genode, is really on the verge of that — we just need to port a browser, and it will be, for many people, a working desktop operating system. And from yet another perspective, if you just want POSIX compatibility, or maybe, let's say, GNU/Linux compatibility, you can just use GNU Hurd or MINIX 3, and you will also get a working microkernel-based operating system. Of course there are other drawbacks, like hardware support and such, but I'm just trying to say that there are really many answers to the question; it depends on the context.

I think Gernot made a good point when he said it depends on the audience you are targeting. The consumer audience, which wants a rich software ecosystem with all the software and all the browsers and so on, usually also has lower requirements on the actual level of security and isolation needed for their use case. We are going more into the public sector, where you have to deal with confidential or secret data, and there that requirement rises — and those customers are usually also willing to invest more in getting a more secure system than the standard one you can buy or download from the internet. And then, on top of that, there is still academia.
And I think we're lucky to have that, still pushing the boundaries further. Maybe in five or ten years we will see the ideas that are now research in academia slowly getting traction in everyday microkernel systems as well. The formal proof story of seL4, I think, is a good example of that: we now see Bedrock Systems also employing formal proofs on a regular basis for their system, and I'm pretty sure we will see that in other microkernel projects as well in the near future. But it took a while to get from academia down to businesses actually using it, because it's a complete shift of mindset in how you approach problems and how you have to think about them.

Thank you. Just one technical note: our session will end in about two or three minutes, but as discussed previously, we can continue as long as everybody would like to. So, there is another question from the chat, somewhat related to what we have already discussed, but maybe you would like to add something. The question is: I'm curious what people's perspectives are on the future direction of formal verification — the question is intentionally open-ended. Do you have anything to add to what you have already said?

You mean me, or anyone? I mean, now we have a lot of questions. Whoever wants to say something about it — Gernot, please go on.

So, formal verification is going to grow massively in its importance.
|
Panel discussion and an extended Q&A session on the state of microkernel-based operating systems in 2021 and related topics.
|
10.5446/52736 (DOI)
|
Hello, in this talk I'll be giving an update on the Linux Foundation open-source project Unikraft, a fully modular and librarized unikernel which aims to provide outstanding performance while making it easy to port off-the-shelf applications into unikernels. My name is Alexander, I'm a PhD student at Lancaster University, and I've been lucky enough to join the project over the last year to help bring more applications, tools and OS subsystems into the project as part of my research.

Let's set the scene. Specialization is arguably the most effective way to achieve outstanding performance. This is seen classically with hardware, most recently with application-specific integrated circuits such as tensor processing units for machine learning, Intel's Movidius, a vision processing unit which enables on-demand computer-vision AI, or FPGAs such as the NetFPGA for specialized line-rate network packet processing. But these solutions are costly, requiring additional overhead for development, manufacturing and installation. Additionally, they are inherently scoped and by their very nature can only perform a subset of compute, making them inextensible once built. That's not to say that you can't reprogram them with new logic, but only within what their architecture defines.

When it comes to networking, performance can be achieved by crafting packets ready to go, as shown by Sandstorm, leveraging the ability to write to the wire at line rate when the traffic is known. Another direction is to add the required OS functionality from scratch for each target application, possibly by reusing code from existing operating systems. This is the approach taken by ClickOS to support Click modular routers and by MiniCache to implement a web cache, just to name a few. In the case of ClickOS and MiniCache, the resulting images are very lean, have great performance and have small boot times. The big problem is that the porting effort is huge, and it has to be mostly repeated for every single application or language. MirageOS, Erlang on Xen and runtime.js are examples of unikernels that have language-specific environments and have been created to provide performance optimizations for the language itself.

So in the virtualization domain, unikernels are the gold standard for specialization, showing impressive results in terms of image size, boot times, memory consumption and high throughput. One of the other benefits of having a single memory address space, which we'll come back to later, is the elimination of costly syscalls. Many of the benefits of specialization towards performance come from being able to hook the application at the right level of abstraction to extract the best performance, and this can be done in two ways: transparently, where simply by compiling your application into a unikernel you reap the benefits of lower boot times and memory consumption, or by modifying the application itself to hook into these performance-oriented APIs. For example, a web server aiming to serve millions of requests per second can access a low-level, batch-based network API rather than going through a standard but slow socket. Unikraft's networking subsystem aims to decouple the device driver side, such as virtio-net or netfront, from the network stack or the low-level networking application.

So, can you do it with Linux?
Well, existing monolithic OSes such as Linux have APIs for each component, but most APIs are quite rich and have evolved organically, so component separation is often blurred in the name of achieving performance. For example, sendfile short-circuits the networking and storage stacks. We tried to quantify the API complexity in the Linux kernel and analyze dependencies between the main components. We used a program called cscope to extract all the functions from all the sources of the kernel components, and then, for each call, checked whether the function is defined in the same component or in a different one; in the latter case, we recorded the dependency. Here you can see the dependencies between different subcomponents within Linux.

But doing this with existing unikernels is still troublesome. It still requires significant expert work to build and reuse existing OS components to target a specific application. Some unikernel projects are not necessarily POSIX compliant, meaning that porting an application is non-trivial, since you have to provide the same kind of functionality to the application yourself. And unikernels, while smaller, are still monolithic: for example rumprun, which builds on NetBSD-based rump kernels, still provides a lot of internal OS primitives that are not necessarily used by the unikernel itself. So starting from an existing unikernel project is suboptimal, since none of the options were designed to support this inherent modularity and subsystem componentization.

We opted for a clean-slate API design approach, though we did reuse components from existing work when relevant. All components, including operating system primitives, drivers, platform code and libraries, should be easily added and removed as needed; even APIs should be modular. In order to support existing or legacy applications and programming languages, we aim to provide POSIX compliance as optional components themselves, while still allowing for specialization under the API. So the kernel should be fully modular, and the kernel should provide a number of performance-oriented APIs. In contrast to classical OS work, which roughly splits between monolithic kernels with great performance and microkernels with great isolation between OS components at the expense of performance, our work embraces both the monolithic design — no protection between components — and the modularity that microkernels have advocated.

With that, I'd like to give you a tour of the Unikraft internals: how an application, given third-party libraries, running on an operating system with a kernel, on a particular platform and hardware, is componentized into a library operating system, along with the Unikraft build system that takes the individual components required for the application's runtime, prepares, fetches, compiles and links them, and produces a final unikernel image which can be run standalone on the target platform and hardware. Since all components are micro-libraries with their own Makefile and Kconfig configuration files, they can be added to the unikernel build independently of each other — unless, of course, a micro-library depends on another, in which case the build system will pull in and build that dependency as well. APIs, such as POSIX sockets, are also micro-libraries, which can easily be enabled or disabled via the Kconfig menu. Unikernels can thus compose whichever APIs best cater for the application's needs — e.g.
an RPC-style application might turn off the uksched API in order to implement a high-performance run-to-completion event loop.

I'd like to show you how easy it is to build Unikraft unikernels. You can use Unikraft's command-line companion tool, kraft, to download already-supported micro-libraries and applications, configure the unikernel to your needs, and build and run the unikernel itself. I have pre-installed kraft on this Linux KVM host, and I've downloaded the source files via kraft list pull. Here's an nginx application. The nginx application is configured via its kraft.yaml: here I can see which Kconfig options are set, which version of Unikraft's core, which architecture and platform it has been set to, and the libraries and dependencies that are required for it to build. I can open the configuration menu by doing kraft configure -k. Here I have a menuconfig-style menu where I can change the architecture, the platform and the individual library configurations. Here are all the different libraries that have been enabled, and I can change individual things here. We can use kraft build to compile the sources, which I've already done in advance. Once done, we get an output image inside the build folder. Here is our unikernel — as you can see, it's only 1.7 megabytes in size. I can then run it using a pre-installed tool called qemu-guest: I point it at the kernel image, specify how much RAM to give it (which is a little bit too much, but hey-ho), pass -i for the initramfs, which I've configured as the default file system, attach it to a network bridge, and set its IP address, gateway and subnet statically. Once run, it just boots and starts serving. I can then curl the pre-assigned static IP address, and you'll see I get a response.

Arguably, an OS is only as good as the applications it can run, and this has been the thorn in the side of unikernels since their inception, since they often require manual porting of applications. One approach is to provide binary compatibility, where unmodified binaries are taken and syscalls are translated, at runtime, into the unikernel's underlying functionality. This approach has the advantage of requiring absolutely no porting work, but the translation comes with important penalties. So how does it compare? This table shows the results of microbenchmarks comparing the cost of no-op system calls and function calls on Unikraft and Linux. System-call runtime translations in Unikraft are 2 to 8 times faster than system calls in Linux, depending on whether KPTI and other mitigations are enabled on the Linux host. However, system calls and runtime translations have a 10-fold performance cost compared to function calls, making binary compatibility pretty expensive. For virtual machines running a single application, the syscall cost is likely not worth paying at all, since isolation is already offered by the hypervisor. In this context, unikernels can get important performance benefits by removing the user/kernel separation and its associated costs; the indirection required for binary compatibility reduces those benefits significantly.
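To give a feel for where numbers like these come from, here is a rough sketch of such a microbenchmark on a Linux host, comparing a forced kernel entry against a plain function call. It is illustrative only: the iteration count is arbitrary, and absolute results depend heavily on the hardware and on mitigations such as KPTI.

    /* Compare the per-call cost of a no-op-ish syscall and a plain function
     * call. Illustrative sketch; numbers vary across machines and kernels. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    #define ITERS 1000000L

    static long __attribute__((noinline)) nop_function(long x) { return x + 1; }

    static double elapsed_ns(struct timespec a, struct timespec b)
    {
        return (b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec);
    }

    int main(void)
    {
        struct timespec t0, t1;
        volatile long sink = 0;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < ITERS; i++)
            sink += syscall(SYS_getpid);      /* forces a kernel entry */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("syscall:       %.1f ns/call\n", elapsed_ns(t0, t1) / ITERS);

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < ITERS; i++)
            sink += nop_function(i);          /* stays in user space */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("function call: %.1f ns/call\n", elapsed_ns(t0, t1) / ITERS);

        return (int)(sink & 1);
    }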
In order to avoid the syscall penalty while minimizing porting effort, we take a slightly different approach: we rely on the target application's native build system, compile it to .o object files and .a archives, and then link those into Unikraft's final linking step against, for instance, musl. To support musl, a libc-type library which depends on the availability of Linux syscalls, we create a syscall shim layer: each library that implements a system call registers it via a macro. The shim layer is then able to generate a system call interface at the libc level. In this way, we can link the system call implementations directly when compiling application sources natively with Unikraft, with the result that syscalls are transformed into inexpensive function calls.

Looking at the results of automated porting based on externally built archives linked against Unikraft using musl and newlib, we show whether the port succeeded with the glibc compatibility layer (the compat layer) and without it (the standard layer). As you can see, the approach is not effective with newlib, but it is with musl: most libraries build fully automatically. For those that do not, the reason has to do with glibc-specific symbols. To address this, we built a glibc compatibility layer based on a series of musl patches and 20 other functions that we implemented by hand, mostly 64-bit versions of file operations such as pread and pwrite.

So how many syscalls are needed to get an application to run? A study of modern Linux API usage and compatibility showed that around 50% of applications only needed about 145 syscalls. Unikraft is nearly there — we're at 140. To show what Unikraft could transparently support, we used the Debian popularity contest to select a set of 30 popular server applications not yet supported by Unikraft, such as Apache, MongoDB and Postgres. To derive an accurate set of syscalls these applications require to actually run, we created a small framework consisting of various configurations (such as different port numbers for web servers, background mode, etc.) and unit tests (SQL queries for database servers, DNS queries for DNS servers, and so on). These configurations and unit tests are then given as input to the analyzer, which monitors the application's behavior by relying on the strace utility. Once the dynamic analysis is done, the results are compared with and complemented by a static analysis. Here you can see a heat map showing the individual syscalls required by these 30 popular applications.

When transparent support isn't available, there's also manual porting, and over time manual porting efforts have become smaller and smaller. In a short survey of developers of Unikraft applications, we saw that the time taken to port an application from start to finish with the Unikraft framework has decreased, largely because the number of libraries provided by Unikraft has grown, making it easier to port applications individually. We've got quite a few libraries now. We can support modern languages — naturally C++, but also other languages such as Go, Python and WebAssembly — and modern applications such as Redis and nginx. We also support some domain-specific frameworks such as DPDK and TensorFlow Lite, and we have ongoing work for Rust, OpenJDK's Java and JavaScript.
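The shim idea itself can be illustrated with a toy registration macro and dispatch table: each "library" provides its implementation as an ordinary function and registers it, so that what looks like a system call resolves to a cheap function call rather than a trap. The macro and function names below are invented for this sketch — they are not the actual Unikraft API.

    /* Toy syscall shim: a registration macro plus a dispatch function.
     * Names are hypothetical; real Unikraft uses its own macros. */
    #include <stdio.h>
    #include <string.h>

    typedef long (*syscall_fn)(long, long, long);

    #define SYS_MAX 512
    static syscall_fn handlers[SYS_MAX];

    /* A library registers its handler at startup via a constructor. */
    #define REGISTER_SYSCALL(nr, fn) \
        static void __attribute__((constructor)) register_##fn(void) { handlers[nr] = fn; }

    /* A vfscore-like library implements "write" as a normal function. */
    static long do_write(long fd, long buf, long count)
    {
        (void)fd;
        fwrite((const void *)buf, 1, (size_t)count, stdout);
        return count;
    }
    REGISTER_SYSCALL(1, do_write)   /* 1 happens to be SYS_write on x86-64 */

    /* The libc-level entry point becomes a plain call: no trap, no
     * privilege transition. */
    static long shim_syscall(long nr, long a, long b, long c)
    {
        return (nr >= 0 && nr < SYS_MAX && handlers[nr]) ? handlers[nr](a, b, c)
                                                         : -1;
    }

    int main(void)
    {
        const char *msg = "hello via the shim\n";
        shim_syscall(1, 1, (long)msg, (long)strlen(msg));
        return 0;
    }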
With that, I'd like to present some of the results from a recent evaluation of Unikraft itself. Unikraft images are smaller than those of other unikernel projects and Linux. Note that the Linux sizes are just for the application and do not include the size of glibc or the kernel itself. This is a consequence of Unikraft's modular approach, which drastically reduces the amount of code compiled and linked into the application — for Hello World, no scheduler and no memory allocator at all. A small image size is not only useful for minimizing disk storage, but also enables quick boot times for VMs based on those images. Here we've done an analysis of popular applications such as Redis, nginx and SQLite.

Here we've measured the boot time of Unikraft with different virtual machine monitors. We define the boot time of a unikernel instance as the time between the moment the virtualization toolstack (such as QEMU, Firecracker, Xen's toolstack, etc.) is invoked to respond to a VM boot request and the moment the unikernel starts executing user code, i.e. the end of the guest OS boot process. This is essentially the time that a user would experience in a deployment, minus the application initialization time, which we leave out since it's independent of the unikernel. As you can see, Unikraft's boot time is pretty minimal across the board.

Here we've measured the minimum amount of memory needed to run these popular applications. We define the memory usage of a unikernel instance as the resident set size, i.e. the amount of physical memory used by the guest unikernel, plus the resident set size of the virtualization toolstack's per-VM data structures. This indicates how many instances a host would be able to run with a given amount of free RAM, which is ultimately what the operator of a virtualization server cares about. Note that these measurements represent the upper bound of an instance's memory usage: some of the pages have potential for sharing across instances (e.g. shared libraries used by QEMU), effectively reducing the per-VM resident set size as VM density increases.

We used our port of nginx and compared its performance to that of existing solutions, including a Linux VM and other unikernels. Note that we did not optimize the applications for performance in these tests; we did, however, solve performance bottlenecks in Unikraft's networking stack, implemented by lwIP (lightweight IP), by adding memory pools to reduce packet allocation costs. These tests were run with 14 threads and 30 connections with a static payload of 612 bytes. We also measured the performance of Redis using redis-benchmark, with 30 connections and 100,000 requests at a pipelining level of 16. Here you can see that Unikraft is able to outperform the rest.

Following these baseline performance evaluations, I'd like to show you how specializing Unikraft itself can further increase performance. Let's look at image size. We have motivated the importance of image size as a key performance indicator, since small cloud images are necessary to reduce unikernel provisioning time and virtual machine startup time. Here we measured the image size of unikernels: we built the Hello World, SQLite and nginx unikernels for the KVM platform and varied a number of configuration options, including dead code elimination, link-time optimization and a combination of both. We performed two measurements, enabling and disabling link-time optimization and dead code elimination, and compared the size of the resulting image using the du utility in order to avoid incorrect measurements due to sparse images.

We measured the performance of the nginx web server in Unikraft with varying allocators.
In order to limit the impact of I/O calls on our results, logging was disabled. We used wrk, an HTTP benchmarking tool, and benchmarked the server for one minute with 14 threads and 30 connections per host. Here you can see that tinyalloc outperformed the rest of the allocators provided. Finally, we measured the internal boot time of Unikraft's nginx unikernel and found scalability issues when using the binary buddy memory allocator. Boot performance is similar with SQLite, with the buddy allocator being the worst and tinyalloc and TLSF among the best. At runtime, though, the pecking order depends on how many queries are run: tinyalloc is the fastest for fewer than 1,000 queries, by 5-26%, but becomes suboptimal with more requests, as its memory compaction algorithms are slower. Using mimalloc instead provides a 20% performance boost when many queries are served. We benchmarked Redis as well, further confirming that no allocator is optimal for all workloads and that the right choice of allocator for the workload and use case can boost performance twofold.

Let's take specialization even further with Unikraft. With Unikraft, we have the ability to customize images in ways that are difficult to achieve even with existing solutions, be they monolithic OSes or other unikernel projects. The highly modular nature of Unikraft's libraries makes it possible to easily replace core components such as the memory allocator, as we've just seen, page table support, schedulers, etc. By default, a Unikraft binary contains an already-initialized page table structure, which is loaded into memory by the virtual machine monitor. During boot, Unikraft simply enables paging and updates the page table base register to point to the right address. This is enough for most applications and provides fast boot performance. Let's see what happens when we change this to dynamic paging. Unikraft also has dynamic page management support, which can be enabled when applications need to alter their virtual address space explicitly, for example via mmap. When this is used, the entire page table is populated at boot time. This figure shows that a guest with up to a 32 MB dynamic page table takes slightly longer to boot than one with a pre-initialized 1 GB page table, and that the boot time increases proportionally with the amount of memory.

Let's look at specializing the file system. Here we aim to obtain high performance with a web-caching unikernel: we remove the default VFS layer in Unikraft and hook the application directly into a purpose-built, specialized, hash-based file system known as SHFS. To benchmark performance, we measure the time it takes to look up a file and open a file descriptor for it. For this purpose, we prepare a small root file system with files in the file system root. We measure the average time per request over a loop of 1,000 open requests and consider two cases: open requests where the file exists and open requests where the file does not exist. We compare the specialized setup against the same application running in a Linux virtual machine with an initramfs and the files in RAM, and also against the application running on Unikraft on top of a default-configured, i.e. non-specialized, unikernel.
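As a rough illustration of the lookup benchmark just described, the sketch below averages the cost of 1,000 open/close cycles for a path that exists and one that does not, on any POSIX system. The paths and the round count are placeholders.

    /* Average open() latency for an existing and a missing file.
     * Paths are placeholders; run against whatever root file system
     * is being measured. */
    #include <stdio.h>
    #include <time.h>
    #include <fcntl.h>
    #include <unistd.h>

    #define ROUNDS 1000

    static double avg_open_ns(const char *path)
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < ROUNDS; i++) {
            int fd = open(path, O_RDONLY);
            if (fd >= 0)
                close(fd);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return ((t1.tv_sec - t0.tv_sec) * 1e9 +
                (t1.tv_nsec - t0.tv_nsec)) / ROUNDS;
    }

    int main(void)
    {
        printf("existing file: %.0f ns/open\n", avg_open_ns("/etc/hostname"));
        printf("missing file:  %.0f ns/open\n", avg_open_ns("/no/such/file"));
        return 0;
    }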
Okay, let's look at how we can specialize the networking stack. We implemented a UDP-based in-memory key-value store using the standard recvmsg/sendmsg syscalls and then created an lwIP-based Unikraft image on top of it, as well as Linux binaries. We measured the request rate for each version and how long it can be sustained, showing the results in this table. Unikraft's performance is 2.5 times better than the guest Linux image and half that of bare-metal Linux, though performance is low across the board. To improve performance for Linux, we must amortize the cost of a system call — in the most basic versions a single packet is sent per syscall. To do this, we used the batched versions of these syscalls (sendmmsg/recvmmsg), leading to a 2x improvement in both the bare-metal and guest cases. To further improve performance, we ported our app to run on top of DPDK, which requires a major overhaul of the app so that it fits into the DPDK framework. This boosts the guest performance to 6.1 million requests per second, but at the cost of dedicating two cores of the VM exclusively to DPDK. For Unikraft, we removed the lwIP stack and the scheduler altogether via the menuconfig and coded against the netdev API, which we use in polling mode. Our specialized unikernel required similar porting effort to the DPDK one; its performance matches the DPDK performance, but it does so using far fewer resources: it needs only one core instead of two for DPDK, it has an image size of 1.4 megabytes compared to 1 gigabyte for the Linux image, and it boots in 80 milliseconds instead of a few seconds. Here we show the transmission throughput of Unikraft and Linux KVM virtual machines with the same setup; as you can see, the throughput is high with lower packet sizes and decreases as packet size increases.

For future directions in specialization, we want to introduce more compartmentalization, writing critical micro-libraries in memory-safe, race-free, statically typed or verified languages such as Rust. We hope to use hardware-assisted memory separation such as CHERI or Intel MPK, additional code reduction, and sealing with the hypervisor so that memory can be set read-only or execute-only after boot. We also hope to upstream standard security features — address space layout randomization, stack protection, etc. — and, for the security-conscious minds, fuzzing for additional verification.

You can find Unikraft online on GitHub or on our website, unikraft.org. We also have mailing lists, or you can follow us on Twitter at UnikraftSDK. That's it for this presentation — thanks for listening.
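As a sketch of the batching point made earlier in this talk — amortizing syscall cost by transmitting many UDP packets per kernel entry — here is a minimal Linux example using sendmmsg(). The destination address, port and payloads are placeholders.

    /* Send a batch of UDP packets with a single sendmmsg() call.
     * Linux-specific; address, port and payload contents are placeholders. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/uio.h>
    #include <sys/socket.h>

    #define BATCH 32

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in dst = { .sin_family = AF_INET,
                                   .sin_port   = htons(9000) };
        inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

        char payload[BATCH][32];
        struct iovec iov[BATCH];
        struct mmsghdr msgs[BATCH];
        memset(msgs, 0, sizeof(msgs));

        for (int i = 0; i < BATCH; i++) {
            snprintf(payload[i], sizeof(payload[i]), "GET key%d", i);
            iov[i].iov_base = payload[i];
            iov[i].iov_len  = strlen(payload[i]);
            msgs[i].msg_hdr.msg_name    = &dst;
            msgs[i].msg_hdr.msg_namelen = sizeof(dst);
            msgs[i].msg_hdr.msg_iov     = &iov[i];
            msgs[i].msg_hdr.msg_iovlen  = 1;
        }

        /* One kernel entry instead of BATCH separate sendmsg() calls. */
        int sent = sendmmsg(fd, msgs, BATCH, 0);
        if (sent < 0)
            perror("sendmmsg");
        else
            printf("sent %d packets in one syscall\n", sent);
        return 0;
    }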
|
In this talk we give an update on the Unikraft Linux Foundation open source project, a fully modular and librarized unikernel that aims to provide outstanding performance while making it easy to port off-the-shelf applications into unikernels. In particular, we will go into details how Unikraft (1) fully modularizes OS primitives so that it is easy to customize the unikernel and include only relevant components, (2) exposes a set of composable, performance-oriented APIs in order to make it easy for developers to obtain high performance and (3) aims for POSIX compatibility, already supporting over 130+ syscalls. In addition, there are ongoing efforts to integrate Unikraft into popular frameworks such as Kubernetes and Prometheus in order to finally bring the promise of unikernels to the mainstream. Our recent evaluation using off-the-shelf popular applications such as Nginx, SQLite, and Redis shows that running such applications on Unikraft results in a 30%-50% performance improvement compared to Linux guests. Unikraft images for these apps are around 1MB, require less than 10MB of RAM to run, and boot in around 1ms on top of the VMM time (total boot time 2ms-70ms). During the talk we will show a brief demo. Unikraft is Xen Project incubator project.
|
10.5446/52738 (DOI)
|
Hey, everyone. I wish we could all be in Brussels right now, hanging out at the Delirium Cafe, drinking amazing beer, and chatting about open source and technology. But I'm hoping that you're all safe and healthy. And one positive is that I can show you a bit of where I live, in Portland, Oregon, USA. So the other day, I was driving in my partner's car, and a light came on on the dashboard. Now, normally, these lights tend to be fairly useless. But the one thing that I love about my partner's car is that it has this little message display. And oftentimes, the messages are extremely useful. It'll say, for example, it's time for regular maintenance when you need an oil change. So this time, a message came up, and it said, see manual. My partner's extremely organized. So the manual was, of course, in her glove box. So I reached over, and I pulled it out, and I looked at the table of contents. And I realized that see manual wasn't very useful. How many times have you gotten an alert? Something's gone wrong in your system. But an alert didn't have the information that you needed to actually solve the problem. Or maybe you went to a dashboard, and you looked at the graphs. And the graphs showed that something was strange. But it didn't provide the context to know how to diagnose the issue, let alone fix it. So from there, you turn to the documentation, only to realize that there's not enough documentation. You don't know what's gone on. So inevitably, you end up escalating these problems. And you make them somebody else's problem. So how do we make things better? First, we need better alerts. And that requires two things. Alerts need to provide context and action steps. Context. Why did this alert get triggered? This seems obvious. But in my experience, people often conflate what they think is wrong with what actually triggered the alert. For example, an alert might say critical service down. But actually, the service emits a heartbeat every 10 seconds. And the heartbeat hasn't registered in the past 30 seconds. Our monitoring is usually a proxy for the things that matter to the business. Knowing the difference is critical context when you're trying to troubleshoot things. So be very clear when communicating why an alert was triggered. The second piece of context is why is this important? How does this issue affect the larger goals of my team or the company? This context allows me to prioritize and potentially develop mitigations. If the missing heartbeat indicates that a service may be down, I need to know what impact that has on my customers and business. In addition to context, alerts need to provide action steps. What dashboards do I need to look at? What runbooks or documentation do I need? And what specific sections or action plans within those should I reference? And also, who can I escalate to when necessary? But alerts are a small part of the problem. In my time at Datadog, helping thousands of customers, it was clear that many dashboards are useless because they're too busy. They have graphs of everything. And 90% of that information is useless. Or worse, it's completely misleading. We do this because we like to make dashboards look cool. But here's the secret. You shouldn't be constantly looking at dashboards. When you're driving a car, you watch the road. You watch the road ahead you and you don't stare down at the dashboard. It's only there to validate your operating parameters. Like when there's a change in speed limit, you verify that you're driving the correct speed. 
So don't design your dashboards to look cool. Design them for troubleshooting. This means two things. And it's really easy to remember. It's the exact same as your alerts. Dashboards need context and action steps. Context means you need to name your metrics well. I've seen so many dashboards where it's unclear if CPU was system CPU or service CPU. And was that CPU available or consumption? Inevitably, it was always the opposite of whatever the on-call engineer thought it was. If you're not sure how to name metrics, I love the guide that the Prometheus community has published. So reference that. More context. Your dashboards should tell the viewer what information it's providing. What services is for? How does that service relate to your other services? Where are the dashboards for those other services? What are the common ways that this service can fail? And how would I validate that on the graphs that are being shown? This probably means that you need to add more text widgets to your dashboards. And those text widgets are also useful for providing action steps. If I'm seeing abnormal behavior in these graphs, what should I do to further investigate or resolve that problem? Where can I find additional documentation? This advice, however, is just best practice monitoring. But how do we do proper monitoring? And what do I even mean by that? Monitoring is usually bespoke. It's an artisanal act of hand-crafting dashboards unique to each service. Proper monitoring means we need to adopt practices of modern software engineering. Proper software development requires documentation, revision control, and testing. These are things that we've now implemented with infrastructure as code. Our infrastructure now has documentation, revision control, and testing. And we need to do that with monitoring. We've already discussed your dashboards and documenting your dashboards with context and action steps. So let's talk about revision control. Store your dashboards and alerts as code. Yes, you can build them in the GUI, and that's a great place to do that. But export it. Put that into Terraform or Chef or Ansible, whatever you use. But you can now revision that code. Or even better, you can treat it like open source and share it with the community on places like the Grafana community dashboards. Keeping your dashboards as code means that you can store them with your services and with your infrastructure, whatever they're related to. Finally, testing. This is where chaos engineering is most valuable. Chaos engineering isn't about breaking things. It's a process of testing your assumptions. Does that graph really show the amount of CPU you're consuming? Use a chaos engineering tool to consume CPU and verify it. And chaos engineering doesn't mean you should cause chaos. It's about replicating the chaos of the real world in a very controlled way. After you've verified that your graphs are showing the correct metrics, verify that the documentation and action steps in your alerts and in your dashboards are valid. The process is simple. Start with a hypothesis. How do you think your systems will behave? How will that appear in your monitoring? And will it trigger any alerts? Are the context and action steps in your alerts and your dashboards enough to identify the problem, respond to it, and lead to the right results? Then inject the failure and verify it. Analyze your results and keep repeating and iterating until everything works as you expect. 
So let's improve our monitoring by ensuring all alerts and dashboards have context and action steps. Let's ensure that they're in code and that they're revisioned and that we share them with the community as open source whenever we can. And nothing is complete until it's past tests, which means you need to start doing some chaos engineering to validate that everything works as you intend. I hope you all enjoy FOSDOM and I look forward to seeing you next year in breath.
|
Good monitoring allows us to quickly troubleshoot problems and ensure that they remain minor blips rather than escalate into hours or days of downtime. But what is “good”? Just like good code, good monitoring should include tests and documentation to ensure that it’s always valid and easily used by everyone. In this lightning talk, I’ll share best practices for validating and documenting your monitoring.
|
10.5446/52739 (DOI)
|
Hey everyone, my name is Joe. I'm with Grafana Labs and today the promise was to talk about getting started with Tempo and to demonstrate an open telemetry instrumented application that supports exemplars. And we're mostly going to be doing that but things have drifted a little bit and it's kind of, this presentation is kind of divided maybe into three sections now. We're going to talk about Tempo as promised. We're going to talk about the state of open source exemplars and why what I initially said I was going to do for this session is not quite possible. But we'll still get to a demo in which we'll demonstrate exemplars in open source and talk about maybe where those are, why they're not quite available yet and what it will take for those to kind of limp over the finish line and for us to all be able to use open source exemplars. So to start with Tempo. Tempo is a new distributed tracing backend where, that we built at Grafana Labs kind of with this goal to sample 100% of our read path. So at the time we were sampling I think maybe 15% or so of our read path and we often were having long queries that we wanted to diagnose. Maybe a customer was interested in why a particular query took a long time or of course we are also interested in that because we want to understand why these queries are taking a long time and try to understand where that time is spent and how to reduce the latency on these calls. And the way we found to do that was simply by sampling 100% of our read path. We wanted to do this but unfortunately with our previous back and we would have to scale Cassandra or Elasticsearch to a point where the cost in memory or CPU, the operational cost, the cost to my sanity would have been far more than was worthwhile, especially for me I think. But the size of this cluster would have just been, my whole job would have been managing Elasticsearch or Cassandra which is not what I want my job to be. I want my job to be building Tempo apparently so we built Tempo. The solution is Tempo, the thing we built and we put, the difference here is Tempo's dependency, the only dependency of Tempo is object storage, so S3 or GCS or Azure. And these are of course cheap to manage. I mean there's no management, right? These are managed services. It's very cheap to store very large amounts of data in these services and to use them. And so the goal here is to basically build a tracing backend on object storage only. The trade off for now at least is that currently Tempo can only search by trace ID. I think the goal is, or at least the goal will be in the next versions of Tempo to support some sort of search, some sort of native search, but for right now we only can do trace ID search. So you are going to only be able to ask the question, give me a trace for this trace ID. This may seem limiting but at Grafano we found many different ways to kind of get around this and we'll talk about those in a second. We'll talk about maybe not even get around but ways that we feel are very powerful ways to search for traces that do not require native search in your backend. So Tempo currently supports all major open source instrumentation libraries, Jager and OpenCensus, OpenTelemetry, Zipkin. So if you are already instrumented with anything like this you can immediately start using Tempo without issue. We put things in S3 or GCS and also Azure, this slide was made before Azure support was added and I actually didn't even recognize these symbols anyway but that red one is S3 and that blue one is GCS. 
And we visualize everything in shockingly Grafano. Discovery, okay so you can only look up by trace ID, how do you discover traces? And right now the answer is logs and we found we were doing this even before we switched to Tempo. We were often going through our logs to find our traces so we would be logging on the same line as a trace ID. This is standard request response style logging, nothing new or weird about this but log a trace ID, log a HTTP method, path, latency, status code, bunch of common parameters or bunch of common fields and these now kind of effectively have built an index into our traces. An index we don't have to spend more money to build into our tracing backend but instead can just use our logging system as it is. Now Grafano is Loki and our demo will be Loki but this works with Elasticsearch or Splunk or whatever you want to use. Also exemplars are kind of the new upcoming feature that we're going to talk about a little bit in a bit here, the state of exemplars. But this new upcoming feature is a way to discover traces through your metrics as well and we'll talk more about what that means a little bit. So trace ID search only in Tempo, currently we use logs very effectively, we'll look at that in the demo and hopefully in the future we'll be using exemplars. So where are we at? 380,000 spans a second or so, that fluctuates more than I care to admit, I wish it didn't fluctuate at all, I wish I could just push this higher and higher but we're at 380,000 right now. We're at a little bit less than 7,000 traces a second so if you do the math there you can find out how many I should have done that before, maybe about 60 or so, I could be totally making that up, never mind, ignore that, traces, spans per trace is what I was trying to get to. So we're at 380,000 spans per second, about 7,000 traces per second and our latencies are good, I'm very happy with this, certainly we can always push this down, in fact recent additions to Tempo include the ability to scale the query path so we could actually reduce this quite a bit if we wanted to. But right now we're querying over 4 billion traces and our P50 is around 400 milliseconds which I'm very happy about. You can also see P90 is right on 500 milliseconds and P99 kind of reaches up to 1-2 seconds occasionally. So, very happy with these latencies, always something to improve of course but I think this is well within operational expectations for a tracing backend. Architecture of Tempo, this is a little detailed, we won't get too into this, don't be afraid to not understand all these pieces but this is architected roughly like Loki or Cortex if you spent time with those open source products. So we have a distributor, the distributor handles replication factor, pushes our traces to ingestors, ingestors then batch those traces up into blocks and then these blocks are pushed into our storage backend, into S3 or whatever. We have this idea of the compactor over here on the side and we are currently flushing I think something like 400 blocks an hour. So the compactor takes those small blocks and builds larger and larger and larger blocks. If right now we are doing what I would say 400 blocks an hour, 24 hours a day and we currently have a retention of 2 weeks, so 24 times 14 times 400 would be without the compactor our total block list length which would require a lot of time to search. So the idea of compaction is to basically keep the length of this block list as short as possible in order to improve query performance. 
And then on the query path we have this thing called the Queryer, it's job is to look into the ingestors for recent traces and also check the backend. We have the query front end which handles parallelization of queries and sharding of queries out and this Tempo query piece is hopefully going to go away soon, it's actually like a shim to translate to Yeager which is the only tracing backend that Grafana can handle right now so that's why we are using it. In the near future hopefully Grafana will support Tempo directly. So that was a lot of things and you don't have to know all those things to just get started and this is supposed to be getting started. So check out these links here, single binary deployment is important so a way to just deploy Tempo not in Distributor ingestor query or all these crazy pieces but instead to deploy it as a single binary to get started and to understand what it does and how to configure it. Tons of examples in the Docker compose folder you'll see listed here. We have Helm options, there's a very simple Helm chart that I made that I think the Helm community would hate but it's kind of the way I would do a Helm chart and we have a Helm chart that's being pulled or a PR open now to have a more official, more robust Helm chart that I think meets the expectations of people who regularly use Helm and Jason at set a blob of Jason at stuff as well to deploy. So all of these different deployment options hopefully you can kind of dig into these, you know look for the Helm YAML, look for the Docker compose, get a feel for what configuration looks like and how to deploy this thing so you can get started working with Tempo on your own. Okay so Tempo, it's our tracing backend, discovery through logs and exemplars, high volume is the goal here but exemplars is something we really want at Grafana and we need to talk about where those are because part of this presentation was supposed to be demonstrating those with, it's supposed to be demonstrating those with, sorry, it's supposed to be demonstrating those with open telemetry but we need to talk about why we can't actually do that just yet. So first exemplars are, exemplars are a record of a single request or an instance of a single request that was then aggregated away to create a metric. So the power of metrics is this aggregation right, I can aggregate a thousand, a hundred, ten thousand, however many requests into a very simple number, a single floating point and if I do that, if I aggregate it all away, I can create these extremely quickly, I can provide, you know store them extremely cheaply and provide very powerful visualizations of my infrastructure, that is the power of metrics but what's lost is the individual instances, the individual requests and exemplars aim to kind of complete that picture, to take the aggregation, to display the aggregation and give you all that power while at the same time letting you drill in and find a single instance of a request that was kind of used to create that aggregation. So where are we? Where are exemplars? Everybody wants to go, well I want exemplars, other people probably do somewhere. What's going on right now? Well they're defined in Open Metrics, so the Open Metrics spec has a defined standard for exemplars but that is not supported currently by Open Telemetry and in some of the issues I've read, they're not requiring it for GA so I really just don't know exactly what timeline Open Telemetry is looking at to support exemplars specifically. 
I believe they want full support for Open Metrics so we are excited to see that but it's not quite there yet. What about in Prometheus client libraries? So Prometheus client libraries support Open Metrics generally. Where is it there? Well Happycat knows that Go and Python is ready to go so if you're using either Go or Python, if you're using either of these libraries or these instrumentation libraries, exemplars are available to you now. You can expose exemplars in an Open Metrics compatible format or in the literal Open Metrics format which Prometheus would then scrape, store in its back end and make available to a visualization layer. Java and Ruby have issues open. This unofficial.NET client also has an issue open. There's a lot of other Prometheus, a lot of other Open Metrics clients out there and I encourage you to find the, or if your library is not Go or Python, if your language of choice does not Go or Python, I encourage you to go to your current Open Metrics client library. If you're using Prometheus and using the.NET or Java or whatever library or some other one, please get into those repos and request Open Metrics support. Let the maintainers know these things are important to you and hopefully we can kind of all get together and all get support in these various client libraries for exemplars. In the demo today I'm going to use Go because Go already supports exemplars and it'll be significantly easier. What about our back end and what about our front end? In Prometheus there are two pull requests now that need to be merged for complete exemplar support. The first is an in-memory support. This first PR, when it is merged, will support an in-memory ring buffer, an ephemeral storage of exemplars. These are just kind of held for a short amount of time. It's basically have much time, or basically based on the amount you're scraping and the amount of memory you give to the ring buffer and then they're stored internally only and when they're thrown away they're just dropped. The second PR adds support for exemplars to the wall to store permanently and then to remote right to push to a back end. After that second one is merged you would see support for exemplars hopefully start coming out in back ends like Cortex or Thanos or some of these other kind of long-term Prometheus storage back ends we're using. The first PR, those of us who just use Prometheus will immediately be able to use exemplars. For the second PR, those of us who use Prometheus in combination with a permanent long-term storage back end, the second PR will kind of make it so that those back ends can start recording, storing, and exposing exemplars. So Grafana, actually support for exemplars is in the tip of master of Grafana right now, so soon, actually soon. For real soon in Grafana. I can't commit of course anything. I don't work on the Grafana team directly. I don't def with them. I don't do any of their milestones or project planning, but hopefully maybe like in 7.5 or some soon near future release we should see exemplar support. Like I said right now it is tip of master merged already, which is fantastic. So let's get to the demo and talk about instrumentation a little bit. So the goal here again was to do open telemetry everything. Open telemetry has a really compelling and very powerful server and client instrumentation setup. Very easy. Got my server here right, so I just set up a new handler. I wrap my actual handler. So I have some HTTP handler. 
I just wrap it in this hotel handler and then I serve the hotel handler out of my server and then magic happens and it's going to instrument my HTTP server for me. Same with the client side. So in this case I'm just replacing the transport with an hotel transport and in this case my HTTP client is now instrumented. In this case I'm worried about tracing so context will be propagated correctly from client to server and everybody will be happy. This is all going to work nicely. And with a very few lines of code of course there's also like boilerplate setup code to initialize the tracing libraries and get things set up, but for just kind of using an HTTP client, setting up the server, it's pretty tight set of clean code. But OpenTelemetry doesn't support exemplars, so here we are. This is the demo you're about to see. Our metrics are going to be set up with OpenMetrics and Prometheus and our traces are going to be through OpenTelemetry and Tempo. So Tempo is our back end, OpenTelemetry is going to be our instrumentation library and then OpenMetrics or Prometheus is going to be our metric side and then stored in Prometheus. And the reason is because OpenTelemetry just doesn't support exemplars yet. And you can find this example kind of at the link there at the bottom, GitHub, Joe Elliot, tracing example. Cool. So to the demo. So this is Grafana. This is a build off the tip of master which is why, or is it build off the tip of master so this is not something you just pull in a Docker container or just installs not GA or anything like I said hopefully support for this will exist soon. I'm querying Prometheus. So this is, this Prometheus is an image from actually Callum's branch. So Callum Stan is head the two PRs. He's the maintainer who submitted the two PRs for exemplar support. So the Prometheus image I'm using is just one of his personal images. You can see it, there's a Docker compose. So this is again a OpenRepo, it's a GitHub repo. Please go check it out and you can see in the Docker compose exactly what images I'm using. So there's nothing like hidden here. There's no tricks. This is actually all working. So I'm asking like this is a normal, can I, maybe I should zoom in a little bit. So this is a normal histogram Prometheus query P99 and what's being returned along with the normal return. The normal return is the metric and this is actually a little crowded at the moment. But you can see these exemplars as well. So I see my trend. I can also go over here and click on a exemplar. I can choose one of these and I can immediately jump over here to a trace. So I have my metric. That metric is a ton of requests that are all aggregated up into a single request and I can then use this new exemplar support in OpenMetrics and in Prometheus hopefully very soon to store an example of a single request. And then here I can, you know, dig in my trace and this is all normal distributed tracing for those of us who have played around with this. So exemplars are kind of this new upcoming feature. Talked about all the different places that, you know, it would be available soon hopefully across your fingers. Like I said before, Tempo is also dependent on, excuse me, Tempo is also dependent on this search. It's also dependent on logs for discovery. So this is the way we actually discover logs right now and this is a low key query. As discussed, it doesn't find anything. Maybe I can. As discussed, you know, this is low key but, you know, this is compatible with, is it there? Yeah, okay. 
We're just in a weird spot. So this is compatible with Elasticsearch or anything, any logging back end where you can build a link from like an ID, you know, this is going to work. So there's no like dependency on any, you know, on low key or anything else like that. But to show you what we do in Grafana, so you can see here I have the, you know, the path recorded. I have the latency down here. I have the trace ID and I probably should have other things, of course, like, you know, the HTTP verb status code and other information. If I put all these on a single, if I put all these on a single log line, I can then do some really clever queries like this, like let's look for something over two seconds, maybe. Yeah. So now all of these traces are greater than two seconds and I am now interested in maybe some long running traces so I can diagnose some kind of latency issues. If I were more clever and had like, let's say, method equals get and, you know, I could do another pipe and do something like status equals 500 if I was interested in failed queries or whatever. So with some careful logging, I can create, and this is normal HTTP request logging. So a lot of us already have these logs. I can build basically an index into my traces that lets me do advanced searches and can also discover traces. So let me jump over here, and I should be able to get, you know, a trace out of this log line. And it's been two seconds like I asked for. Cool. Okay. So Tempo is this new, like I said, distributed tracing backend designed for high volume, extreme volume, and it's designed to be inexpensive and cheap to run. Put everything in S3, put everything in GCS, don't have to bother with complicated back guns. In exchange, we're doing trace ID search only at least for now. And like I said at Grafana, we're doing this kind of log-based lookup of our traces using Loki. Again, you can use whatever you want. This discovery through logs allows us to store traces super cheaply, super inexpensively. And then soon, hopefully, we're going to see exemplars like we were digging in only supported by some clients. Support for Prometheus has two PRs up. Once merged, we'll have complete support. Grafana has support in the tip of master. So we're real close to the point where we should start seeing open source exemplars in all of our favorite metrics tools. We also used open telemetry for the demo application. So if you want to dig into the code, and we showed some example code there. And then we had to kind of use also open metrics in order to get exemplar support. Definitely looking forward to when open telemetry has full open metric support, including exemplars, and we can use the open telemetry instrumentation. Thank you all for your time. I think we're going to do Q&A in a little bit, but thank you, Fosdame. Enjoy your conference.
|
Grafana Tempo is a new high volume distributed tracing backend whose only dependency is object storage. Unlike other tracing backends Tempo can hit massive scale without a massive and difficult to manage Elasticsearch or Cassandra cluster. The current trade off for using object storage is that Tempo supports search by trace id only. However, we will see how this trade off can be overcome using the other pillars of observability. In this session we will use an OpenTelemetry instrumented application to demonstrate how to use logs and Prometheus exemplars to find traces effectively in Tempo. Internal Grafana metrics will also be shared as we all discuss how to scale tracing as far as possible with less operational cost and complexity than ever before.
|
10.5446/52740 (DOI)
|
Today we're going to be covering production machine learning monitoring principles, patterns and techniques. It's going to be both a theory based session together with a set of hands on examples that we're going to be covering. There's quite a lot to cover in this presentation, so we're going to have to rush through quite a few key pieces. So a bit about myself. My name is Alejandro Saucedo. I am an engineering director at Selden, chief scientist at the Institute for Ethical AI and member at large at the ACM. To tell you a bit more about Selden, we are an open source machine learning deployment company. We built one of the most popular Kubernetes based machine learning deployment frameworks and we're going to be using Selden, Selden Core for the examples today. And the Institute is a research center that focuses in developing standards and tools for the responsible development and operation of machine learning systems. And we're part of the Linux Foundation, which allows us to contribute from a very practical perspective. But today we're going to be covering some of the motivations of why should we care about ML monitoring, some of the principles to achieve efficient and reliable monitoring, some key patterns that have been abstracted to the machine learning world, and then a set of hands-on examples that we're going to be switching over. So the slides can be found in this link. And then throughout the presentation, you will see a set of links below the slide where you'll be able to test the open source examples yourselves. So let's set the scene. And we all are aware that even without the machine learning context, production systems and more specifically production machine learning systems are hard. For us, we interact with contexts that require and that involve thousands of machine learning models, which you can imagine the heterogeneity when it comes to specialized hardware, to complex dependency graphs for data and data flow, the compliance requirements, the reproducibility demands, lineage, metadata, et cetera. And I have actually a talk online that you can check particularly in this area of machine learning operations. And I guess now it's not longer popular demand, but we are now aware that the lifecycle of the model doesn't finish once it's deployed. If anything, it only begins once it's fully trained, right? It's deployed, it's potentially retrained, it's probably superseded by a better performing model, it's promoted to different environments. So ultimately, there's quite a lot of time that goes once the model has been deployed and quite a lot of best practices that need to be involved throughout its lifecycle. And more specifically, what we're going to be looking at today is we're going to be taking a model, we're going to be trying to take a very simple machine learning use case as we're going to be covering very complex terminology around the machine learning space. We're going to be sending some predictions to this model that is going to be deployed as a microservice to see what can be monitored. We're going to be sending feedback to this model to be able to get some more advanced statistical monitoring. And then we're going to be delving into more specific terminology like explainability, as a architectural pattern, applied detection and drift detection. 
And as I mentioned, we're going to be taking the Hello World of machine learning, the Iris classifier, it's going to be a simple sort of underlying machine learning model, but to be able to focus on the overarching terminology that we're going to be using. So specifically, we're going to have an input, which is an array of four numbers, floats, and then an output, which is basically a class from three classes. That's basically what we're going to be interacting with. We're going to be interacting with this as a black box in a way, primarily as we care primarily of the inputs and outputs. And this is why the premise is set this way. In order for us to be able to deploy this model, we are assuming that there has already been data science process to identify the best performance, best data distribution to use to train this model. And in this case, we're going to just be taking a simple model, training it, and then taking this artifact and deploying that. What that really looks like is specifically getting the Iris dataset, then getting a train test split, and then training a model, which in this case, it's just a logistic regression or a random forest, it doesn't really matter for our specific use case, which then allow us to actually perform inference. So in an unseen data point, we would get the prediction, then we would be able to actually export and persist that model, in this case, as a pickle. This is the pickle that we're going to be deploying. And ultimately, this is what we're going to be using around in this presentation. So the way that we're going to be containerizing and deploying it is using seldom. We're going to be able to leverage directly the artifact or to create a Python wrapper that then will become a fully fledged microservice using these tools. So it converts it from a either artifact or Python class into an actual microservice, where the inputs and outputs can be the data that you send to this model as a prediction. So the input would be an input array, and the prediction would be one of these three classes. But now let's see what is the anatomy of a production deployed machine learning stack, the end to end area. In this case, you have the training data, the artifact store, basically the persistence of the state of your production environment, and the inference data. For the first step, you have your experimentation. This is basically the model training, your high parameter tuning. It uses the input training data, and it creates an artifact. This is what we just did. We exported an artifact. Then you're able to deploy that programmatically. So that means that with some continuous integration process that would be in charge of creating that artifact, putting it in an object store, and deploying it, so that then once you deploy it, it can go into your environment, your respective development environments, production environments as a real time model, batch model. With Seldom Core, we're going to be using it to enable us to deploy this model into a Kubernetes cluster. And then every single input and output data, as you would with another microservice, would be stored in an Elastic Prometheus data store, or a Metrix or Logs data store, which ultimately allows you not just to have persistence, but also to be able to use for then again, training data. So what are the monitoring metrics that are involved in production ML systems? So you have the usual microservice metrics, performance metrics and tracing. 
You have more specific machine learning metrics, statistical performance. You have things like outliers and drift detection. We're going to cover that, what that means in more detail. And you have explainability tools that often are discussed in an offline analytical aspect. But in this case, we're going to be discussing it in a monitoring production real time or semi real time basis. So these are the core things that we're going to be delving in this topic. So let's start with the first part, and I'm sure that many of our fellow programmers in this audience are going to be more familiar with this performance monitoring. And the principles are being able to monitor your microservice on its running underlying performance. So this is to identify potential bottlenecks, runtime red flags, malfunctions, or identify something preemptively before it actually goes wrong. And then of course, being able to debug and diagnose unexpected performance. And in this case, it's in the context of our machine learning model, right? Our machine learning model that we deployed crashes, behaves incorrectly, it has major throughputs, spikes, et cetera, et cetera. So that's basically the things that we want to look at. And what that looks in more practical terms, this is things like request per second, latency per request, CPU memory data utilization, distributed tracing. So to go back to our example, what we can do now is take that model artifact, and first of all, deploy it into our Kubernetes cluster, basically pointing to that model artifact, using Selden to convert it into a microservice. Now we actually have a microservice that we can send requests and receive predictions of exactly the same model that we deployed. We can see that our model is now deployed, and we can actually send data for it to basically process. So we're going to send now some data, it's sending requests, and then it's receiving the predictions. Ultimately, what we're now going to see is we're going to start seeing some requests per second, we're going to start seeing some insights on the latency. Ultimately, we will also be able to see the performance in our cluster sort of changing. We can see that the CPU utilization is now being in use. And this is the usual stuff that you would see in a microservice. This is not something new, but ultimately if the model crashes or has a wrong prediction, we will be able to see this in the number of success requests or the number of 400s or 500 requests. This is basically the number of errors that potentially appear in our system. So this is basically some of the core components. Now we can see that there's seven requests per second. So this is basically the usual things that you would see in your normal microservice, right, so let's actually pause that. And let's have a look deeper. So the patterns that you would see here is what we just said, take your model artifacts, deploy them as a microservice, and then extract the same similar insights that you would for your usual microservices metrics, logs, tracing, etc. Now let's go one level deeper, statistical monitoring principles, right, what is the statistical monitoring, this is basically monitoring specific to statistical properties of the machine learning performance. 
So this is things like calculating using the corrected labels so that we can actually understand how the model is performing compared to how it was performing when we trained it, right, this means that if it performed really well during training, we would deploy it, we would see on scene data, but until we provide corrected and correct labels, we can then see the actual performance, this is things like accuracy precision recall, and can be used to benchmark either models in real time, one against another one as a test, and it can also be used for evaluating in a long term perspective, we will see more advanced concepts in a bit. This is key, and this is one of the core insights that we've sort of like extended within Selden. So that it is used as a first class citizen, so that you can have your models almost out of the box with these components. And what this looks like in practice is things that you would often see as a data scientist, true positives true negatives false positives false negatives which can be converted into accuracy precision recall specificity, and then even allowing you to have more specialized metrics like KL divergent, etc. But I mean, ultimately what this is, you know, even though we're seeing something very specific to machine learning, this just basically means machine learning specific to the use use case that you are interacting with. In this case is machine learning, right, but ultimately it's abstracting some of this core patterns that we would see into reusable components. And this is important for us because we deal with thousands of models, right, we can't have every single thing, every single deployed model rasp wrapped in a flask wrapper with an on standardized interface with super specialized metrics we can't expect all of our DevOps and sres to be machine learning experts as well. Right. And from that same perspective let's not delve into architectural patterns. And this is what we, what we coined as the statistical monitoring pattern, the statistical feedback pattern. And this is basically if you remember, we deployed our model, what, what does our model do right now, it sends inference data and returns a prediction. But what can we do beyond that, we can actually make it such that it doesn't only send the data but it stores the inference data with the response, the prediction, right. And now when we send the correct label, when we tell the model hey, that ID that you sent previously was wrong. Here's the correct label, or it was correct here's the correct label, then we can have another microservice, which listens to that feedback, and is able to compare that old inference request with that incoming feedback, and then be able to provide real time performance metrics of how the model is running. In this case, is it is it actually performing well or is it not. What does that look like in practice well let's have a look. So in this case, instead of just sending a bunch of requests, we're going to be sending a bunch of requests and storing the request ID. And we're going to be using that request ID to do what to basically send the correct labels. Right. So as we finish sending this request ID and we are going to be seeing some, you know, actual spike in the in the predictions, you know, 4.2 now. 
We are now going to be able to not send predictions but send corrections, send hey, hey model, where are the correct labels, and this is the respective IDs to where it should reside right so what I'm saying is, these are the correct labels, and this is the ID of the relevant request. So we're going to be sending corrections. And in this case, we don't want to just send corrections because that would be a bit boring because I already have the model with very good performance. We want to send something that shows us to you. And in this case, we're going to send the randomized correct labels, which means that they're not correct. So the model is going to get a lot of things wrong. And what that's going to allow us to do is to just basically start seeing some divergence in the performance right and we want to see a bit faster divergence so we don't want to wait, you know, half a second every time that it sends a request we want to make a little bit faster. And what we can see now is that in the performance of the model, we're going to start seeing some sort of like the tournament on its performance. And then going back to the use case, if you remember what do we have deployed, we have deployed a model that predicts one of three classes, right. So here not only we can see the total accuracy precision and recall, but we can see the actual breakdown on a per class basis. And we can see that class two, class one and class zero have different accuracy, precision and recall. Right. And why is this important? I mean, for flowers, maybe not as much. But for real humans, being able to identify your accuracy, precision and recall for protected features like, like gender or ethnicity allows you to identify whether there is potential inherent bias in the real time deployed model that is performing inference on real data. Right. So if you have models that are actually having impact on humans lives, you need to make sure that this is in accordance with the distribution of the data that you saw on your training when the model was being created. And this is actually something really cool, right, because not only you're getting an overview of metrics and monitoring, but you're getting something that is not just specific to machine learning was specific to a use case that makes it particularly useful for not only the machine learning partitioners, but also the specific operational stakeholders that would be managing the process in itself. Right. So here we can see, hey, we're starting to see some, you know, bad performance of the model. So somebody should do something about it. Right. So that's where you can actually say alerts, and you can notify individuals to say, hey, this model is not performing well, maybe it needs some retraining. Right. So that's the key thing. Now let's pause that because, you know, we don't want to, we don't want to have business finding out that our model is performing bad. And let's move to the next part explainability monitoring. So what's explainability. So explainability is human interpretable insights for the model's behavior. So your model predicted class study explainability is to be able to understand and explain why did it predict class study. And this is this important use cases where you are having direct impact into potential users that maybe detrimental or of high risk for that individual, right. Things like credit risk predictions or as we have seen in some potentially high profits of bad practices like sentencing prediction. 
And it's important primarily now because we now are starting to see an adoption of black box, complex models like deep neural networks. So introducing interpretability techniques only allows you to leverage more of this advanced techniques. So this is use case specific explainability capabilities justifiable interpretations of model predictions. And then of course to identify key metrics such as trust scores or statistical performance thresholds that can be used not just to explain on an analytical perspective, but also used on a monitoring basic as a real time perspective, and then enabling for more complex machine learning techniques as we mentioned. So the terms that you tend to see in the machine learning explainability space is whether it is local for a single prediction or global for the entire data set, whether it's black box interacting with just the inputs and outputs or white box actually opening up the model and seeing what's inside the type of task classification regression the data type tabular images, etc, etc. And ultimately for this, we also require an architectural pattern and why do we introduce patterns. The reason why is back to the same premise. If we have thousands of models with hundreds of explainers hundreds of metric servers, we don't want to have to deal or our DevOps and sres and it managers and platform leads shouldn't have to deal with hyper specialized individual components that require high amount of machine learning expertise in order to monitor in a baseline perspective. And this introduces the ability to have infrastructural components that can be abstracted and scaled within seldom we have extracted this into cloud native patterns, which often is referred in the Kubernetes space as custom resource definitions. So this is basically abstractions for the Kubernetes world where you can deploy an explainer right you can deploy a model, you can deploy a metric server, etc, etc. So you only deal with these components, and you deal with those perspectives and this is important because, you know, we're talking about the monitoring of our models, but this is a microservice as well. And this has monitoring metrics as well. So from this perspective, just to cover in detail, when you send a request to the model, you get an inference response. When you send a request to the explainer server, you don't just get an inference response. The explainer takes that input data, it reverse engineers the models by interacting with it, and then returns an explanation, right, and what that looks like in practice is if we actually deploy an explainer so here we can train an explainer and anchor tabular that tells us are more strong predictive features, we can use an actual input in this case is just basically this first index, and we can explain it right so this tells us that actually from this explanation, the core most impactful pieces is the petal with which I assume is this one, and the step with which I assume is this one. And if they are over this terms, it would actually converge into that prediction, right so this is type of explanations that allow you to go back to your use case and explain them. Similar to the model, we can export the explainer, right, and that exported explainer can be deployed as an actual microservice component as part of that model that we have deployed, right. 
So if we deploy that specific microservice, we now have a deployed explainer, and similar to how we just did it over there, that we, you know, send a request to the model and got a response, we can actually send a request to the explainer, and then get a response right we just got a response, and we can print it and the response is the same. Right, this is actually a restful request that we just sent to a microservice that actually we can see in here as metrics right we can see the explainer, although it will take a little bit to actually register that that input prediction. And ultimately from that same perspective, you know, we can actually see that we could monitor the explainer look our, our model is still, you know, performing worse and worse, even even we stop the actual predictions. So that that shows you the power of not just the machine learning techniques that we're using, but also the power of the architectural patterns that we're introducing. And just for the last remaining, you know, three to five minutes, I'm going to cover the last few components, which is the outlier and drift monitoring principles. This is basically being able to detect anomalies, or being able to detect drift in data. We'll see what that actually looks like in practice. I'll buy detection. Basically you have data that doesn't fit the distribution of the type of data that you're seeing, or drift that you're seeing perhaps in certain windows of the processed inference data divergence that may flag into some mis, more performance in your, in your deployment. This can be on scope input versus output, it can be supervised or unsupervised, and it can still be for classification regression, etc. From a pattern perspective, this is slightly different to what we've seen. We still have our deployed model that receives an input data and returns a reason an inference response. But for the outlier detector, what it can do is it can listen through this, you know, cloud events, eventing infrastructure, which we're not going to cover in much detail, it can actually listen to the same inference data that goes through the model, it can do an assessment of whether the data is in distribution or not, with a set of algorithms that you can try in what in some of the examples that we actually link, using our open source tool alibi. You're going to be able to actually test how all of this fits together. And this basically stores whether the data is an outlier or not. So then when you are able to look into the data of your input request, you're able to know whether that request had a particular outlier, or you can set up more specific alerts that are relevant to that. And specific to this, we also have drift detection and drift detection is slightly different to outlier. It still listens to the inference data, but instead of acting on each data point, it actually acts on a tumbling or sliding window of data. And it identifies whether there is a drift that is within that specific set of window. And if there is, again, it sends the metrics so that you can configure your relevant alerts and notify respective individuals. Again, you have a broad number of examples with our Seldencore and alibi tools. We have a lot of examples and contributors. If you find an algorithm that is not implemented, please let us know and we'll be able to also have a look at it. There is an extra note where I'm not going to be delving into much detail, but adversarial detectors is also a key component, which we also have adopted. 
This is basically to be able to identify potential adversarial attacks and more specifically adversarial examples. This is basically modifications of input data that are added statistical noise that end up predicting something that a malicious stakeholder may want to. So this is often used in the self-driverless car where you can have a stop sign with some statistical noise that could cause issues. And of course, there is an architectural pattern for the adversarial detectors that is also slightly different. We can try all of these things in the open source repo. And it may be worth an extra note that it doesn't really stop there. As you would know with most programmers, we love abstractions and we love adding abstractions on top of our abstractions. And similar to this pattern, you can actually have ensembles on top of your architectures. Similar to how I was saying that you can have an outlier detector acting upon a model. You can also have an outlier detector acting upon a metric server. So maybe you can actually detect things like drift on your accuracy, drift on your precision, drift on your latency, on your request per second. So you can actually have much more complex components. And that's why it's important to introduce the management infrastructure around this. Right now, we actually bring up the machine learning model. It's a tiny, tiny thing inside of somewhere across this hundreds of microservices. And it's important because of those reasons. So with that said, I've covered most of the key things. Actually, I covered all of the key things. We delved into the motivations for machine learning monitoring, the principles for efficient monitoring, some of the core patterns that you can adopt in your production infrastructure and a hands-on example that you can try yourself and actually four or five different hands-on examples that we covered that you can try yourself as your return notebooks. And again, so the slides are in this link, bit.ly. slash realtimeemail. You can find all the links as well there. And with that, thank you very much. And if there are any questions, more than happy to take them. Otherwise, please feel free to send them over through Twitter or email. So with that, I'll pause there. And thank you very much. Thank you very much to the organizers.
|
The lifecycle of a machine learning model only begins once it's in production. In this talk we provide a practical deep dive of the best practices, principles, patterns and techniques around production monitoring of machine learning models. We will cover standard microservice monitoring techniques applied into deployed machine learning models, as well as more advanced paradigms to monitor machine learning models through concept drift, outlier detector and explainability. We'll dive into a hands on example, where we will train an image classification machine learning model from scratch, deploy it as a microservice in Kubernetes, and introduce advanced monitoring components as architectural patterns with hands on examples. These monitoring techniques will include AI Explainers, Outlier Detectors, Concept Drift detectors and Adversarial Detectors. We will also be understanding high level architectural patterns that abstract these complex and advanced monitoring techniques into infrastructural components that will enable for scale, introducing the standardised interfaces required for us to enable monitoring across hundreds or thousands of heterogeneous machine learning models.
|
10.5446/52744 (DOI)
|
Hello everyone. Today we'll talk about methodologies for database troubleshooting and performance analysis. And we have a lot of material to go, so let's just get started. Now, as I talk to developers, I often hear what their performance are very common cause of concerns. Database is blamed and actually is often responsible for down times and performance problems. And why is it the case? Well, I think first one is because database, they do not have a linear scalability. And in many cases, you cannot also easily scale them out unlike application servers. For example, if you have a single PostgreSQL database and you need to handle 10X traffic, you can just provide 10X instances. It doesn't work this way. Databases are also quite complicated beasts inside. And what is going on inside database and how they really work are often poorly understood by application development. Now, when I speak about the database, there are actually quite a lot of different tasks which relate to the database performance that ranges from troubleshooting, capacity planning, you may be tasked with optimizing costs and efficiency. Even if your database performs well, it may just cost too much. And dealing with change management, changes ranging from changes in your application to database needs to be changed, for example, because it is going end of life. Now, I think it's important to look at the two points of view at the databases. And one of those, if you'll think about the application developers, they often see the database as a black box. There, the view which is shared by database operations folks, DVAs, SREs, database DVREs, right? And there is a bunch of different titles which exist those days. They often see database as a white box, meaning they look inside it to see what they need to do it. You think about the developer point of view, it is actually quite simple. I see the database as a black box, if I can connect to the service point I am provided to, I can easy queries from my application, and it responds quickly and correctly. That's pretty much all I care about. Then from operations point of view, there is a lot more, you know, database loads, utilization of the different resources, indexes being used is where some contentions, looking at some disarmament hardware problems which can be exposed to the database functionality and performance, and so on and so forth, right? And as we talk about different methodologies, we will see very clearly how those different points interact. Okay, with that, let us look at some specific methodologies which are used by folks to troubleshoot database or optimize database performance. Now, what do you think is the typical, the most common method which is used by the developers worldwide? Well, and this is something what I would call finding solutions by random googling, right? So you have a problem, you maybe have some error message which pops up, right? Or you run into something even something simple as a database slow and you try different solutions which Google suggests. Well, the problem with this method is what it is hard to ensure outcome, right? Trying different solutions in Google may take a lot of time to find one, and then also may have some of them are being dangerous, especially when it comes to the databases, right? For example, one easy way to make a database to run faster is to relax your durability settings, meaning if database crashes, you will lose your data, but you wouldn't know what that is in effect until you actually have your database crashed. 
You can also get better performance in some cases by lower in security, right? And some other things which you may not want to do. It is also hard to train people, right, to be good at googling because, well, it is not a really very easily proceduralizable process. And also in the modern world when they are building systems often, which not only automatically detect the problems but fix them, it is hard to automate this self-healing if that involves googling and trying suggestions which are offered by people. There are better ways. There are actually a number of the methods which you can apply to tune in the database. And I will cover three very common ones. This list is by no way exhaustive. You can find in the literature other methods which are even general applicable to the databases, or there are some of them which apply to specific areas of a database. Students, for example, well, how can I find a way to select the best indexes for the squares and so on and so forth. And let's start with the use method. If you think about the use method, is what really developed to troubleshoot generally system performance issues? It's not focused on a database. Actually, none of the methods we talk about today are. And a goal for it was to resolve 80% of the problems with 5% of effort. And I very much like this specific claim because it doesn't say, oh, you know, it will solve every problem there is out there. And for use method, we have a very good operation system specific checklist which have been developed. Use method is also very easy to summarize in one sentence which says, for every resource, you check utilization, saturation, and errors, right? And if they are outside of a normal parameter as well, then you have to, well, basically, increase the resource, right, or reduce the load on that resource. What are resources in the use methods? What are the main terminology of the use method? Resource, right, these are all physical service functional components. You have utilization, which is the average time what resource was busy servicing work. Situations is the degree to which resources extra work which it can be can service, which is often queued most days, which adds up to the latency, and errors, which is count and maybe types of events. Here are the typical resources which you would consider, right? If you want to simplify that, I like to simplify things to CPU memory disk network, right, which is typically their simple way to use to talk about the same, but this is Brandon's checklist. If you think about use method, it can go beyond hardware stuff and be applied to the data, to the software as well. The same basic resources apply, but we are also looking at additional software resources, such as mutics logs or file distributors or things like connections, because, hey, you know what, if you will run out of resource of available connections, then you won't be able to connect to a database, which will obviously be a problem for your application. If you look at the use method benefits, we have ProvinTrac record, load applicability and detailed checklist available, but there are also some drawbacks, what it requires, a good understanding of system architecture, it requires access to low-level resource monitoring, which can be complicated in virtualized environments and the cloud, they may not just have access to like a true hardware resource. 
It can also be hard to apply in serverless environments, where in many cases you don't really have that insight into the architecture and the specific resources. Now, from the Linux standpoint, there is an example of a very specific checklist available from Brendan Gregg's website. This one is for Linux, and it's actually a longer one; I just cut part of it as an image, and you can see more on Brendan's website. Now, let me show some examples of what we are doing around the USE method in the tool we have developed, called Percona Monitoring and Management (PMM). This is a 100% free and open source tool that is purpose-built for troubleshooting and performance optimization of open source databases. We are also working on management features, as the name would imply, but those are not quite ready for prime time yet. Here are some examples. CPU utilization is easy, and many tools offer it. What we also offer is a way to look at CPU saturation, which we like to look at from two different angles. One: you will often have applications that are focused on single-threaded workloads, and they will max out a single CPU core. That is important to see, because to solve it you probably want faster cores, or you need to make your application more parallel. The other is what we call the normalized CPU load, which tells us how many processes are waiting per available CPU core. If it is much more than one, then chances are you're looking at a lot of queuing, and you want to avoid that. From a memory usage standpoint, I like to look at three different parameters. One is real, physical memory usage; another is virtual memory usage. Virtual memory usage is especially important for databases, because if you run out of memory the database process will be killed, and crash recovery or failover for a database often takes more time and effort than for an application server, so you typically want to avoid that. The other thing that concerns virtual memory is your swap activity. If you create a swap file, so you have more virtual memory and you avoid those crashes, you also want to make sure there is not a lot of swapping activity going on, because if there is, it can severely impact your performance. From a disk saturation standpoint, I like looking at how many requests are in flight at the same time, separated into read requests and write requests, which is a very good indicator of disk load. In modern systems it is often hard to know how much concurrency the disk subsystem can handle, but this at least gives you an idea of where your current load sits compared to the maximum. Another way to look at saturation is disk latency. Again, I split read latency and write latency into separate measurements and compare them to the long-term average. If your latency is elevated compared to your long-term average, that is likely because you are oversaturating the disk subsystem and some queuing is happening. Now, if you look at the MySQL piece, the database side, you can also look at utilization at different levels. For example, you can look at how many queries the database handles, and you can also look at lower-level operations, such as the row operations that take place.
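To make the two saturation signals just described a bit more concrete, here is a hedged sketch of computing a normalized CPU load (runnable tasks per available core) and of flagging disk latency that sits well above its long-term average. The threshold factor and the example latency numbers are assumptions for illustration, not values PMM actually uses.

```python
# Hedged sketch of two saturation signals: "normalized CPU load" and
# disk latency compared against a long-term baseline.
import os


def normalized_cpu_load(load_avg_1m: float) -> float:
    # Much greater than 1.0 means tasks are queuing for CPU time.
    return load_avg_1m / os.cpu_count()


def latency_elevated(current_ms: float, long_term_avg_ms: float,
                     factor: float = 2.0) -> bool:
    # If read/write latency sits well above its long-term average,
    # the disk subsystem is likely saturated and requests are queuing.
    return current_ms > factor * long_term_avg_ms


one_min_load, _, _ = os.getloadavg()  # Unix-only
print(f"normalized CPU load: {normalized_cpu_load(one_min_load):.2f}")
print("disk read latency elevated:",
      latency_elevated(current_ms=18.0, long_term_avg_ms=6.5))
```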
This is important because one query may fetch just one row, or it may be crunching through millions and millions of rows, so looking at query counts alone may be misleading. Another way to look at MySQL saturation is the number of active threads. If the number of threads that are actually running, rather than just connected and idle, goes into the hundreds, then MySQL is badly oversaturated. With that, let me jump to the RED method. The RED method has a slightly different focus. It was designed for microservices, where we have a lot of services and a lot of nodes that are cattle, not pets, so we are not looking at every one of them in great detail, and also for cases where the mapping to resources can be fluid compared to more static setups. It is also great for a serverless model, where the internal details may simply be unknown. The RED method, summarized in one sentence, says: for every service and request, check that these stay within your service level objectives (SLOs): Rate, Errors, and Duration. For rate, for example, that means that if you are getting more requests than you designed your service for, that is a problem; but if you are not getting enough requests, like your inflow of requests is zero, then chances are there is also a problem, because something upstream of this application is down. If you're looking at the RED method for databases, we can apply it at different levels: the service level, individual database servers, specific applications or users, down to individual queries and transactions. What I love about the RED method is that it maps easily to what developers really care about, which is what I described at the start, and it doesn't require a deep understanding of the architecture. It also does not require access to low-level resource monitoring, which we may not have. But it has some drawbacks too: RED tools and checklists are less developed than for the USE method, and it is often focused on answering what is wrong rather than why. Like, oh, we have a lot of errors coming in from this type of service; well, now let's dig into why those errors are happening. If you look at what we've been doing to support the RED method in PMM, we've done a lot on the query side. For example, we can look at the set of queries your database infrastructure handles and slice and dice them by query type, or by application server or user, to see where the load comes from and whether query count and query time are within normal operating parameters. We also created a custom RED Method dashboard that specifically focuses on query rates, errors, and latency, where you can look at that both overall for your whole environment and broken down by database server. You can also see that for query latency I look both at the average latency and at the 99th percentile, because typically you want more than averages; averages don't tell you anywhere near the full story. Finally, let's look at the Four Golden Signals, which come from the Monitoring Distributed Systems chapter in the now-famous SRE book. They can be used for alerting, troubleshooting, and routine trend analysis, so they are not focused only on incidents.
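As a toy illustration of the Rate/Errors/Duration aggregation and the average-versus-99th-percentile point above, here is a short Python sketch over a made-up window of query observations. The (duration, ok) input format is invented for the example and is not how PMM's query analytics actually receives data.

```python
# Sketch of RED-style aggregation over a window of query observations.
import math


def red_summary(samples: list[tuple[float, bool]], window_seconds: float) -> dict:
    durations = sorted(d for d, _ in samples)
    errors = sum(1 for _, ok in samples if not ok)
    p99_index = max(0, math.ceil(0.99 * len(durations)) - 1)
    return {
        "rate_qps": len(samples) / window_seconds,          # Rate
        "error_ratio": errors / len(samples),               # Errors
        "avg_ms": sum(durations) / len(durations) * 1000,   # Duration (average)
        "p99_ms": durations[p99_index] * 1000,              # Duration (99th percentile)
    }


window = [(0.004, True), (0.012, True), (0.003, False), (0.250, True), (0.006, True)]
print(red_summary(window, window_seconds=1.0))
```

Notice how one slow outlier barely moves the average but dominates the high percentile, which is exactly why both are shown on the dashboard.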
Here are the terms it uses, and you can see it's somewhat similar to the RED method: it goes with latency, traffic, errors, and saturation. How do we apply those to a database? From a latency standpoint, we typically want to look at query response time by database instance and query type. For traffic, we look at the number of queries of a specific type served. In terms of errors, we look at things like connection failures as well as query error codes, and hopefully we can also catch wrong responses, although in practice that is very hard to do in production; typically that validation happens in testing, and we maybe spot-check production every so often on the application side. From a saturation standpoint, in the best case you know how much load your database can handle; in practice it is often not easy to really understand the true capacity a database has. Another angle on saturation worth considering is the saturation of resources such as database connections, because again, if you run out of connections the application will start throwing errors, or things like disk space, because if you run out of disk space the database will go down, so you want to watch things like that carefully. The Golden Signals are a great methodology, well tested by SREs worldwide, and there is a good book and many resources available now. As for drawbacks, it is first of all a framework for monitoring, and you don't have a lot of specific troubleshooting checklists available in the way they exist for the USE framework. Well, let's finish with a couple of takeaways. As you can see, there are multiple methods available to help you with your database troubles. Pick what makes sense for your circumstances from what is offered, or invent your own method combining these and others that exist. Just make sure you are not settling for random googling. With that, that's all I have for you, and I will be ready to answer your questions now. Thank you.
|
Have you heard about the USE Method (Utilization - Saturation - Errors), RED (Rate - Errors - Duration) or Golden Signals (Latency - Traffic - Errors - Saturation)? In this presentation, we will talk briefly about these different, but similar “focuses” and discuss how we can apply them to data infrastructure performance analysis, troubleshooting, and monitoring.
|
10.5446/52745 (DOI)
|
Hi there. Today we are going to talk about the PostgreSQL network filter for Envoy proxy. I'm Fabrizio, I've been working in IT for 25 years. I'm a PostgreSQL developer at OnGres, a company that provides professional services on PostgreSQL and develops related tools; my friend Alvaro will talk more about OnGres very soon. I'm a PostgreSQL contributor and a Brazilian community leader. Okay, hello everybody. I'm Alvaro, I'm the founder of OnGres. The company name means "on Postgres". We do R&D on PostgreSQL and we provide professional services; that's part of the background. I'm frequently found at PostgreSQL conferences, and sometimes at other database or even Java programming conferences. I also like to work in the PostgreSQL community in general; I also founded a PostgreSQL non-profit, a foundation in Spain, which is devoted to promoting PostgreSQL around the world, and I'm an AWS Data Hero. You can find me on Twitter or my personal website for any questions after this talk. Let's talk today about how we could enhance PostgreSQL observability: where it comes from, how it works, and whether there's anything we can do to make it better. Basically, PostgreSQL monitoring; we'll keep logs and traces for potentially other talks and focus on monitoring. PostgreSQL monitoring is not something that comes integrated in core; you normally need external tools that run queries against PostgreSQL to get information. The good news is that PostgreSQL provides quite a wealth of information that you can query with SQL. One of these sources is the PostgreSQL catalog, which includes tables and views you can query that provide a lot of information about how the database is operating. This is very good, and we can run queries to gather all this monitoring data. The problem is that the volume of queries we sometimes need to send to PostgreSQL to get all this monitoring information may not be that small. Especially if, let's say, we're running a multi-tenant setup with a lot of databases, we might want all those databases monitored independently, which means multiplying all the monitoring queries across all the databases. That easily leads to hundreds, even thousands of queries per monitoring cycle, and if we want fine-grained resolution, like one second, then we're imposing a non-negligible load on the database. Also, for monitoring we sometimes need to install additional extensions like pg_stat_statements or pg_stat_monitor. These extensions require a restart of the database, and a restart means downtime, which is often not acceptable in many environments. They may also require separate configuration or external binaries, agents, which adds complexity. Is there any way we can do this better? Let's think about how the PostgreSQL protocol works, what is called the frontend/backend protocol. It is a layer 7 protocol on top of TCP, meaning it does not operate at the TCP layer; it's an application-layer protocol. It is very well documented, which is an advantage. You can see here a drawing of the sequence of messages that flows from client to server to perform a query. This protocol is very well structured; there is no catch-all or generic message for many things, unlike, for example, the MongoDB protocol, where basically everything is a run command and you wrap almost everything under it.
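To give a feel for how regular protocol messages are framed, here is a simplified Python reader for the v3 message header: one type byte followed by a 4-byte big-endian length that counts itself but not the type byte. This is only an illustration of the framing, not Envoy's actual decoder.

```python
# Simplified reader for regular PostgreSQL v3 protocol messages.
import io
import struct


def read_message(stream):
    header = stream.read(5)
    if len(header) < 5:
        return None
    msg_type = chr(header[0])                     # e.g. 'Q' = simple query from the frontend
    (length,) = struct.unpack("!I", header[1:5])  # big-endian, includes the length field itself
    body = stream.read(length - 4)
    return msg_type, body


# A 'Q' (Query) body is just a NUL-terminated SQL string:
sql = b"SELECT 1\x00"
frame = b"Q" + struct.pack("!I", 4 + len(sql)) + sql
print(read_message(io.BytesIO(frame)))   # ('Q', b'SELECT 1\x00')
```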
Here there is a specific message for every specific action. There are dozens of different messages, all very specific and very well defined, which makes the protocol easy to decode and understand. It's also very stable: the current version, version 3, has been in use since PostgreSQL 7.4, circa 2003. That's roughly 18 years running with the same protocol, with no changes. The protocol is very widespread. It's obviously implemented in all drivers and all tools that access Postgres, but it's not only used by Postgres: it's also used by other databases that have settled on speaking the Postgres wire protocol to gain compatibility with existing tools and drivers, so they don't need to write their own. Databases like YugabyteDB, CockroachDB, CrateDB, or even NoisePage, a recent experimental project from Carnegie Mellon University, have implemented the Postgres protocol, and obviously Postgres derivatives also benefit from it. So what if, by studying this protocol and inferring data from it, we could provide observability for Postgres? That would benefit this whole ecosystem at once. Let's look at how this works, what the architecture is. Well, it's pretty simple: we have a client and a Postgres server, they connect, and the messages that flow between them use the Postgres wire protocol, the frontend/backend protocol I mentioned before. Now, what if we intercept this protocol, set up a middle box, a middleware layer, where the protocol is intercepted and proxied? In that case we have an intermediate layer between the Postgres client and the Postgres server, where the protocol is intercepted and forwarded exactly as it came in: basically a proxy. If this proxy just passes the TCP traffic through, nothing changes; it is totally transparent to the client. But if we use this proxy, this decoder box, to also extract metrics from the traffic and send them to an external location for storage, visualization, and processing, then we've got observability basically for free; we're just decoding the protocol, transparently to the server. To summarize, the advantages of exposing observability by proxying the traffic and decoding it are, first, that we're replacing a pull model, where the monitoring agent keeps querying the database, with a push model: as soon as we have metrics we push them out, so we get essentially real-time metrics with less overhead. But especially on the database side there is zero impact. It is 100% transparent, which means there is no performance impact whatsoever. There is also no configuration required, no restart, no agents, no downtime. So this is very beneficial for the database; it's a zero-impact measure. Obviously, in certain environments such as Kubernetes, this can be deployed as a sidecar: we can inject a sidecar on some instances, transparently proxy the traffic, and transparently provide all these metrics without disrupting the existing clients and servers at all.
A side effect of this is that, because it has zero impact, we can leverage it to actually increase the volume of metrics we collect. Sometimes we need to put on the conservative Postgres DBA hat and say we cannot gather all these metrics, or that this volume or resolution would overwhelm the database and we cannot disrupt the traffic. However, if we provide this observability at the proxy layer, then, as long as we don't saturate the proxy itself, it is impact-free for the database, so we can increase the volume or the resolution of the metrics we capture. This also opens the door to new functionality, which we'll discuss at the end of the talk as future research and future opportunities for leveraging this layer. So, for the proxy box that Alvaro mentioned, let's introduce Envoy proxy. Envoy is a high-performance C++ distributed proxy designed for single services and applications, built from the learnings of solutions such as nginx, HAProxy, hardware load balancers, and cloud load balancers. Envoy runs alongside every application and abstracts the network by providing common features in a platform-agnostic manner. It's a middleware: Envoy sits between the client application and the server application, as you can see on this slide. And the power of Envoy is its extensibility. To connect a client to Postgres, Envoy just needs the TCP proxy filter to create a TCP-level connection between your application and Postgres, and the application then takes advantage of Envoy's extensive backend features: load balancing, health checking, outlier detection, and so on. The statistics produced, though, are coarse, at the TCP level: number of TCP connections, number of bytes passed over a TCP session, number of connection errors, and so on. Access logs are likewise limited to TCP sessions. The connection model for Postgres is the same, but thanks to Envoy's extensibility we can chain filters: we can create a pipeline of filters that each do something with the TCP connection. The PostgreSQL filter introduced in Envoy understands the PostgreSQL wire protocol; it understands the bits and bytes of the communication from client to server at layer 7, on top of the TCP connection, and uses that to expose metrics and perform checks on the traffic. The architecture is also extensible in that we can produce metadata about the traffic passing over the connection: the PostgreSQL filter uses a small SQL parser to parse the SQL messages from client to server and produce metadata that can be passed to another filter, for example RBAC, role-based access control, to grant or revoke access for users to objects, along the lines of "this user can read this table, can write this table", and so on. That's another feature introduced with this network filter for Envoy. Now, how it all started, to make a long history short: if you look at the issues in Envoy's development on GitHub, there were already older issues asking why we don't have a network filter for PostgreSQL yet.
So, in November 2019, Alvaro created an issue with a complete background and context on why we need this and how it would let us improve the observability of PostgreSQL connections. It was a joint effort: we had contributions from several companies, from the Envoy maintainers of course, and from OnGres; a lot of work went into making this happen. On the next slide we will see the timeline. In a really short time we got this merged, and now we have the ability to expose metrics about PostgreSQL connections without a polling model on the PostgreSQL backend, as Alvaro said. This work also led to further issues and new features being implemented, and of course we need help: if someone wants to help us keep improving the network filter, contributions are very welcome. This is the short timeline: in November 2019 the issue was created. The first proof of concept of the network filter was in January 2020. In July 2020 we had the first version in release 1.15. Last October, release 1.16 shipped the filter metadata, the ability to produce metadata from PostgreSQL traffic, and we have a very important use case for that which we will talk about later. The last release was 1.17, released this month, January. It included the startTLS transport socket feature, which is very important for the SSL termination feature we are targeting for the next version; we hope to finish it for the next release, due next March. That is the plan. About the metrics that are currently supported: they are counters, metrics per second, and we have errors, messages, statements, sessions, and transactions, and we drill down the counters, so the number of INSERT statements, UPDATE statements, DELETE statements, other statements, and so on. I will do a quick demo now so you can see this running. Let's switch to the demo. I have four consoles. On the top left I have a psql console, a PostgreSQL console, where I can run statements against the PostgreSQL server. On the top right I am tailing the PostgreSQL log, filtered to statements, just to show that we are not doing any polling on the PostgreSQL side to expose metrics. On the bottom left I have a debug log for Envoy, where we can see the decoder working, decoding the messages on the connection, and on the bottom right we have all the metrics exposed by the Envoy PostgreSQL network filter. Let's, for example, create a small table and see what happens from the filter's perspective. Let's create a table called customer with two columns. In the Envoy logs we can see the statements the frontend, meaning the client, sends to the server, and the response from the server, the backend, to the client. In the metrics we have sessions, how many sessions, encrypted or unencrypted; how many messages were sent from backend to frontend and from frontend to backend. In PostgreSQL terminology, the frontend means the client and the backend means the server, just to clarify. And under statements, we can see we executed one statement, counted as "other", because for the drill-down we only recognize DELETE, INSERT, SELECT, and UPDATE.
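For illustration, the per-type statement counters shown in the demo boil down to classifying each query and bumping a counter. The sketch below fakes that with a first-keyword lookup in Python; the real filter uses a small SQL parser in C++, and the counter names here are invented, not the filter's actual stat names.

```python
# Toy approximation of per-statement-type counters.
from collections import Counter

STATEMENT_TYPES = {"SELECT": "statements_select", "INSERT": "statements_insert",
                   "UPDATE": "statements_update", "DELETE": "statements_delete"}

counters: Counter = Counter()


def record_statement(sql: str) -> None:
    keyword = sql.lstrip().split(None, 1)[0].upper() if sql.strip() else ""
    counters[STATEMENT_TYPES.get(keyword, "statements_other")] += 1
    counters["statements_total"] += 1


for q in ["SELECT * FROM customer",
          "insert into customer values (1, 'a')",
          "CREATE TABLE customer (id int, name text)"]:
    record_statement(q)

print(dict(counters))  # the CREATE TABLE lands in "statements_other"
```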
Okay, and there is one part here which is disabled: the parser. The parser currently embedded in Envoy is very cool, but I'm not using it here to avoid some confusion. Let's try another statement, an INSERT for example. It's the same on the PostgreSQL side, no polling for metrics; here the statements counter increases, the specific counter for INSERT is incremented, and the flow of messages from client to server gets logged. Of course this is at debug level; please don't do this in production, it will produce very large logs. A SELECT is the same: all the messages are decoded, all the commands and response messages from client to server and from server to client, and here we see the SELECT counter increase. This is a very powerful feature because it opens the door to a lot of incredible ways to improve observability, and this is just the beginning. Let me switch back to the slides. In the GitHub repository there are some Docker Compose setups with several containers giving a complete example of the usage of this network filter, of course with Prometheus and Grafana, and here is the expected result of everything we've talked about so far: example panels for transactions per second, frontend and backend reads and writes, frontend and backend messages, the number of statements drilled down into SELECT and the DML operations, and of course the number of sessions per second. All right, so this is what is available right now; just download Envoy 1.17, the latest version, and you'll be able to use all of this. Now let's look a little bit into the future, at what is coming in Envoy 1.18, the next version, due in March this year. What we are trying to achieve for the Postgres filter in this version is support for Postgres SSL. Let me briefly explain Postgres SSL. First of all, this is not the classical TLS you might be thinking of: Postgres SSL doesn't operate at layer 4 of the OSI model but rather at the application level, so it's similar to what STARTTLS does for SMTP. You start with an unencrypted connection, a request to upgrade the connection to SSL is performed, and then you switch to an encrypted connection, just like STARTTLS. That's why this functionality will leverage the startTLS transport socket that was implemented on purpose for Envoy 1.17. Now, SSL is obviously very desirable and a requirement in many environments, but the cost of establishing SSL connections to the database is high. The obvious answer is "no problem, we use a connection pooler", and the most used connection pooler, one of the best in Postgres, is PgBouncer. But PgBouncer is single-threaded, so establishing SSL connections even via PgBouncer is expensive, and you can actually bring a PgBouncer down with a not-that-high number of SSL connections; you can basically be swamped by this. Also, turning SSL on or off, or rotating the certificates, in Postgres requires a database restart, which leads to downtime. So, is there any way we can solve all these problems? The obvious answer is to offload SSL to Envoy: we already have a filter in place, we have startTLS support, so we can do that.
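Since that in-protocol upgrade dance is what the proxy has to spot, here is a minimal sketch of the detection step: before any startup packet, the client may send an 8-byte SSLRequest (length 8 plus the protocol's magic code 80877103), and the server answers with a single 'S' or 'N' byte before any TLS handshake begins. The helper below only shows the detection and is not Envoy's implementation.

```python
# Detecting a PostgreSQL SSLRequest at the start of a connection.
import struct

SSL_REQUEST_CODE = 80877103  # magic code defined by the PostgreSQL protocol


def is_ssl_request(first_bytes: bytes) -> bool:
    if len(first_bytes) < 8:
        return False
    length, code = struct.unpack("!II", first_bytes[:8])
    return length == 8 and code == SSL_REQUEST_CODE


# A TLS-terminating proxy would reply b"S" itself and start a TLS handshake,
# or reply b"N" to keep the connection in cleartext.
print(is_ssl_request(struct.pack("!II", 8, SSL_REQUEST_CODE)))  # True
```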
This obviously avoids the performance problems I just mentioned for Postgres and PgBouncer, but it also enhances our observability and monitoring capabilities, because now we're able to get the same metrics Fabrizio was showing before for encrypted traffic too; otherwise we would just pass it through. So that's quite interesting. But even more important is that we can leverage the management capabilities Envoy has: Envoy allows dynamic configuration via APIs, so we can turn SSL on or off or rotate certificates without any database impact, without downtime, and we can even do it programmatically via API. This is coming in Envoy 1.18. The resulting architecture is just an extension of the architecture shown before, with another filter in the chain (as Fabrizio mentioned, in Envoy the filters can be chained). Here we have a chain with the startTLS extension, which decrypts the protocol, and then the rest of the flow is unencrypted through the TCP filter, which emits its own basic metrics, and the Postgres filter, which emits its own very specialized metrics. This is especially useful if you're running in a sidecar setup, where the client connects externally to the Envoy proxy and everything else from Envoy to Postgres happens over Unix domain sockets, which can obviously stay unencrypted. So, for the SSL demo, let me switch back to the consoles I showed before. Here I create an encrypted connection to PostgreSQL, and you can see the protocol reports TLS 1.3, the cipher, the bits, compression off. And in the metrics we can now see a special counter called "terminated SSL": this connection was encrypted between the client, this psql console, and Envoy, but from Envoy to PostgreSQL it was unencrypted. Using this, we keep the ability to decode the raw wire protocol and expose metrics even for encrypted client connections. This is under development, a work in progress in our team; we hope to get it merged into Envoy very soon. And that is a very simple example of offloading SSL onto Envoy. Okay, that was very cool. To close this presentation, let us briefly mention one use case and some future plans for the Envoy Postgres filter. The first one is an open source project we're also developing at OnGres, with a wider community, called StackGres; you can go to stackgres.io and check it out. It is a platform for running Postgres on Kubernetes, and Envoy is used exactly as I mentioned before, as a sidecar to the Postgres container. It transparently proxies the traffic and delivers all the metrics Fabrizio just mentioned; they are exported to custom Grafana dashboards shown in the console that also comes included with the open source software. Basically this means zero configuration: nothing to do on the database, no impact, no configuration from the user, and we provide very rich metrics of Postgres behavior gathered from the wire. And now, a glimpse into the future of what is going to come to Envoy in future versions.
On the next slide, you can see that several new capabilities are coming. First of all, the SQL parser that is included right now is going to be improved; it is not yet able to parse 100% of statements, so better parsing will let us expose more metrics and, as mentioned, feed the RBAC filter, and potentially produce more granular, per-database statistics, because Postgres supports multiple databases within the same server, so we could report statistics per database. Further out, we might add some really advanced functionality, for example query routing based on the queries we see. This would all happen, again, totally transparently to the Postgres database servers: we could do sharding, or routing of writes and read-only queries to different backends, at the proxy layer, and we could do traffic draining, switching backends to perform failovers and similar operations. And finally, OpenTelemetry integration is also very interesting to us for standardizing all of this. So, join the community; here you have the details about the Slack channel and the related issues, and there are also some references provided in the slides for you. And that's basically all. We're happy to take any questions, and I hope you'll enjoy and use the Postgres filter. Thank you very much. Thank you, guys.
|
How do you monitor Postgres? What information can you get out of it, and to what degree does this information help to troubleshoot operational issues? What if you want/need to log all the queries? That may bring heavily trafficked databases down. At OnGres we’re obsessed with improving PostgreSQL’s observability. So we worked together with Tetrate folks on an Envoy Network Filter extension for PostgreSQL, to provide and extend observability of the traffic in and out of a cluster infrastructure. This extension is public and open source. You can use it anywhere you use Envoy. It allows you to capture automated metrics and to debug network traffic. This talk will be a technical deep-dive into PostgreSQL’s protocol decoding and Envoy proxy filters, and will cover all the capabilities of the tool and its usage and deployment in any environment.
|
10.5446/52746 (DOI)
|
Hi, my name is Raphael Gomes, I'm a software developer at Octobus. We're a small consulting company; we mostly do Mercurial work in Python and Rust, but we also do other things. Today I'll be speaking about Mercurial and how we can make it go faster using Rust. This is going to be a case study of one specific endeavor we've had, but we'll also talk a little about the project itself. For those of you who don't know, Mercurial is a version control system that was started in 2005. It's mostly written in Python; it has had C extensions basically from the beginning for speed, but for the past two years or so it has also been gaining Rust code. The metric you see about the number of lines is maybe not that important, but it gives you some idea of the distribution of code. It handles huge repositories, like the ones from Mozilla for example, since we're in the Mozilla dev room, and it has a very powerful extension system that I encourage you to check out on your own, since we won't be speaking about it today. I just said that we have about 40,000 lines of C code in Mercurial, so why use Rust, why put Rust in the code base? Since Rust was created by Mozilla employees and until very recently was still basically a Mozilla endeavor, I'm sure all of you know about Rust, and some of you know more about Rust than I ever will, but I'll still explain why we chose it for Mercurial. What I'm going to say may not apply to you, but it may explain a little better why we took that decision. The first reason is maintainability. Compared to C, Rust has a better signal-to-noise ratio, which means you get fewer lines of code that are there strictly to keep the compiler happy. You get a more algorithmic, higher-level view of your code, because there is less manual memory management, there are better primitives for strings, and so on. The compiler is a lot stronger, a lot smarter, so you get much better compile-time guarantees and a lot of compile-time help, and the small language features are very nice to have. It has standardized, modern tooling in the shape of Cargo, its package manager, rustup, its toolchain manager, and a test harness by default, so all of this is very nice and pretty easy to use on the major platforms. And it's memory safe by default, by some definition of memory safety, which gives you a lot of comfort in knowing that some classes of bugs become impossible. You still have unsafe blocks as an escape hatch, but it is much harder to create those kinds of bugs, like use-after-free, and also concurrency issues like data races. So all of this is very nice, and from our perspective it's a lot easier to maintain than C. But if we're comparing it to C, it's also about performance, and that's why we're here. Rust is a compiled language, so like C it has no startup time; outside of what the OS does for you, there is no startup cost. Unlike some other compiled languages like Go or Java, it has little to no runtime; it's slightly heavier than C's, but basically negligible for our purposes. It's very nice for a small command-line application like Mercurial to be able to start up very fast. There's a joke by a Mercurial developer that the Mercurial test harness is basically a benchmark of how fast Python can start up.
There are zero-cost abstractions, a term that comes from C++, meaning nice compiler features like iterators and pattern matching that help you as a developer produce efficient code. They're "zero cost" in some sense because sometimes the cost is shifted to the mental cost of the developer, and I'm sure you've all had that discussion, but Rust has a lot of nice features that take inspiration from both new and old programming language research, which is very nice. And I think one of the biggest selling points of Rust is that multithreading is a lot easier than in any other language I know of. I'm sure you've seen the picture of the sign high up on a wall that says you have to be this tall to write multithreaded code; with Rust it's sometimes almost easy to write multithreaded code. It really depends on what you do, and you can still shoot yourself in the foot, but it's a lot harder to do so than in C++, for example. More in the context of Mercurial, having Rust allows us to attack Python from both sides. When I say this, it's just because Python is a snake, so it's funny, but I'm not talking about removing Python. The idea is to leverage the nice things about both languages. Mercurial is still very much a Python project, but we can have Rust that is called from Python during expensive operations: you're in a Python context and you call a module that happens to be a Rust extension that just looks like a Python module, thanks to the rust-cpython crate we use, and that speeds up some operations considerably when it's suited. On the other side, you can have a Rust executable, your binary, your entry point, that embeds the Python interpreter and only starts it if needed. For example, hg version shouldn't take more than a millisecond to respond, and if you have to start up a Python interpreter, resolve the imports and all that, it's a lot slower than it should be. That's a very simple example, but we can have Rust on both sides, to go fast where we can and to use the full extensibility and complexity of the Python version of Mercurial where we need it; we can pick and choose. I said we use rust-cpython for embedding Rust in Python; for embedding Python in Rust we will use PyOxidizer, although that is experimental at this point. It's written by a Mercurial developer. It's usually talked about as a packaging tool, because it is one, but for us, for me at least, it's more interesting from the point of view of being able to embed Python in Rust. So the main thing we're going to talk about today is status, the hg status command, why it was slow and how we made it faster. What does status do? It's similar to status in other VCSs: its purpose is to show what changed in your working directory compared to the last commit, whether you have modified files, added files, and so on. It has a few constraints, but the two notable ones are, first, that it needs to honor the hgignore patterns. That's like any other ignore file, like the gitignore for example; hgignore is very similar, it just has more syntaxes, but at the end of the day it's one big regex. And more importantly, by default hg status will report the status of each tracked file, but also the status of untracked files, so files that are not ignored but unknown, within the working copy.
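To illustrate the "one big regex" remark about ignore files, here is a toy Python version that ORs individual patterns into a single compiled expression tested once per path. Mercurial's real matcher handles several pattern syntaxes and rooted matching; this sketch assumes plain regular expressions only.

```python
# Toy "one big regex" ignore matcher.
import re


def build_ignore_matcher(patterns: list[str]):
    if not patterns:
        return lambda path: False
    # Each pattern becomes one alternative of a single compiled regex.
    combined = re.compile("|".join(f"(?:{p})" for p in patterns))
    return lambda path: combined.search(path) is not None


ignored = build_ignore_matcher([r"\.pyc$", r"^build/", r"~$"])
print(ignored("src/module.pyc"), ignored("README"))  # True False
```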
This matters for performance, because we have to be fast. This is what the output looks like: I have an added file, a modified file, and an unknown file; very simple. Last year, well, it finished last year, I reimplemented parts of the dirstate in Rust. I will explain what the dirstate is, but it has a lot to do with status. We used multithreading in the traversal code, the code that walks down the working directory, for a considerable speedup, because Python, being mostly single-threaded, could not really do that efficiently, and doing it in C would be a lot harder; it's not a particularly hard problem, but it's a lot easier in Rust. We also used shared iterators. Last year I gave a talk at FOSDEM, two actually, that covered this in more detail, so I won't go too deep, but basically you have a Rust collection exposed to Python as a Python collection: it's wrapped in a Python object, but it's still Rust memory in a Rust program. In order not to have to move the entire object through the FFI layer and do some very complicated synchronization between the two languages, we developed shared iterators, which let you send one value at a time while keeping everything in sync, and that works quite well. All of this already gave us and our users pretty good speedups. It's been reported to me that hg status wall time was up to four times faster with the Rust extensions; I believe the highest score was from a Nokia employee who said that on one of their internal repositories, I don't know which and I don't know anything about it, it was up to four times faster, which is already very good. A couple of distributions, Gentoo among them, have started packaging the Rust version, or at least giving people the option to use the Rust version of Mercurial; there may be others, but those are the ones I'm aware of. We're confident enough in its stability that we're using it on foss.heptapod.net. Heptapod is our fork of GitLab that adds Mercurial support, and the FOSS instance is our free hosting service for FOSS projects. It hasn't had any bugs so far, knock on wood. Also, Google employees who are also Mercurial developers have shown strong interest in using this Rust-augmented Mercurial. This is not an official Google press release or anything, but it's cool to know: people are interested, it works, it goes fast, cool. So last year I was at FOSDEM and I said the following: that's our target, that's what we're trying to achieve, but it is an unrealistic target; as you can see, I said that the right column is unrealistic. This was an experiment by Valentin Gatien-Baron, a developer at Jane Street, who made a version of hg status that was quite a bit simpler than what you actually need for a full status, or to integrate correctly within the code base, and it went just so much faster than the Python version. The left column is the Python and C version, no Rust, and the right column is his pure Rust version; as you can see, it's a lot faster. At FOSDEM I said this was unrealistic because it's simplified, and that we should aim for it but we would never be able to match it. However, we've made a new proof of concept and it's even faster than what Valentin had done, on a different repository, because I don't have access to the repository Valentin used.
So on Mozilla Central, for the first two rows, you can see the Python and C version in the first column, Valentin's experiment in the second column, and then our new proof of concept in pure Rust, and as you can see, we're quite a bit faster while also being correct; it's a working implementation. On a pathological repository, which has nothing to do with Mozilla, it's an internal pathological repository, we're very much faster than both of them. So this is very cool and very good news. But to understand how we made it go that fast, we need to look at how it works now, before the proof of concept. How does status work? If you were to write a status implementation very naively, you would open all of the tracked files, all the files that are followed by Mercurial, and compare their content to the Mercurial store; if it's different, you say it's been modified, and so on. This would work, but it would be extremely slow and very painful. The solution Mercurial came up with, in about 2005, was something called the dirstate, which I mentioned a few slides ago. The dirstate is basically a cache of what the working directory holds; well, it's not really a cache, it stores information about the tracked files and metadata about them, whatever is needed to decide whether a file has changed or not. So it stores the size, the exec bit if there is one, the mtime, and a state byte. This hasn't changed since 2005, and the dirstate has a flat structure on disk. It looks like this: the two parent hashes, just to know where you're at, and then an ordered, dynamically sized list of paths and their metadata. The old algorithm, and when I say old I mean the current one, the one not in our proof of concept, goes as follows. You walk the working directory, you traverse it and check whether each file is in order or has been modified, etc.; the exact details are not very important. You respect the ignore rules, so you don't descend into a folder if it's ignored, for example, and you keep track of everything you've encountered along the way. Then you have to compute the difference between what you saw and what is in the dirstate, because you may not have encountered everything you were supposed to: files can be deleted, newly ignored, or hidden behind a symlink, because we don't follow symlinks for security reasons. There are many issues with this approach, but the three main ones are these. It's very expensive to build the two mappings you need. The first mapping is the dirstate itself: you have to take the file on disk and build the mapping, and when I say mapping I mean a dict in Python or a HashMap in Rust. Even with a fast hashing algorithm, building the hash map took about 30 milliseconds on Mozilla Central on my laptop, which is fast for what it's doing but too slow for our purposes. Then you have to build the other mapping for the results, and then you have to do the comparison, which is quite slow. And then, for anything that is not in a clean state, you have to check whether it's ignored or under a symlink, which is very expensive because we don't have the hierarchical information needed to do it smarter: you have to check every parent of the file to see whether one of its ancestors is ignored or is a symlink. So you do a lot of backtracking and a lot of expensive work.
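For reference, that flat on-disk layout can be read with a few lines of Python. The field layout below (two 20-byte parents, then state, mode, size, mtime, filename length, and filename per entry) is from memory and may not match every Mercurial version exactly, so treat it as a sketch rather than a spec.

```python
# Hedged sketch of parsing the flat dirstate file described above.
import struct

ENTRY = struct.Struct(">cllll")  # state, mode, size, mtime, filename length


def parse_dirstate(data: bytes):
    p1, p2 = data[:20], data[20:40]          # the two parent hashes
    offset, entries = 40, {}
    while offset < len(data):
        state, mode, size, mtime, flen = ENTRY.unpack_from(data, offset)
        offset += ENTRY.size
        path = data[offset:offset + flen]    # may embed a copy source after a NUL
        offset += flen
        entries[path] = (state, mode, size, mtime)
    return (p1, p2), entries
```

Building a dict like this for every invocation is exactly the cost being discussed: fine once, too slow to pay on every status of a huge repository.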
You may end up iterating more than once over the dirstate. All of this is not very efficient, and to do better, to have a better algorithm, we need a better data structure. Since what we're looking at is a tree (basically it's a file system, folders and files), we should also build a tree. It allows for unified iteration: you iterate over the file system and over your in-memory node structure at the same time; if they match, you take the same branches and compare, and if there is any divergence, new branches or missing branches, you also know exactly where you are. It's very efficient, and you can do all of it in one pass. Ignoring things becomes cheaper, because if something is ignored you skip the entire subtree and just don't go in. Symlink substitution, when a directory becomes a symlink, is very cheap to detect, because the only thing you need to do is stat the directory on the way down, and by doing so you also skip the entire subtree. The nested .hg case, a nested repository that just happens to be inside your working directory, is also handled very easily. And if you need to run status inside a directory, say you have a big monorepo and you only want the status for a small part of it, that comes for free as well. However, a tree can also be a very expensive data structure to build: my first naive implementation took 200 milliseconds to build the tree, when I just said the hash map took 30 milliseconds. So we had to build an efficient tree with good memory locality. How do we do that? Here's the implementation. The tree has backing data on disk; that backing data contains all the nodes and is a file on disk. Then you have a vault, a struct in Rust, which holds an immutable reference to the on-disk data and a Vec of full paths that you can add to, because the dirstate is not only used for status: it's used by everything that touches the working directory. hg add, hg commit, those commands need to update the dirstate, so they need to mutate it. So we have an immutable part, the reference to the on-disk data, and a mutable part, the Vec of full paths, which gives us two kinds of nodes: the on-disk node, which points into the on-disk data, and the in-memory node, which is newly allocated when you add a new path. Both kinds of nodes go through the vault to resolve their paths, and it's basically, if you know what that is, a slot map: one big Vec, in this case two Vecs, but the principle is the same, a big Vec with indices, so you know exactly where to look for your data. It's a lot faster because everything is kept in contiguous Vecs, so a node just jumps to the right place in the Vec, and you can use Rust's type system to do some very interesting safety checks, which makes it very fast. However, to make it actually fast, the backing storage needs to be laid out a certain way. The on-disk data is append-only storage. Append-only storage means that if you need to overwrite an old entry, or delete it, you write new data instead; you never actually overwrite, you just write new data, and whatever reads the old data knows to skip or ignore it.
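Here is a toy Python imitation of the slot-map idea just described: nodes live in a flat, append-only array and refer to their children by index, so a lookup is an array jump rather than pointer chasing. The real structure is in Rust, with the read-only arena backed by the on-disk file; this only mimics its shape.

```python
# Toy slot-map style tree: nodes in one flat list, children referenced by index.
from dataclasses import dataclass, field


@dataclass
class Node:
    full_path: bytes
    children: dict[bytes, int] = field(default_factory=dict)  # component -> node index


@dataclass
class Tree:
    nodes: list[Node] = field(default_factory=lambda: [Node(b"")])  # index 0 = root

    def insert(self, path: bytes) -> int:
        current, built = 0, b""
        for component in path.split(b"/"):
            built = built + b"/" + component if built else component
            nxt = self.nodes[current].children.get(component)
            if nxt is None:
                self.nodes.append(Node(built))   # append-only arena
                nxt = len(self.nodes) - 1
                self.nodes[current].children[component] = nxt
            current = nxt
        return current


t = Tree()
t.insert(b"dir/a.txt")
print([n.full_path for n in t.nodes])  # [b'', b'dir', b'dir/a.txt']
```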
So you have unreachable, or dead, entries and reachable, or live, entries, and when you need to add new data, what you could call the mutated part, it always goes at the end. This is not new; it's used by databases everywhere, it's used by log-structured systems, and it has been used in Mercurial since the beginning. It works with dynamically sized entries. In Mercurial there is the revlog, the fundamental data structure of Mercurial, which is an append-only data structure. The difference with the revlog is that the revlog never has anything unreachable, because it's append-only precisely because we keep history. In the case of the dirstate and some other data structures in Mercurial, we do get unreachable entries, and when they reach a certain percentage, or whatever heuristic you have, you need to rewrite the entire thing. So it's append-only up to a certain point, and that rewrite is what's called a vacuum in database terms. To make a vacuum work correctly in a concurrent system, where you have multiple hg processes, on a server or even on your client, your IDE or whatever, you need to handle transactionality correctly. So this storage, because the dirstate is one of the backends that uses an append-only data structure with a vacuum, needs two files. You need a small docket file which holds the metadata about the data file. The docket has everything needed to read the data file: its size, where the root node is so you can start reading, and so on. Most importantly, it has a generation: a unique identifier, a unique number if you will, that points to the data file, which also carries that number. It's unique per vacuum cycle: every time you vacuum, you create a new file, write everything into it, and keep the old one around. That way it's technically always append-only; you rotate to a new file and do some cleanup later, when you're sure nobody else is still pointing at the old data. The docket is small enough, and we have ways of writing it atomically to disk, that it is always the source of truth. With that, plus a good locking system, you get working transactionality with an append-only, rewritable data store. This allows us to use memory mapping. There's plenty you can do with memory mapping, but what we're interested in is file-backed memory mapping. You take a file and tell the kernel to map it into virtual memory. It doesn't use resident memory, it doesn't count against your process's memory; multiple processes reuse the same pages, the same span of virtual memory, and page-fault their way through the data. If a page has already been read, access is basically instantaneous; if it hasn't, you page-fault and go read the data from disk or the file system cache. This may not seem super important in a client configuration, where you have maybe one or two instances of Mercurial at once, but on a server it suddenly becomes very nice, because if you have 40, 50, or hundreds of connections at the same time, they all use the same memory to read the dirstate. And since it's append-only, you don't have to worry about truncation: you mmap to a certain length, because it's a file-backed mmap, from the start of the file to the end, and data is only ever written after that point, or the whole thing is switched to another file.
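A hedged sketch of that docket/data-file split might look like the following: read a tiny docket to learn the data file's generation, used size, and root offset, then mmap that data file read-only. The file names and docket field layout below are invented for illustration; the real on-disk format has its own layout.

```python
# Sketch of opening an append-only, mmap-backed tree via a small docket file.
# (Unix-only; field names and file names are hypothetical.)
import mmap
import os
import struct


def open_tree(repo_store_path: str):
    with open(os.path.join(repo_store_path, "dirstate-docket"), "rb") as docket:
        generation, data_size, root_offset = struct.unpack("<QQQ", docket.read(24))
    data_path = os.path.join(repo_store_path, f"dirstate-data.{generation}")
    fd = os.open(data_path, os.O_RDONLY)
    try:
        # Shared, read-only mapping: concurrent readers reuse the same pages,
        # and append-only writes never move existing data out from under us.
        data = mmap.mmap(fd, data_size, prot=mmap.PROT_READ)
    finally:
        os.close(fd)
    return data, root_offset
```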
A well-behaving process will never rewrite your mmap while you're reading it. You can still get corruption, you can still get badly behaving processes, so you still need assertions in place, but Mercurial itself never has to worry about other instances of Mercurial messing that up. All of this basically removes read time. I said the tree was very slow to create; this basically removes the read time: you mmap the file, and since it has most likely been read before, the read is essentially free. What we do differently: when you have a tree, you're very tempted to say, I'll reuse the stems, I'll compress the paths. By that I mean that if you have a/b/c, you reuse a/b and a as prefixes of a/b/d, so that multiple paths share the same memory, your dirstate file is more compact, and it takes less space on disk. This is nice, but disk space is not very important for the dirstate, and speed matters a lot more, so we always keep the full paths: every path repeats its entire route up to the repository root. That way, when we need the full path, which happens at least once per path for status, and you can have hundreds of thousands of them, we don't need to allocate anything, which saves quite a bit of time. Only new nodes, only things you actually add to the dirstate, cause an allocation, which makes it quite fast. All of this is very nice and basically needed to go fast, but there are two other optimizations I want to talk about. The first one is not surprising, but it's important to mention; the other one may be less intuitive. The first one is simply fast directory traversal. We need to traverse the entire working copy; if you have hundreds of thousands or millions of files, that's going to be slow, but we can make it go as fast as possible. We currently have a recursive implementation that spawns a task per directory. It's very naive, we should have a heuristic in place, but it's already pretty fast, as you've seen. By a task, I mean that it's pushed onto the rayon thread pool; rayon is a Rust crate that gives you very nice parallelism primitives. You just push work onto the thread pool and the pool manages itself, with a work-stealing balancing system that works quite well, and this is an embarrassingly parallel problem, because it's a tree. I tried writing an iterative parallel walk, and it's more complicated, it's a lot more code, and it was slower when I tried it, so I didn't want to waste too much time on it; the only reason I tried was that the stack could in theory be blown by a very deep repository, but I've never seen that happen even in Python, so I don't think it's a real issue. The last component is another Rust crate called crossbeam-channel, which provides multi-threaded channels, inspired by Go's channels: a simple primitive that lets threads communicate. It's very easy to use, it's very fast, and we haven't had any issues with it, so both of those crates make it, well, not trivial, because you still have to learn Rust, but very easy to do this kind of thing. The second optimization is maybe almost as important, but, as I'll explain on the next slide, it doesn't always work.
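Before getting to mtime caching, here is a rough Python analogue of the traversal strategy just described: one task per directory submitted to a shared pool, with results pushed from every worker onto a queue. The real implementation uses rayon and crossbeam channels in Rust; Python threads will not show the same speedup, this only shows the shape of it.

```python
# Rough analogue of "spawn a task per directory, send results over a channel".
import os
import queue
from concurrent.futures import ThreadPoolExecutor

results: queue.Queue = queue.Queue()


def walk(pool: ThreadPoolExecutor, directory: str, pending: list) -> None:
    with os.scandir(directory) as it:
        for entry in it:
            if entry.is_dir(follow_symlinks=False):
                # Each subdirectory becomes its own task on the shared pool.
                pending.append(pool.submit(walk, pool, entry.path, pending))
            else:
                results.put(entry.path)   # a real status would stat/compare here


with ThreadPoolExecutor() as pool:
    pending: list = []
    pending.append(pool.submit(walk, pool, ".", pending))
    # Children are appended before their parent task completes, so draining
    # the list until it is empty waits for the whole tree (GIL keeps the
    # list appends safe enough for this sketch).
    while pending:
        pending.pop().result()

print(results.qsize(), "files seen")
```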
So it's mtime caching. The mtime is the modification time of directories, and it's supposed to be updated whenever a file in a directory is removed or added. So we compare the mtime of the directories on disk with the version in the dirstate, because we also store directories in the new format. If it's different, we need to read the directory on disk; otherwise we can just stat the children, because we have the entire list of children, because it was cached, and that's faster than reading the directory, and then you recurse. That way you can even cache ignored files, because the only important information that we need about them is whether they exist or not, so depending on the heuristic and the size of your repository, that might be a good idea. But as I said, it doesn't always work. It only works with certain pairs of OSes and file systems that we have to be quite careful about choosing. Like any cache, it needs to be properly invalidated: whenever the parents of the dirstate change, if you move file systems, if there is a new ignore rule, etc. And also your file system or your OS granularity might not be good enough, so you need to be careful about some ambiguity that can happen if you read at the same time as someone is making a change. There's little stuff like this to get right, but if you do get it right, it goes super fast. So, to recap: when you type hg status, a lot of stuff happens, but what we're interested in is the backing system. You open the docket file of the tree, you read it, it gives you the metadata you need, you mmap the corresponding data file; it's probably already been mapped before, so the read is very, very fast. The code already knows how to access the data, so in Rust terms you transmute the data into the fields of your struct, and it doesn't have to read it, it just knows where to go for whatever node. And the root node, so the point of entry, is given by the docket, so you know where the root node is, the root node knows where its children are, etc. Then you iterate over both the tree in memory and the working copy, you check for any differences and divergences, you also check for symlinks, and by doing so, if you're doing mtime caching, you get two of those for the price of one, because you already need to stat either for the symlink checks or for mtime caching, because you need the mtime of the directory. So that's very nice, it's only one syscall, and then you send all of the results over the channel from all threads, and you collect them and print them to screen. So to recap, the performance that we had on the Mozilla working copy was very, very good doing this with our proof of concept. The dirstate is used in many commands, it's not just used in status, it's used in commit, diff, update, purge, files, etc. So this change has had and will continue to have a good impact on a lot of commands. I wanted to take just a minute or so to talk about why, from our perspective as Mercurial developers, Linux is the simplest platform of all of them; this is just me venting, kind of. Paths in Linux are just bytes; you can't have a null byte, but that's it, any byte will go, and the fact that it's transparent for the kernel means that you never have any sort of normalization or encoding/decoding. It's very, very fast, no allocations, and it supports all encodings. Great for us.
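Going back to the mtime caching for a moment, here is a rough, hedged Rust sketch of that check for a single directory. The CachedDir type and its fields are invented for the example; the real dirstate stores this differently and has to deal with all the invalidation and granularity caveats just mentioned.

```rust
use std::fs;
use std::path::Path;
use std::time::SystemTime;

struct CachedDir {
    mtime: Option<SystemTime>, // None when caching is disabled or unreliable here
    children: Vec<String>,     // names recorded the last time we read the directory
}

fn children_of(path: &Path, cached: &CachedDir) -> std::io::Result<Vec<String>> {
    let on_disk_mtime = fs::symlink_metadata(path)?.modified()?;
    if cached.mtime == Some(on_disk_mtime) {
        // The directory mtime is unchanged, so no entry was added or removed:
        // the cached listing is still valid and we can just stat the known
        // children instead of reading the directory again.
        return Ok(cached.children.clone());
    }
    // Cache miss: read the directory; the caller would then refresh the cache.
    fs::read_dir(path)?
        .map(|entry| entry.map(|e| e.file_name().to_string_lossy().into_owned()))
        .collect()
}
```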
We have a great number of file systems; we use Btrfs, and if you don't like it, there's probably another file system that you like. It gives you nanosecond mtime precision, well, it gives you mtimes at all, it does deduplication, etc. On macOS and Windows you have Unicode normalization issues and case sensitivity issues, so it's sensitive in one way but not the other, which gave some very interesting security issues, and it's a lot more code, it's slower; HFS+ in general is terrible, and I know that not all macOS users use HFS+, but you have to account for that. On Windows you have the aforementioned issues, but you also have very slow file I/O, so slow that, a few years ago, the Python implementation started spawning a thread and putting all of the calls that close file handles in that separate thread, so that the main process doesn't take seconds and seconds to close them. It also has more footguns with memory mapping and transactionality, which I won't get into, but working with Linux is very nice in that sense. It's nice, but it maybe could be faster, because a lot of our time in the new proof of concept is spent on system time instead of user time. It might be a signal that we're doing good stuff, it might be a signal of many different things, but maybe Linux could do something faster. For example, whenever we have to read a folder we need to do an opendir call, then readdir in a loop and then closedir, so it's a lot of syscalls, it's a lot of back and forth, and I've seen at least one person on the internet try to use the underlying syscall directly, getdents; it was about 20% faster, so maybe we could do that. We could also use fstatat instead of plain stat, so instead of statting a file by its full path you stat the file relative to its parent directory, you give it its file handle, so the kernel, or at least the file system, doesn't have to also walk its own tree; maybe that will be faster. I also know that the kernel is currently working on asynchronous I/O, and maybe it'll give us good batching stuff, because it's not really an async problem that we have, but maybe there are some good batching primitives that we can use so it's faster. If any kernel developers or enthusiasts are present, let me know. So the conclusion is: Mercurial is always getting faster, it's been that way since the beginning. We have a lot of Rust endeavors; I don't really have time to go into that, but if you're interested you can talk to me in chat. Basically we can get very, very good performance benefits using Rust. Non-Rust stuff is also happening, by a lot of developers that are not in Octobus. For memory reduction and the size of the repository, we have a new revlog format that will be a lot more efficient, with fewer files and just better information, which allows us to have better algorithms, etc. So keep an eye out in the next few months. We'll probably have new releases, we'll do stuff on Twitter, we'll talk to Mozilla developers, so update your Mercurial, maybe use the new stuff, and thank you for listening.
|
Mercurial is a Distributed Version Control System mainly written in Python. While it is often the VCS of choice for monorepos for its great scalability, certain parts remain slower than they should be. Over the past two years, an effort to rewrite parts of the Mercurial core in Rust has seen multiple significant wins in performance, even compared to C implementations.
|
10.5446/52747 (DOI)
|
Welcome to my home office and to the FOSDEM 2021 Mozilla developer room, where I'll be talking about Mozilla history. There's 20 plus years of this history around and it's still going on. I will give you an overview of the origins, the past and the present of the Mozilla project. My slides are already up at slides.kairo.at/fosdem2021. I am Robert Kaiser, my nickname is KaiRo. You will find my personal homepage at home.kairo.at, and you can email me via kairo.at. I'm a Mozilla rep and tech speaker based in Vienna, Austria. I'm not very active on social networks, but you will find me on Matrix and some others. Links for contacting me are in the slides if you look them up. As I said, slides.kairo.at, you will find it in the list there. But let's go back to when the internet was very, very young. In 1993, NCSA Mosaic was pretty much the browser of choice for everyone. It was a university project, and the most common web browser of the very few that existed back then. One of its co-writers was Marc Andreessen, who saw that there's a lot of potential in an internet browser, and a lot of potential for making it a commercial project. So he co-founded a company for a commercial web browser, and basically rewrote Mosaic for that. He wanted to create basically the Mosaic killer app. This "Mosaic killer" got merged with the Godzilla monster into the code name Mozilla. It was used as an internal name, a code name for the code and for the browser they were writing. And the user agent strings that start with Mozilla are still a reminder of that era, because basically the user agent was supposed to be the name of the browser slash the version. But that's its own history. They even created a green dino as a mascot for the project. This green dino is copyrighted by the artist Dave Titus. I learned over the years that you need to be very careful with that copyright to not misuse the green dino anywhere. But if you want to, you need to ask Dave Titus. The public and commercial name for this browser project later became Netscape in 1994, but the internal name stayed as Mozilla over the years. So, some years later, Netscape decided to open source its code, which in the meantime had grown from a browser to also including a mail client and a web editor. They announced in 1998 that they would open source it, and they decided to use the Mozilla code name as the project name again. They also needed an open source license for this project. The GPL was too confining for them, because they wanted to base the commercial Netscape browser on it. So they created their own license back then, because there were not a lot of licenses to go around. That was called the Netscape Public License, written by a lawyer working at Netscape at that time by the name of Mitchell Baker. We will come back to her. That license later morphed into the Mozilla Public License, when the name Netscape was switched to the name Mozilla in there. They created mozilla.org as the website and project host for this code project. They also created a new version of the mascot, a revolutionary red dino, because open source was very revolutionary back then and it was stuff for the masses. So they alluded a little bit to communist imagery with that and had the tagline "heck, this technology could fall into the right hands", which completely embodied this philosophy: the code was the central thing there, and they wanted to get hackers and the community into this project.
The target for the project was to create an internet application suite as the base for Netscape 5 and later releases. The open source code actually launched to the public on March 31, 1998. There's a quite interesting documentary that preserves this, more or less a time capsule, named Code Rush; you can find it at archive.org. It's worth a view if you want to hear more about how Netscape actually managed to open source its code, and it's also a nice time capsule of how Silicon Valley worked back in those days. The code itself, the product that they were building, was the Mozilla application suite. In late 1998, when the project had been going on for a few months, they realized that the new in-development browser layout engine, Next Generation Layout or NGLayout as they called it, or Raptor as its code name, was actually a very speedy component, and a web engine that was HTML 4 and XML compliant, so up to the task of the newest standards. And they decided that, because this also needed to render web forms and had all those buttons and things in there anyhow, they could build their whole user interface based on this Gecko engine. Now, that caused them to rewrite the whole application suite based on browser technologies; that was called XPFE, the cross-platform front end. And where HTML was not up to the task of rendering user interfaces back then, they created their own XML-based user interface language, XUL, that used CSS and JavaScript just like HTML does today. They added XBL, which basically is the grandfather of web components, and a few other technologies like XPCOM and RDF/XML that didn't work out that well in the long run, but were considered state of the art back then. So there were end user releases called milestones in the beginning; M3 to M18 were those releases. The first one I downloaded was M5 in 1999. I immediately reported a few bugs in how it rendered websites. Some of them were even fixed, which was a complete success story for myself: I reported a bug and they fixed it. And so that's how I became involved in the community in 1999. In the year after, Netscape finally wanted to release something out of this project and called it Netscape 6, because so much time had already gone by since they announced that they would develop Netscape 5, so they called it Netscape 6. But the Mozilla project wasn't really feeling like this was stable code yet, so they called it 0.6 internally. Netscape 6 was also not that successful, because it was not that stable. And it took us another two years to release Mozilla 1.0 in June 2002. From there, regular development continued and releases were done up to 1.7 in 2005, when it was deprecated in favor of Firefox and Thunderbird, which were much more successful back then, and the suite itself was transitioned to the community in 2005 as well, under the name SeaMonkey. I was involved in that project for quite a while. It still continues, but I'm not involved anymore nowadays. I said Firefox was way more successful. But that project even goes back to before Mozilla 1.0 was released: in April 2002, so a few months before Mozilla 1.0, a small group of developers from Netscape and the community came together to create what they called m/b, standing for mozilla/browser, which was the place where this was in the CVS repository of Mozilla at that time. And the readme said the idea is to design the best web browser for most people.
They didn't think that the suite was really fit for that, because it had a web editor and a mail client with it, which most people don't need. It had a lot of menu options that only a small number of people need, which were probably better solved by something like add-ons, which didn't exist at the time in that form, but were a thought for that. And so they said, let's do a leaner, cleaner, browser-only user interface, but let's use the same technologies, because some other experiments with native UI didn't go as they hoped. So they used the same browser-based technologies for the user interface, but did it in a leaner, cleaner, browser-only way. The first development releases of that were called Phoenix; because of trademark issues they switched to Firebird, and then, because there's another open source project called Firebird for a database already, they finally came to Firefox, which was developed for a few pre-release versions, and version 1.0 was released on November 9, 2004, which we in the project have considered to be the birthday of Firefox since then. This browser also came, in this 1.0 version, with a completely new concept: you could actually search the web right from the start page, right from the address bar. Google at that point was there right where you needed it, and you didn't go to Google first before using search. That was a new concept. Nowadays every browser comes with search right up front, but back then this was a very new thing, and that was also something that Firefox pioneered at that point. Because it was really a good browser for most people, and it was secure, it rapidly grew over a few years up to hundreds of millions of users, where it still is today in this area. And it started a whole new push of Internet innovation, because before that the web was mostly considered dead. There was not a lot of development of new stuff happening on the web, but with Firefox, a modern browser coming to the masses, there was a new push for new stuff and new innovation. Along with all of that, Netscape on the other hand went down. Market share tumbled. There was less and less interest from users, and even AOL, who had bought Netscape some years before, had less and less interest in developing it. And in the end, in 2003, AOL pulled all developers out of browser code and browser development, which in a normal business would mean the end of it all, but you cannot just kill an open source community. And the suite actually had managed to grab the interest of a whole lot of people in the community. And so the leaders of the community back then (and Mitchell Baker had been in the philosophical lead of mozilla.org for all those years after she wrote the license) talked to AOL: can we get all the servers? Can we maybe get a little bit of starting funding, and we'll do our own separate organization? They created the Mozilla Foundation, which we internally call MoFo, with that one-time funding that they could get out of AOL. It had no employees in the beginning, Mitchell Baker served as chairwoman of the board, which she does up to now, and they lived completely off donations. But then, when Firefox 1.0 was released, revenue sharing for this web search integration came in: web search results have ads, and companies like Google make money with the ads on those results, and that revenue share was given to Mozilla. Suddenly there was money in the foundation, people could be employed, and a whole new paradigm started.
That went so far that, for tax reasons, this didn't fit into the nonprofit definition in the US anymore. So a subsidiary was created that could legally take in all that money, and that was called the Mozilla Corporation, or MoCo as we call it internally, in 2005, so shortly after Firefox 1.0, as I said, for tax reasons. After that, the product development was in MoCo, and mission-driven, non-commercial work stayed directly at the foundation. Mozilla continued for a long time with the dino, the red dino, still being the mascot and the logo for Mozilla; in 2017, a new Mozilla logo was created. You can see that at the bottom of this graphic on the slide, which is just this stylized Mozilla with the colon double slash in it that you see in all URLs on the internet. So this whole foundation shift also made a shift in how the project saw itself, because initially it was just the virtual meeting place for the Mozilla code inside of Netscape. And that continued as long as it was the vehicle for Netscape to build their new products. But in 2003, so around when the foundation was created, a first actual mission statement was introduced on the website, and that was to preserve choice and innovation on the internet. Now this mission statement has pretty much stayed the same, though it was refined and rephrased. Today it's phrased as: our mission is to ensure the internet is a global public resource, open and accessible to all. An internet that truly puts people first, where individuals can shape their own experience and are empowered, safe, and independent. Which is longer, which gives more details on what we mean with all that, but in its essence, in what it means behind the scenes, it's pretty much the same thing. It's just phrased differently. In 2007, the mission was expanded into an actual Mozilla manifesto with 10 principles that actually tell how we as a community see how the internet should be shaped and what the technology of the internet should look like. A bit more than 10 years later, 11 years later in 2018, we added a pledge for the human experience as an addendum to the manifesto, because we saw that it's not only about the technology; the human interaction on the internet is also part of what we want to see in this open movement. I'm saying we, because I see myself very much as part of Mozilla, and I was part of the feedback cycles giving input into especially the manifesto and the addendum. It was always very interesting to be in this governance part of the Mozilla project. Over the years, there have been a lot of other projects and products inside Mozilla, and I can only present an excerpt here. I know I have spent a lot of time already on the early years, but I think that's very important, because that's something easily forgotten nowadays. But I will go through a list of some projects, and this is already two pages long here, or two slides long. But a few notable ones out of those more than 20 years of Mozilla: in 2005, MDC, the Mozilla Developer Center, was started for developer documentation. In the meantime it has morphed into MDN, which originally meant Mozilla Developer Network, and it's basically the documentation for web technologies on the internet now, even partnered up with companies like Google and so on. XULRunner was basically a spin-out of the application platform that Firefox was built on.
It was a runtime that you could use to build desktop applications with web technologies, something like Electron later on, but based on XUL, because HTML wasn't ready at that point in time. Nowadays things look a little bit different; that's also the main reason why it has been discontinued. Rust, as a new programming language geared for parallelism and memory safety in conjunction with speed, was a personal project of a Mozilla developer since 2006, started to be sponsored by Mozilla in 2010, and has risen up a lot since then. It powers a lot of what Firefox is today, and the creation of a separate foundation is in progress, because it basically has outgrown what Mozilla is and it's a separate, but still connected, community. Other projects were not so successful, like Mozilla Persona with the BrowserID protocol, which was the dream of replacing the Google and Facebook and so on login buttons with a decentralized authentication system; that only lasted for five years and was discontinued in 2016. Servo, an experimental browser engine built with Rust, was created as a research project in 2012, then developed in cooperation with Samsung, and moved to the Linux Foundation recently, in 2020. And Firefox OS was Mozilla's attempt to get a third mobile operating system out there, next to Android and iOS, that would be based on the web, using web applications; it never reached critical mass though, and was discontinued after three years, in 2016 as well. You will see that around this year of 2016, a lot of change happened at Mozilla, also connected to the big investment that Mozilla had in Firefox OS and that not working out. So a number of projects were shut down and other new projects, new ideas, were brought in at that point. Like the work on virtual reality: integrating virtual reality into the browser was very revolutionary in 2015, and Mozilla was at the forefront of that, also creating A-Frame as a framework for easily building WebVR, or now WebXR, applications. That moved to the community in 2018 and is still continuing. Pocket went the other way around. This was an independent add-on called Read It Later since 2007, but in 2017 Mozilla actually acquired it, integrated it into its suite of projects and integrated it into Firefox. Mozilla Hubs, a social VR platform, was created in 2017 and actually became pretty important in 2020 with the pandemic, because it's a good place for people to meet virtually. WebThings, a smart home gateway with privacy in mind, was also created in 2017 and moved to the community recently, in 2020. Mozilla started creating some privacy tools that play with Firefox, like Lockwise as a password manager and Monitor as a data breach checker, which were introduced in 2018. Very recently Mozilla went into the VPN market; Mozilla VPN exists since 2019 and 2020 and is still in the process of being rolled out. 2020 was another bit of an inflection point, where Mozilla had to scale back because of the COVID crisis and because of trying to get within its budget limits. And so some things were moved out, some projects that the community was very excited about but that didn't have as much traction as Mozilla wanted. But with that, we come to where Mozilla is now. Currently Firefox for desktop and mobile, and actually also for virtual reality as Firefox Reality, as well as Pocket, are still developed at MoCo. Some privacy and security products were added and are continuing there, like Monitor, the VPN, Lockwise, things like this.
Some research is also being done into new offerings like Hubs and so on. The Mozilla Foundation still does mission-driven work like the Mozilla Festival, the Common Voice project, advocacy and other things. Thunderbird is still under the Mozilla umbrella, under the foundation, basically parallel to MoCo, but as a very small, separate entity. And then there's a whole list of related projects. I'm sure there are a number of those, like Rust, Vuxilla, WebThings, KaiOS, which partnered with Mozilla. There's a whole lot of projects that are connected in some form to Mozilla, to the community, to MoCo or MoFo itself. And Mozilla is still very vibrant, at least at this point. I cannot look into the future, but the history ends with the present. So that's where we are right now, and so that's where this talk ends. I hope that there is a bright future for Mozilla. There is a long past with those more than 20 years. There were some ups and downs in that. It was an interesting time. I could only provide an overview. I have seen more than 20 years of the community; it even existed a little bit longer, as you saw here. And I very much hope that we will see at least 20 years more in this project and in this community. Thank you for listening to this. As I said, I can be found at home.kairo.at, where you also find my contact information. And of course Mozilla can be found at mozilla.org, as it was in the beginning, as it is now, as it hopefully will be in the future. Thank you. Okay, we're about to start the Q&A. The question is if someone arrives; I'm sure people are coming. The live thing is about to change to a recording, that's how we know it's starting. What's the next talk in the room, Anthony? So I see the widget with the Q&A now on the page. The room will open in a few seconds, if it works. Let's see. The Q&A won't open for 15 minutes. People can just ask in the chat and we can answer live. I'm sorry about that. We have a first question that I see from D Baker, which is asking where can one... I posted in the chat as well: there is public information about that, where it comes from, and where it goes to, roughly. Mozilla publishes a report every year, usually in the second half of the year, for the year before. The 2019 numbers were published a few months ago. The 2020 numbers will come later this year, but it's basically all public, at least the rough numbers, because Mozilla is a non-profit organization, so they need to publish that. I think you already answered most of the questions in the chat. One thing I mentioned as well: the Wikipedia page for Mozilla and the Wikipedia page for Mitchell Baker both include a video of a talk she gave in 2012, I think, that has a lot of interesting stories around this transition from AOL to the Mozilla Foundation as well. So that's worth watching. I also linked that in the talk page for this, and I also linked the video on the Internet Archive for the Code Rush movie that documents how Netscape open sourced their code for Mozilla in the beginning, which is an interesting view into what Silicon Valley culture was at that time as well. So Johanna is asking, how do you envision the next five years for Mozilla and its community? There's this old proverb that prognosis is hard, especially when it's about the future.
But I think that if the community stays strong behind what the mission and the manifesto of Mozilla are, then I think Mozilla can have a good future. It may not be a future that's just the Mozilla Foundation itself, though. It may be a future where it's the Mozilla Foundation and a number of other like-minded projects around it. Like, for example, the WebThings project, or the Open Web Docs, the open web documentation (OWD) community, and things like that. And I think if we all pull together, there's still a chance to, as Mozilla likes to put it, unfuck the Internet. But a lot of people need to pull together for this, and I think this can be a good way forward in general. I think we still need to be the people who push in public for openness and for freedom on the web, like FOSDEM and all the people involved in FOSDEM have done for 20 years now, I think. So I think there's still a lot to do for Mozilla, and I hope that there is a good future for that. I cannot go into concrete things because I have no clue how concrete things will work out. So we have a question from Auxet, which says: Mozilla is quite dependent on Google for funding. Is there any way to change that? That's a tough question. Yeah, it's hard to change that. So for one thing, yes, it's dependent on Google; that's also one reason why 2020 was a hard year for Mozilla. Because when COVID first hit, the whole Internet ad market went down, and that also meant Mozilla's income went down, because basically what happens is Google is sharing ad income from the search results with Mozilla. So Mozilla could theoretically sell that to any other search engine, and has tried that in the past, for example with Yahoo; but Verizon, who bought Yahoo, is heavily against net neutrality, so it was not a partner that Mozilla could still work with, at least that's how I interpreted it. And so as long as this search market is something Mozilla depends on, the dependency on Google, or someone like Google, will be there. But things like the VPN are a way for Mozilla to get some income besides that, and hopefully more stable income, and hopefully income that is not dependent on the ad system that we're not too happy with, privacy-wise. And I mean "we" as all of the Mozilla community, including MoFo and MoCo. And so things like the VPN, things like Pocket Premium, are ways for Mozilla to become more stable in income, and I think people who want Mozilla to be more stable should think about buying those things just for that reason. But as long as search is in play, there is of course some dependence on someone who is doing search, and that's mostly Google, because people want to search with Google, unfortunately. So I don't think I see any question. Does anybody have any question? And that might be an interesting point that Andre is talking about. So somebody said that he is doing a monthly contribution to Mozilla, donating at mozilla.org, and could you maybe explain what kind of projects these donations are contributing to? So if you donate to Mozilla in that way, that goes to the foundation, which means that goes directly into projects that are very closely tied to the manifesto and the mission: things like Common Voice, MozFest and all those things that the foundation is doing, advocacy, things very closely related to the mission. If you buy things like the VPN and Pocket Premium, you're putting it more into the product development, which the corporation is doing.
Because of laws, the two entities cannot freely share their money, so it's very different what the money is used for and where it flows to. And Hank is asking, is there no subscription for Mozilla Corp? There is no subscription for the corporation itself. That question came up a lot recently, whether people can contribute to Firefox development directly. The only things you can do right now are to buy the VPN, if it's already available in your area, which goes directly to MoCo, and to buy Pocket Premium, which also goes directly to MoCo. Maybe there will be other things in the future, I don't know. I see a question saying: why such a resistance to allowing people to donate for Firefox and nothing else? I think you pretty much covered that already. Yeah, it's not a resistance to allowing people to donate. It's mostly the laws around the foundation in California. If you donate to the foundation, it cannot give to the corporation, and the corporation can only give to the foundation in a limited fashion. But the corporation cannot really accept donations labeled as that in a tax exempt way and things like that. So they need to invent some kind of product you buy. It may be an idea for Mozilla to do some kind of product through which people can support Firefox, but I don't know of anything that they're doing right now. I'm sure they're thinking about what they can do, but there's nothing openly discussed right now. Maybe there's something internally discussed; I have no idea of that at the moment. So I think we're mostly done with the questions, am I right? Yes, I believe so. Well, there is a question about technicalities, like why not use the European entity for donations to Firefox, but I don't believe that KaiRo or anybody is able to talk in the name of Mozilla about tax implications. Yeah, I think the one person that could talk about that would be Mitchell Baker herself, because she's a lawyer and she's CEO and she's chairwoman of the board of the foundation, and she knows what she can talk about publicly or not. But otherwise, I cannot talk about technicalities, I'm not a lawyer. And the European entity per se doesn't exist anymore. Yes, that's also true. There's only a subsidiary of the MoCo thing that exists in Denmark and that covers the rest of the European countries. Yeah, that's a technicality of a tax construct, basically. So the best way, if you want Mozilla to be successful, the best way to help is by making it gain user share. And the main thing you can do to help is make people use Firefox: install Firefox on the computers of your parents and your friends. And if any of you are technical, run Nightly; the Nightly link is in the description of the Mozilla room. Run it. Make sure that you update it daily, so they get reports and they have a better product. And the other thing is: hold meetings on Mozilla Hubs, which is also still officially a product of Mozilla. And... thank you.
|
We sometimes hear statements like "Mozilla is one of the oldest Free & Open Source projects in existence today, with more than 20 years of history - and still going strong". But where exactly did this project come from? What happened early in its history? What did the project go through to come to where it is today? This talk will try to answer those questions and compress multiple decades into less than an hour - or at least give an overview of the big-picture events this project lived through. As the audience may be more familiar with recent than earlier years, more emphasis will be put on times when the project was still young - for some things even reaching back to times before the speaker joined the community in 1999. After attending this talk, you will hopefully have a better understanding of the background of the Mozilla project and how it has helped shape the web for the better, something that will hopefully continue into the future. The roots of Mozilla go back almost to the beginning of the web itself. The first broadly used web browser in university circles was NCSA Mosaic - the co-writer of that piece of software created a commercial variant back in 1993, going under the commercial name of "Netscape" when it was released and became the first major web browser in Internet history. But its code name, right from the start, was "Mozilla". When Netscape open-sourced its code in March of 1998, that code name became the public name of the open source project, and over the years, Mozilla attracted a large community of developers, localizers, and more. A non-profit Mozilla Foundation was created, with a "corporation" subsidiary for tax reasons, and a huge list of projects and products have been associated with Mozilla over its more than 20 years of history. The most well-known is of course the Firefox web browser, which has been one of the most-used open-source products for many years - hundreds of millions of people browse the web with Mozilla Firefox even nowadays. After decades, this project and community is still going strong - hopefully continuing to do so in the future. The speaker has been with this community for over 20 years himself and the talk will give you an overview of the origin, past, and present of this interesting project.
|
10.5446/52752 (DOI)
|
Hi everybody, thanks for joining me. It's such a pleasure to see that FOSDEM is alive and well during these strange times and that we're able to keep bringing our little community together. My last round as a speaker in this dev room might have been 10 years ago, but FOSDEM has always felt a little bit like coming home, and not just because of the lingering smell of French fries in the hallways, which I am missing dearly. My talk is scheduled to be the last of the day, and as a long time attendee myself, I know that our dev room is visited by the sharpest talents in the MySQL world, and I'm really proud to get to work with some of these people regularly. After so many years, I'm happy to be continuously impressed by all the new developments, but being at the cutting edge of this space and listening to talks that focus on solving problems at the highest levels all day can also create an impression that using MySQL and becoming a part of this community is quite intimidating. So here I am to lower the standards. Just kidding. Here I am to extend a hand to all of those who are here to learn more about MySQL by necessity, because it's hard not to nowadays, but also because, thanks to its long life and history, it can be really hard to know where to start when all you really want is to make sure that things aren't actively on fire. So my name is Liz van Dijk, and despite having spoken in this dev room before and having considered myself a part of this community for the past 10 or so years, I don't actually use or think about MySQL that often. In my role as a solution architect currently, I'm more often called on to sound knowledgeable about various parts of the stack rather than actually having an environment where I can build real world experience. So I find myself constantly needing to review even the most basic things and making sure that what I'm putting together or demoing hasn't diverged too far from the current reality of things. I'm what you could call a perpetual beginner at MySQL, and I believe, given how widely MySQL is implemented, that probably aligns pretty closely with a lot of the engineers to whom it's just one more part of a very diverse and rapidly evolving stack, but still one they want to learn about. So I work for PlanetScale. I fulfill a technical presales role there and I love it as a company; PlanetScale has got a very strong vision of what the database of the future looks like, and it's very deeply invested into the MySQL ecosystem as well as Vitess, which Shlomi also covered during his talk. I wish I'd accumulated anywhere near the chops you and the others in this dev room have in these past 12 years of being involved with MySQL, but the reality is my knowledge of various parts of the stack is kind of like a sandcastle, and in my role I've had to learn how to be effective at starting from scratch multiple times and finding the right tool belt to get me 80% of the way there. So that's what I'm hoping to share with you here today. So you're a developer, of course you know of MySQL. You can write queries, sort of, or at least you know your ORM does, and for the most part it just feels like a bit of a dinosaur that's more or less a given wherever you encounter it. No wonder, the software is so old. Clearly there ought to be better solutions for things, but we're all just kind of used to it, right?
So no matter the actual situation, most people don't grow up dreaming of becoming a DBA specifically, and more and more the responsibility over this very critical part of the stack has become shared with that of many other components, and so we have just as many Stack Overflow DBAs as we do developers. First of all, I am one too, so hello friends. As you can imagine, I'm floating up and down the stack every day and only have a couple of moments to really sit down and learn things, so I'm a very good Googler. In fact, to write this talk I looked up "top MySQL DBA tools in 2021" just to make sure I didn't have any glaringly obvious omissions. Of course everyone's kind of got their own favorites, but my main goal for this talk was to collect some of the tools and the areas of knowledge that any five minute DBA should be thinking of first when stepping in to care for what's often their company's most critical set of data. There's so much movement and history to be found on the internet that it can be hard to identify what is proven and has stood the test of time versus what is a brand new bleeding edge development that you probably shouldn't be unleashing right away. So without further ado, here's what I'll be running down. Keep in mind any of these areas have multiple books' worth of material to dive into, some of them written by the people in this very dev room, as well as having seen 25 years of ongoing development. So I'll mainly be pointing you to the first baby steps on most of this, but I hope it can give you a bit of a guide as to the main areas to have a plan around when dealing with MySQL in your development stack. Although we like getting cozy, and from a community standpoint we certainly maintain very strong ties, it's very important to be aware of the heavily diversified landscape around MySQL nowadays. It's especially pronounced for us at FOSDEM this year, as MariaDB has graduated into having their own dev room track. The list on this slide is far from comprehensive, as cloud specific implementations of MySQL are kind of popping up left and right, not unlike PlanetScaleDB. While all of them still speak MySQL, when finding solutions to specific problems that you're experiencing, or understanding which tools may or may not work, it's good to get a sense of what flavor exactly you're dealing with, as well as its version. So a quick way to spot your server version is by logging in using the CLI client and scanning the first couple of lines. You can see that on the slide. Looking at my local example, you can see I'm running version 8.0.22, which more or less already rules out MariaDB, since they opted to break away from Oracle's versioning past 5.5. Taking a closer look at Homebrew's packaging reveals that they're pulling it in from MySQL.com, which means this is a normal community edition, and that's good information to have. Regarding versioning in general for MySQL: I'm sure there's a reason that MySQL was stuck in a different dimension for about 13 years in 5.something-land, but coming into it as a new user it may not be all that obvious that 5.7 and 8 are actually only one major upgrade apart. So MySQL 8 was officially released two years ago, but it's pretty common and not overly worrisome to still be using 5.7 at this point. If you do happen to be on 5.6 still, I believe you have exactly until the end of the month to make your way up, as official support for that is going to be ending very soon.
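As a concrete illustration of that version-and-flavor check described above, something like the following works on any of the flavors mentioned; the connection options are omitted and would depend on your setup.

```sh
# Identify exactly what you are running; both statements are standard.
mysql -e "SELECT VERSION(); SHOW VARIABLES LIKE 'version_comment';"
```

VERSION() gives you the release number (a MariaDB build will say so right in the string), while version_comment usually tells you which distribution or build you are on.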
Now, cloud versions generally relate back to Oracle's, and Percona Server tracks those as well. Most of what I talk about here applies to all flavors, but knowing which one you're running specifically is going to be quite helpful if you're looking to get help addressing more complicated challenges. One thing that's good to bear in mind is that the jump from 5.7 to 8 introduced quite a few changes, in some cases different ways to handle certain basic operations. So oftentimes you'll see that reflected: compatibility of tools with either version tends to be explicitly called out in the documentation, so here too that might be the case. Now, in all my years of dipping in and out of MySQL and often needing to come across as knowing exactly what I'm doing, which are the tools that I've always come back to? First off, I don't really bother with GUI tools unless I'm planning on moving in long term, so I can't really help you there. There are a few nice administrative ones available, like Sequel Pro or Adminer, but I generally don't even really bother, because the default MySQL CLI has evolved so much over the years that I hardly even need to look beyond that. I love it because it runs basically everywhere, it comes with a boatload of documentation built in, and honestly it's all I need 99% of the time. So having spent a bit of time over the years getting comfortable with it has really paid off in efficiency. If you haven't heard of MySQL Shell, which is the CLI tool's up and coming replacement, you should check out the linked presentation; it's a presentation by Fred and it'll give you a sense of what's coming there. I think it's quite exciting. The second point here is the Percona Toolkit, which I'm assuming will have been referenced quite a few times in this dev room already today. This is probably still Percona's most widely used and loved set of software, and it's the end result of many years of consulting on MySQL implementations of all possible sizes and shapes; they built some tools that really help create visibility and understanding at all of those levels. The list of what's available can be quite overwhelming, so I'm just going to call out a couple of gems in particular on the next slide. But for now, the last piece of software I would recommend, as someone who's often confronted with a list of potentially questionable queries to pore over while only having topical, theoretical SQL understanding myself, is called sqlcheck. This is a tool that has attempted to take the work of our community's very own Bill Karwin and his brilliant book called SQL Antipatterns, and it applies that analysis and those recommendations against a query or a list of queries that you provide. For me, this has been a great help in identifying problematic structures, and I suppose I really like that it is a more hands-on way of learning about how that theory applies in realistic environments. So to come back to the Percona Toolkit: in the wild you'll be able to identify these tools by the pt- prefix that they all use, and their applicability is very diverse. I wanted to call out a few of the most commonly used ones, as they're ideal for diving into your environment and solving problems very quickly. pt-archiver allows you to safely and gradually move data around between tables, into an external file, or simply purge those rows altogether. The reason you might want to use this is that on very large tables, running delete statements could lock up writes to those tables, and so those end up being a blocking operation.
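To make that more tangible, a typical pt-archiver invocation might look like the sketch below; the host, schema, table and retention rule are all made-up examples.

```sh
# Purge rows older than 90 days from a large table in small, non-blocking bites.
pt-archiver \
  --source h=localhost,D=appdb,t=audit_log \
  --where "created_at < NOW() - INTERVAL 90 DAY" \
  --limit 1000 --txn-size 1000 \
  --purge
```

The small --limit and --txn-size values are what keep each step short enough not to hold locks for long.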
So what pt-archiver does is it takes your instructions and it diligently works its way down the entire table. I think in the documentation it's described as nibbling down the table, by breaking that archiving job into smaller, non-blocking steps. The next tool here is called pt-duplicate-key-checker. It's one that helps you understand which keys in your schema may be duplicates or redundant; those generally end up taking up unnecessary disk space. So this is a great little spring cleaner if you're dealing with a schema that's seen many years of gradual growth and lots of haphazard attempts at optimizing performance by adding various indexes. Now, pt-query-digest might be the single most helpful tool in addressing bad performance in MySQL. It's used most efficiently by allowing it to analyze a chunk of representative traffic, which you can capture by turning on the slow query log, or you can query the built-in performance schema in MySQL to get a list. Depending on the amount of data available, pt-query-digest can spit out a normalized list of those queries that your server spends most of its time on. Those could either be objectively slow or badly written queries, but they could also be very fast ones that are just executed many, many times. This overview that pt-query-digest gives you shows you exactly which queries to target for optimization. After all, even if a query takes just five milliseconds to complete, if that particular pattern takes up 80% of your workload and you can bring it down to three milliseconds somehow, that will allow you to stretch your resources much further. Next in this list is pt-stalk, which does more or less exactly what you would expect it to. It's built for those situations where there are sudden unexpected shifts or spikes that are disrupting your environment, but you can't quite figure out where to look first, and once you've logged in, the problem's gone away and nothing seems immediately wrong. So pt-stalk is a tool that can be set up to wait for the problem to start occurring, and then it proceeds to collect an absolutely ridiculous amount of information from every bit of tooling imaginable. That way the spike is captured and monitored to its fullest extent, and you can calmly sift through all of the data afterwards; you don't have to worry about it finishing before you're able to identify what's going on. Now, pt-variable-advisor is a tool that I think we all wish existed a long time before it actually did, and at this stage it might not be as useful anymore, but it's still a nice little tool to run against your environment, especially if it's a little bit older. Because initially, MySQL and InnoDB's various startup variables used to be the first stop for really glaring performance issues, and quite often making tweaks in that config could squeeze out an incredible amount of extra performance, even in a single instance. Nowadays those base variables tend to be a lot more intelligently set, so there's not really that much magic that we can do here anymore, but it's still a useful review that's always very helpful in avoiding the most obvious performance issues. So I'm going to skip over the others in this list right now, as they're going to come up again later, but whether you're a five minute DBA or a 15 year one, if you're not using at least one of these tools at various points in your career, you're definitely missing out. So, switching over to privileges.
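Before moving on, here is roughly what the pt-query-digest workflow described above looks like in practice; the threshold and the slow log path are placeholders you would adapt to your own configuration.

```sh
# 1. Capture a slice of representative traffic in the slow query log.
mysql -e "SET GLOBAL slow_query_log = 1; SET GLOBAL long_query_time = 0.1;"
# 2. After letting it run under normal load for a while, summarize where the time goes.
pt-query-digest /path/to/your-slow.log > digest-report.txt
```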
Considering that relational databases tend to store a company's most business critical data, security in the way that access is provided is often overlooked, and for MySQL specifically it can also be misunderstood. Because MySQL keeps an internal table called mysql.user, it's important to understand that permissions or privileges are granted to both a user name and their location, meaning the host that they're signing in from. So a user might be allowed to read from a remote location and only be able to write when signing in from localhost. If you use a client from a dynamic IP address, you might not necessarily be allowed to log in again when that IP address changes, etc. So at times this can be both frustrating and confusing, and as a result grants are sometimes created haphazardly over the years and rarely, if ever, reviewed. Similarly, by default, when MySQL is installed it's not overly concerned with locking down root access, assuming that you will be setting that up yourself, but even the most seasoned DBAs are known to forget that step from time to time. So when going over the environment and trying to make sure that it's in good shape, always make sure that you review which grants have been allotted, and also consider running through this executable called mysql_secure_installation, which is included by default and is designed to help you close the most glaring security holes on a fresh installation. Now, we covered this a little bit when going over the tool belt earlier, but here's another look at your basic monitoring and troubleshooting tool set. MySQL can be very greedy when it comes to resources, so when performance slows down it's always important to look at your operating system level metrics, just to get an understanding of the nature of those interactions. Are you maxing out CPU? Are you I/O bound on your disks? Or have we outgrown the available working memory? We might just be doing all of those, but it's still good to take a look at those symptoms before trying to make a diagnosis of what's actually happening. The Percona Toolkit again provides a couple more tools to help you relate your operating system level metrics to database performance, mostly by offering them in more readable formats. So just be sure to have those ready, and when something happens you can take a closer look there. Another piece that might be interesting to cover here: when you already know that you're being plagued by certain slow queries, the EXPLAIN function allows you to gather information about how the optimizer interprets a query and what it expects to do to fulfill it. That could uncover a need for additional indexes, or different ways that you can write your queries to be more selective, so it's very helpful for dealing with specific troublesome queries. And then lastly, if you're not already monitoring most of this permanently, tracking it over time seems unlikely, but you should. There's another tool by Percona called PMM, and that's probably your quickest fix, as it's absolutely packed with features to help you get a deeper understanding of your database environment and how well it's doing, so definitely check that out.
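To tie the privilege review and the EXPLAIN step above to something concrete, these are the kinds of commands involved; the account name, host pattern and query are invented examples.

```sh
# Who can log in, and from where?
mysql -e "SELECT user, host FROM mysql.user;"
# What exactly is a given account allowed to do?
mysql -e "SHOW GRANTS FOR 'app'@'10.0.%';"
# How does the optimizer intend to execute a query that worries you?
mysql -e "EXPLAIN SELECT * FROM orders WHERE customer_id = 42;"
```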
A logical backup should be considered essentially a text-based representation of your database at a specific point in time. It's usually structured in such a way that when it's fully executed, the end result is that it rebuilds your schema and its data in full, wherever you might need that to happen. So this is a method that's often used to allow developers to rebuild the database structure in their own environment, or it allows teams to extract and transform the actual data to some extent and then store that in a different place. What tools like mysqldump and mydumper do is effectively log into your database as though they were a normal user and scan all of your tables to translate that data into something executable, in this case executable queries that allow you to restore that particular state. Alternatively, when we're talking about physical backups, we take a more system level approach: we target the actual data files that MySQL stores its tables in, as well as grabbing a copy of its working memory at that particular point in time. I might be generalizing quite heavily here, but on the whole, taking a logical backup tends to be a bit of a slow and steady process, especially when it comes to recovering, because all of that data needs to be read and written into a new system again, whereas taking physical backups and using that strategy is generally the path to bringing your environment up and running again a lot more quickly, and they also tend to be executed with less impact on the running system. Whichever method is being used, both of those approaches generally make it difficult to restore the system to a very specific point in time; you're kind of bound to the moment at which the backup was triggered. So in situations where user action has caused irreparable damage and you'd like to restore to a state just before that action happened (if that's a situation you find yourself in, I hope it wasn't you causing it, but even if it was), what you need to be taking a look at is MySQL's binary log tooling, and pray that your backup strategy includes backups of the binary logs as well, because the tools there will allow you to replay those logs against a set of data and recover to a very, very specific point in time, if you're able to pinpoint when the mistake was made. So everything I've talked about here so far has focused on squeezing as much performance and reliability out of a single instance as possible, and honestly, as long as you have room to grow within those confines, make it count. Before choosing to take on more administrative responsibilities, try to ensure that you're not needlessly abusing or wasting resources, by ensuring that that single instance is as healthy as possible, and only then consider throwing more hardware at those problems. Evidently, lots of the talks you may have seen here today have referenced companies that are faced with scaling issues or challenges that need to be addressed, and so over the years MySQL has had to learn how to play well with others, and there are a couple of ways that you can start to consider that. MySQL replication has been around for some 12 or 13 years now. It provides a very straightforward, not entirely synchronous, way of replicating data across multiple instances, but on the whole it allows application architects to direct reads across multiple targets, and the idea of course is that we can share the load amongst those. In recent years, I'd say MySQL's built-in replication features have really hardened to a point where they can cover most scenarios where read scalability is required, but of course, for people wanting to introduce an additional layer of high availability for the write targets as well, as well as guaranteeing synchronicity between cluster members, there are now solutions like Galera and InnoDB Cluster that have moved in to cover those bases, so those are some of the options for you to consider.
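Coming back to the backup side of this section for a moment, here is a hedged sketch of the two moving parts discussed: a logical backup and a binary-log replay for point-in-time recovery. The file names, the date and the binlog name are placeholders, and a real recovery needs the binlog coordinates recorded alongside the backup.

```sh
# Logical backup: a consistent snapshot without locking InnoDB tables,
# recording the binary log position so you know where replay should start.
mysqldump --single-transaction --master-data=2 --all-databases > full-backup.sql

# Point-in-time recovery: restore the dump, then replay the binary logs up to
# just before the damaging statement.
mysql < full-backup.sql
mysqlbinlog --stop-datetime="2021-02-06 14:29:59" binlog.000042 | mysql
```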
In recent years I'd say MySQL's built-in replication features have really hardened to a point where they can cover most scenarios where read scalability is required, but of course for people wanting to introduce an additional layer of high availability for the write targets, as well as guaranteeing synchronicity between cluster members, there are now solutions like Galera and InnoDB Cluster that have moved in to cover those bases — so those are some of the options for you to consider. Now, the challenge with offering synchronous writes across a cluster of multiple machines is that those writes can only be accepted as quickly as the slowest link in that cluster, so scaling write workloads beyond the confines of an individual machine is still more or less out of reach with these systems. That's when something like sharding comes into play, which is a way of stretching what is intended to be a single database schema across multiple machines. This allows for an intelligent distribution of writes as well as reads, and it is of course where Vitess and my employer PlanetScale come into play. Sharding can be considered for a variety of other purposes as well, but using it to escape a limitation on your write performance is something it's used for quite often. The last links I have on this slide are included particularly because they pertain to solving certain problems at scale, especially when it comes to dealing with large volumes of traffic or when you're trying to apply changes to fleets of MySQL servers intelligently. Altering table structures is an action that's not quite as trivial when the act of doing so will lock the table up for others writing to it, and when that change also needs to be propagated across any number of replicas, it becomes a very complex situation. So at this stage, if this is the kind of operation you're considering, you're likely not considered a five-minute DBA anymore, but definitely make sure you familiarize yourself with tools like pt-online-schema-change as well as GitHub's gh-ost, as those are going to help you address those operations with the least amount of impact possible. Lastly, we've heard about it today already in Shlomi's talk, but I absolutely have to include it here as well: when you're managing MySQL topologies — whether it's two or twenty or two hundred servers, whether you're using Vitess or any other combination of technologies — Orchestrator is really critical in helping you manage and understand your fleet, so definitely give that a look when you're considering any configuration of MySQL servers and coordinating across them. So there we go — I think that's more than enough information to get you started on your way. Maybe you only ever need to reference the first slides in this deck, or maybe you've already grown into needing to consider ways of scaling out, but wherever you are on your journey, don't forget to bring your towel, remember that there's a whole community here to support you, don't hesitate to reach out directly if there's anything you'd like to discuss in more detail, and again, consider joining our Slack community — we'd be really, really happy to have you on board. So thank you very much.
|
Did you wake up one day to find a baby database left in a box by your front door? While it was cute and fairly self-sufficient at first, has it now hit database puberty and is it making you wish there were such a thing as DBA school? Did you feed your database after midnight or let it get in contact with water and is it now making your life a living hell? Don't panic, because here's a 20 minute cram session of the most basic database parenting skills as well as general things you should be aware of when starting out with MySQL.
|
10.5446/52754 (DOI)
|
Welcome all. Today we are going to talk about running MySQL on ARM. We will see why it is advantageous to run MySQL on ARM and what more we, as developers, can do to optimize it further for ARM. With that quick note, let's get started. My name is Krunal Bhouskar, and I have been driving this MySQL-on-ARM initiative. A quick intro about me: I have been working in the MySQL space for more than a decade now. In the past I have worked with Percona, Oracle, Yahoo Labs, KickFire and Teradata. Currently I am working with Huawei as part of the open source DB group, where we are trying to make all the open source databases — not only the databases but also the complete ecosystem — optimal for ARM. With that quick intro, let's jump to today's agenda. We will first talk about the ARM ecosystem to get everyone on the same page, then we will talk about the state of the MySQL ecosystem from a user perspective and then from a developer perspective. We will wrap up with the community and what the community can contribute. I am sure all of us have heard about ARM processors. They are widely used in all kinds of applications — primarily in mobile phones and network equipment, but also in a lot of smart home appliances and IoT devices. There is also growing use of ARM processors in smart driving cars and, of course, in space programs. What is newer, and has been catching a lot of traction lately, is the use of ARM for high performance computing. Now what exactly has fueled this? ARM's low cost of ownership: because ARM consumes less power, the overall cost of ownership over a period of time is lower, and that has been one of the key driving factors. Also, ARM processors are pretty powerful. They can provide better or on-par performance, and on top of that they come with more cores, which means more computing power that can be used to get better throughput for the same cost. Not only on the hardware front — on the software front, too, there has been quite good development. Almost all OS providers now release a port for ARM, and most software has already been ported, including most of the popular open source databases, MySQL among them. And importantly, there is a growing user and developer base. A lot of new-generation developers who have been working on ARM, maybe through a Raspberry Pi or a similar kit, are already used to the ARM ecosystem, so every year the developer base grows, and that also fuels the complete ARM ecosystem. Another important development in the last couple of years has been the easy availability of ARM instances through cloud providers, and now even in desktop machines. Huawei offers ARM instances using its Kunpeng processor, Amazon does it through Graviton, and other providers plan to offer them too. We are already aware that Apple has released the M1, which is powering their MacBooks, and there is news that Microsoft is also working on some kind of ARM chips. So overall, everyone knows that ARM has been gaining a lot of traction, all vendors are trying to port their respective software to ARM, the hardware front is getting stronger and stronger, and the ecosystem in general is growing. So with that quick note about the ARM ecosystem, let's talk about the state of the MySQL ecosystem from a user perspective.
So whenever users think of moving from one system to another, they would like to consider these four parameters — and, being open source, community support also makes sense as one of them. The general questions are: will the new system have a complete feature set? What about the performance? What about the ecosystem? So let's evaluate MySQL against these four parameters. On the feature-set front, Oracle has been releasing packages for MySQL on ARM starting with 8.x; they are very well tested and marked GA, so we can rest assured that they come with the same quality directly from upstream Oracle. Also, we haven't found any feature or sub-feature that has been marked beta or experimental, so all the features that you get on the other platforms are available on ARM. Extended features like binlog replication or group replication are also available with the same quality, and they are supported. So a user can say that the MySQL feature set is complete on ARM. If we talk about performance, MySQL scales well on ARM — of course we are going to look at some graphs for it — and the good part is that there is still a good number of patches waiting to be accepted; once they are accepted, something that is already scaling well will start scaling even better than its current levels. The MySQL team has also been developing and tuning new features so that they are compatible with higher core counts, which helps as well. On the ecosystem front, as we all understand, a database is not a standalone component; there are a lot of other supporting components, such as backup tools or load balancers. So can we build a full-stack ecosystem using open source components on ARM? Yes, we can. At least one or two components from each category are available on ARM, most of them directly from the upstream repositories, and this list of tools keeps growing as more and more vendors find it important to port their software to ARM. If we talk about community support, there is a pretty active community: more than 30 patches have been contributed by developers from different organizations all over the world. There is also a dedicated MySQL-on-ARM channel on the MySQL Community Slack where you can ask questions, and if you have any queries to share, the community will be happy to reply. So let's look more closely at the ecosystem point we just talked about. What we tried doing is selecting only open source products and checking whether they are available on ARM, to see if we can actually mark the full stack as complete. Of course the server comes directly from Oracle; binlog replication and group replication are inherent parts of the server, so the HA solution is taken care of. For backup we can use Percona XtraBackup — community validated, it works on ARM. On the load balancer front, ProxySQL and MySQL Router both have official packages for ARM from their respective repositories. PMM, again validated by the community and by Percona engineers, also works on ARM. Connectors, again, have official packages directly from the Oracle repository.
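If you want to confirm from SQL which platform a given server build was made for, a quick check looks like the sketch below; the exact strings depend on the build, but on an ARM server the machine value is typically aarch64:

    -- Server version and the platform the binary was compiled for
    SELECT @@version, @@version_compile_os, @@version_compile_machine;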
On the DB tools front, MySQL Shell again has official packages, and Percona Toolkit — most of those tools are Perl scripts — just works out of the box; the community has already tested them. So overall, if a user wants to boot up a full stack on ARM, it is pretty much possible, because in every category there are at least one or two tools that work on ARM, and the list just keeps growing — the picture was very different one year back; now we have more components, and more are being added to each and every category. And if you care about running things on Kubernetes or some other orchestration, then of course Kubernetes and Docker Swarm are also available on ARM. If we talk about performance: ARM is a different processor, so if you want to compare ARM with its counterpart, there are two different models for the comparison. One is the model where we keep the cost and all other parameters constant and allow the compute power to differ. Of course, ARM being cheaper, ARM gets more compute power — almost 2.5x more — and as you can see, with this extra compute power ARM scales well against its counterpart; in fact, in some cases the difference is almost 2x, and once the scalability point is crossed, in a few cases ARM just continues to scale and maintains the lead with a significant margin. That was with a uniform distribution; this is with Zipfian, which means there is more contention, and as you can see, even in the more contended case ARM scales very well. At all these points the cost is constant, so for the same cost a user gets more TPS, or more work done. If we go back to the traditional model where we let the cost differ — of course we know ARM is cheaper — and give the same resources, including the same compute power, to both machines, then some may be surprised that ARM is powerful enough to beat its counterpart at higher scalability. As you can see, in some cases there is a significant margin, in other cases it is on par, and once the scalability point is crossed it consistently stays ahead. In the read-write case at higher scalability ARM may lag, but as we said, a lot of good patches have been contributed by the community; we experimented with them, and one single patch by itself reduces that gap by 50% — and it is already a small gap, which gets further reduced by just that one patch. So there is huge potential: even with the same compute power, ARM can beat its counterpart, and that means you get the same or better performance for roughly 50% lower cost on average. That was with uniform, and again this is with Zipfian, that is, with more contention, and here too we can see the same pattern: ARM consistently beats its counterpart. So with that quick intro, I am sure users now have at least an idea of the state of MySQL on ARM and what advantage they could expect by moving to ARM. But as developers we can push it further, so let's look at some of the things to consider.
See, ARM is a different architecture, so whenever you want to port or even tune any software for ARM, you need to consider some of these key differences. For example, ARM has a weaker memory model, so you should use proper, optimal memory barriers. Low-level hardware instructions will be different if your software uses them. There are cache line size differences; there are the optimized atomics introduced in the ARMv8.1 standard; hardware parallelism through the NEON instruction set; more cores and more NUMA nodes — we are going to see some examples of that; branching differences; and 64-bit optimizations — often an optimized loop using 64-bit registers is enabled only for, say, x86 processors, so you can add the ARM processors to that list and ARM can take advantage of it too. Low-level constructs like timers and spin loops can also be tuned for ARM. So overall, any software that needs to be tuned for ARM has to address all these aspects, and MySQL is no exception; a lot of the contributed patches fall into these categories. We will see what could be done further. Global counters and state variables: we all understand that global counters are pretty important for any software, including MySQL. The problem with a global counter is that with an increasing number of threads the contention increases, because all threads need to access the same variable, and that also creates the problem of cache line sharing, which drastically affects performance. So MySQL switched to using distributed counters. The idea is very simple: a counter is split into N parts, and the aggregate value of those N parts makes up the value of the counter. This was a good idea and it helped reduce contention, but MySQL uses random number generation to select a part. So core 1 may, at one point in time, update part P3, at another point update P5, and sometimes P7 — because the part is selected randomly. This helps reduce contention, but it can still cause cache line sharing issues and may also result in cross-NUMA traffic; we will see more about that on the next slide. A possible solution would be a kind of static mapping, so that, for example, core 1 always updates P1, core 2 updates P2, core 3 updates P3, and so on. We could use something like sched_getcpu(), which gives us this static mapping. The only thing to watch out for is that the latency of sched_getcpu() shouldn't outweigh the cross-NUMA latency it saves. To stretch this further, another problem you would discover is that if core 1 allocates all of these counter parts, then the counter memory will be allocated only on the NUMA node that is attached to core 1. But since parts like P5, P6 and P7 will be updated by cores located on, say, another NUMA node, it doesn't make sense for those cores to always incur the cross-NUMA latency. So what if we could allocate the parts in a distributed fashion and keep them closer to the respective NUMA nodes, that is, the nodes the updating cores are attached to? This is possible by using NUMA-aware memory allocation. The problem with NUMA memory allocation is that it is always aligned to the page size, so even if core 1 asks for, say, 256 bytes on its own NUMA node, it will get 4K, 16K or 64K, based on whatever the page size is.
So that is a problem. How could we solve it? We could have a central allocator, because there are so many distributed counters. Every counter can request memory from the central allocator; the allocator can interleave the memory across multiple NUMA nodes and hand out slots that can then be used by the respective cores, and those slots will be local to the respective cores' NUMA nodes. That would help reduce the cross-NUMA latency. That is one of the ideas. Now, not all global variables need to be distributed. You can have a normal variable that is just incremented, but you should use a proper memory barrier. In a quick experiment we did, the difference is almost 10 to 15 percent at higher scalability when using the proper memory barrier versus the default one. The same goes for state variables. A state variable holds a discrete value, and the traditional approach of mutex lock, update, unlock may not be a good idea; we can switch to plain atomics, and as we saw in a micro-benchmark, the gain is roughly around 50 percent. That is the magnitude of gain you can observe by switching from the traditional mutex to atomic updates. Now, more NUMA nodes — let's understand this through an experiment. We started our MySQL server using 28 vCPUs and executed a workload — sysbench with 1024 threads — and got a certain TPS. When we increased the vCPUs from 28 to 56, doubling them, we of course saw a reduction in contention and the TPS went up. When we increased it further from 56 to 112, we saw a degradation in performance. That was a bit surprising: we gave it more vCPUs, and instead of an improvement we saw a pretty significant degradation, almost 40 to 50 percent. The problem here is the distance between the NUMA nodes. For example, the distance between node 1 and node 2 is 16 units because they are on the same socket, but the distance between node 1 and node 4 is 33 units because NUMA node 1 and NUMA node 4 are on different sockets. These socket differences — the NUMA distances — play a very critical role once you start running software across a larger number of cores. What we expected, since we gave it more computing power, was less contention and better throughput; what we found was exactly the opposite. Cache line sharing, data relocation and wait graphs all took over and disturbed the equation. We did some quick perf profiling, and what we found is that going from one NUMA node to two there is indeed a reduction in contention, but the moment we move from two to four NUMA nodes, contention increases — almost doubling. So what we have traditionally been fixing, or talking about, is the scalability bottleneck: getting more from the same number of cores. Now we should talk about the NUMA bottleneck: getting more from more cores when those cores are distributed across NUMA nodes. Configuration-based settings — this is an interesting one. You may be aware of the spin loop inside InnoDB.
Now, it was presumably tuned, whenever it was created, for x86-type processors. With an ARM processor that tuning may not hold exactly, so you will probably have to figure out the best possible value for your ARM processor. For the ARM processor I had access to, performance was optimal when the value was 24, and I saw a 6% improvement versus the default. But I tuned it for my ARM processor; there are many vendors producing their own ARM processors, and while there is a common set of architecture guidelines, the implementation of low-level details can differ. So it is not the case that 24 will be the optimal value for every ARM processor. The best approach would be to make this code completely resilient to the underlying architecture. With x86 it was still fine because it was coming from only one or two vendors, but with ARM there are many vendors, each with their own way of implementing certain things — 24 was optimal for me, but it may be something else for another ARM processor. Workload distribution: we all know InnoDB allocates its buffer pool memory in an interleaved fashion across multiple NUMA nodes, so if you have 100 GB of memory and 4 NUMA nodes, each NUMA node holds 25 GB. That is good, but what about the system and user threads — are they also bound in the same way? Unfortunately, no. We leave it to the OS scheduler. Sometimes the OS scheduler does a good job, but sometimes it creates a skewed distribution. As an example, with more than 64 threads on my 4 NUMA nodes I got a pretty uniform distribution, but when I ran with 8 threads, most of the threads got allocated to NUMA node 2 while the other NUMA nodes were idle — a skewed distribution. So this is an idea to explore: if we enforce a uniform distribution even for smaller numbers of threads, and especially for system threads — maybe with some kind of soft binding of user and system threads so they don't leave their NUMA node — can we get better, and more stable, performance? Now, InnoDB supports up to 64 buffer pool instances, so if the threads flushing from a flush list are localized to a given NUMA node, they will not have to make a cross-NUMA trip to a page located on another node. In the same way, purge locality could be explored, and all of this would also benefit the parallel query work — MySQL has already enabled a parallel query module, which I think is being evolved further, and it would also benefit from an even workload distribution. Those are a few of the things we talked about, but beyond that the community has contributed a lot of other things, for example checksum optimizations, weak memory model fixes, timer and spin loop optimizations, NUMA and 64-bit optimizations, and branching. Overall, a good number of patches have been contributed, and over time, as these get accepted, MySQL on ARM — which is already scaling well — will push past its own limits and scale even better against its counterparts.
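As a small illustration of that kind of configuration experiment, here is a SQL sketch; the value 24 is only the number that happened to be optimal on the ARM machine described above, so treat it as a starting point to benchmark, not a recommendation:

    -- Current spin-loop related settings
    SHOW GLOBAL VARIABLES LIKE 'innodb_spin_wait%';

    -- innodb_spin_wait_delay is dynamic, so different values can be tried
    -- and benchmarked without restarting the server
    SET GLOBAL innodb_spin_wait_delay = 24;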
So with that overview from the development perspective, let's talk about the community. The community is pretty widespread — we have seen developers from different organizations contributing from all over the world. There is a dedicated MySQL-on-ARM channel; if you have any ideas or would like to discuss anything, you can always post it there and the community will surely respond. And this is more of a community wish list for MySQL, that is, for Oracle: start improving the distro support, which is currently limited to CentOS and RHEL — and with CentOS being phased out, more distro support will surely help; put a bit more effort into evaluating patches from the community, some of which are more than four years old; keep expanding across-the-board testing, which has started gaining pace, with a lot of new patches being tested on ARM as well; and look beyond the mainline server path, covering other things — whether it's Performance Schema, clustering, or tools like connectors — and making those optimal for ARM too. So with that, we will now open the session for Q&A. Before that, thanks to the organizers and sponsors for making this possible and providing the opportunity to share these ideas. And of course you can stay connected: we have a blog where we write about the new things that are happening, and you can join the Slack channel and follow the tweets. Thank you.
|
MySQL joined the ARM ecosystem with 8.x release. This opened up a completely new vertical and provided a cost-effective alternative to users. With multiple cloud providers providing ARM instances more and more users/developers are getting interested in running MySQL on ARM. Let's explore what it means to run MySQL on ARM through enhancement done, what more could be optimized, patches already in pipelines, features that could benefit from more cores on ARM, state of ecosystem how it could be further improved, any special configurations to tune, performance, stability, notes on migration, do and don't, etc...
|
10.5446/52758 (DOI)
|
Hello, welcome everybody to my session: from a single MySQL instance to HA. This is what I call the journey to MySQL InnoDB Cluster. Thank you for joining this session at FOSDEM. Let's start with who am I. My name is Frédéric Descamps, also known as lefred. You can follow me on Twitter if you are looking for MySQL information. I am a MySQL evangelist, part of the MySQL community team. I have been managing MySQL for a long time — the first version I installed was 3.20. I am a DevOps believer living in Belgium, and I have a blog where you can find a lot of information related to MySQL, called lefred.be. And as you can see, the hairdressers are still closed in Belgium, so I apologize for this haircut right now. So, the evolution to HA: this is what we are doing today. We start from a single instance and go to full HA using MySQL. The start is a single MySQL instance — you just install MySQL and you use it. Some tips that are really necessary if you don't want to lose data when there is a power cut or something like that: exclusively use an ACID-compliant storage engine — in our case that's InnoDB — and keep durability at the defaults, meaning that all the transactions you perform are written to the InnoDB log files, so at least when there is a crash, crash recovery runs and you don't lose data that has been committed. So we are running with one machine, we are very happy, and everything works as expected. But what's next? The next level is that our database becomes more and more important, and losing it might be an issue. So what can we do, and what do we need to plan? To do that, we need to set some targets, and these targets are the RTO — to start with, we'll set it to hours — and the RPO, set to one day. What does that mean? RTO and RPO are metrics we use to design an architecture for our database. RTO means recovery time objective: how long does it take to recover the service, for the service to be up again and usable? RPO is the recovery point objective, defining the maximum amount of data we can afford to lose. So to start, the RTO is hours — we have some hours to recover and set the system up again — and the RPO is one day: we can lose at most one day of data. When we have these targets, what are the solutions? The solution is a very common one, known by everybody but not deployed by everybody — so a warning here: if you don't do it, do it now, and certainly before an upgrade. That solution is backups. You need to have backups: physical backups — there are commercial and open source options; in MySQL we use MySQL Enterprise Backup — and logical backups, which you can restore anywhere. For logical backups, people usually run mysqldump; please try to avoid mysqldump as much as possible and use the new MySQL Shell dump and load utilities, which are much faster and more convenient, and which you can use to dump a schema, dump a full instance, and also migrate those instances to the cloud, for example to OCI — it works perfectly. So now we are happy: we have this MySQL instance, there is durability if there is an issue, we have a backup, and we can recover. It takes time to restore the full backup, of course, and we can lose data — up to one day of RPO. That's a lot, right?
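As a quick sanity check of that first stage, the durability defaults and the storage engines in use can be verified with a couple of statements — a sketch, assuming a stock MySQL 8.0 server:

    -- Both should be 1 (the 8.0 defaults) so committed transactions
    -- survive a crash or power cut
    SHOW GLOBAL VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
    SHOW GLOBAL VARIABLES LIKE 'sync_binlog';

    -- Any user table not using InnoDB is a durability risk
    SELECT table_schema, table_name, engine
      FROM information_schema.tables
     WHERE engine <> 'InnoDB'
       AND table_schema NOT IN
           ('mysql', 'information_schema', 'performance_schema', 'sys');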
So if we want to reduce that to minutes — I agree to lose some data, but not a whole day — I will change my goals: the RPO becomes minutes, while I still have a lot of time to recover. And here we need to concentrate on two issues. The first issue would be, for example, a human one: a user with SQL access runs a DELETE on a table and forgets the WHERE clause, deleting all the data. In this case, how can we recover the data? We need to use the binary logs, and to have durable binary logs — durable is always better, meaning that when a transaction is committed it's written to disk and we are sure we don't lose it. This is the default in MySQL 8: log_bin is enabled and sync_binlog is set to 1. Watch out: reducing durability means you are losing durability — this sounds obvious, but it's a fact, and it's common. So now we have the binary logs: our user runs the bad delete, we can restore our backup and then replay the binary logs. These binary logs are files where all the changes to the database are stored and can be processed later by the DBA — unlike the InnoDB logs, with which the DBA cannot do anything directly; these ones, the DBA can use. The next level is that my data is still very important, it has a heavy workload, and I want to lose less than a second of data. So the RPO goes from minutes to seconds — and it's not only a user doing something stupid, but also a system issue, or a system engineer doing something stupid directly in the MySQL data directory. I've seen that already: people go there and say, oh, this is a big file, let's delete it — a very bad idea. So now: RTO in hours, RPO less than a second — we don't want to lose data. How can we do that? Of course, if we want less than a second, we need those sync durability settings — that's very, very important. We can enable GTIDs — this is not mandatory, but it is much more convenient, certainly for the next steps later. And — this is also important — we offload the binary logs in real time. To enable GTIDs, the first thing is to decide whether you can restart MySQL or not. If you can restart, it's easy: you run SET PERSIST_ONLY gtid_mode = ON and SET PERSIST_ONLY enforce_gtid_consistency = ON — these two variables need to be enabled. As you can see, I don't modify my.cnf manually; I let MySQL 8 do it from SQL with SET PERSIST, and it takes care of everything for me. These commands are also very convenient when you are using the cloud, for example, where you might not have access to my.cnf. That is when you can afford a restart. If you cannot afford a restart, you can change the GTID mode progressively. The first step is SET PERSIST enforce_gtid_consistency = ON — as you can see, this time I use SET PERSIST, not PERSIST_ONLY, so I modify the running value too — and then I change gtid_mode step by step from OFF to OFF_PERMISSIVE, ON_PERMISSIVE, and finally ON. This is how you do it without restarting MySQL. Then we want to offload the binary logs, and to do that we're going to create a dedicated user in MySQL that will be used for it.
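Written out as statements, the two pieces of this stage look roughly like the sketch below — this shows the online (no-restart) GTID path, and the host, password and exact account name for the binlog-streaming user are placeholders you would adapt:

    -- Enable GTIDs online, one step at a time
    SET PERSIST enforce_gtid_consistency = ON;
    SET PERSIST gtid_mode = OFF_PERMISSIVE;
    SET PERSIST gtid_mode = ON_PERMISSIVE;
    SET PERSIST gtid_mode = ON;

    -- Dedicated account used only to stream the binary logs off the server
    CREATE USER 'catbinlog'@'192.0.2.10'
      IDENTIFIED BY 'a_strong_password' REQUIRE SSL;
    GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.*
      TO 'catbinlog'@'192.0.2.10';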
So we're going to create this user — in my case I call it catbinlog — give it a password, say that SSL is required, and then grant the replication privileges we need. Finally, on a dedicated machine, we use the mysqlbinlog program to read from the remote server and stream all the binary logs onto the local file system. Of course, this is just the basic command — you can make it nicer with a systemd unit or a script with loops and checks — but this is fundamentally what it needs to do. Once that is running, all the binary logs are streamed live to another machine, which is very, very useful. After that, if you need to, you can do point-in-time recovery, and if you want to do it faster — if you have a lot of data to process because there was a heavy load on your system — there is a blog post that explains how to do it much faster in parallel; I invite you to read it. So now we have this single MySQL server, we are taking backups somewhere, we are saving the binary logs somewhere else, everything is durable, and we are very happy. And now my service is more and more important — my business is going well, my application is going well — and I need to pay even more attention to MySQL, because when we are down, we are losing money; I want to be up much faster than in several hours. What can I do? I want to reduce my RTO, my recovery time objective, to minutes; an RPO of less than a second is already fine, so let's do that. The next step to achieve this is to use MySQL InnoDB ReplicaSet. InnoDB ReplicaSet is based on the native asynchronous replication known by a lot of MySQL DBAs — it uses the binary logs we enabled earlier, the GTIDs we also enabled earlier, and so on. But it's easier, and easier is always better. It has data provisioning included, using MySQL Clone, which is wonderful, because I've seen a lot of people struggle with creating replicas — the issue is always how to provision the data the first time. Now you don't need to take care of that: it's all built in and very convenient. Of course, you can have two or more nodes. It has manual failover, but that is not an issue, because we have minutes to recover, so it's okay. And it has transparent automatic routing using MySQL Router: the router exposes different ports, and depending on which one you connect to, you will do writes or reach a read-only server. The architecture looks like this: MySQL Shell orchestrates everything, MySQL asynchronous replication runs between the instances, and a router sits in front for the applications to connect to the database. So, starting from the single instance we have, if we want to move to InnoDB ReplicaSet, what do we need to do? On the current server, the first thing we do is configure the ReplicaSet instance. We use dba — dba is the global object in the Shell that gives you access to the AdminAPI. As you can see, I'm using the Shell here in JavaScript — JS means JavaScript — but you can use MySQL Shell in Python or in SQL mode as well. So now I run dba.configureReplicaSetInstance().
It will check my instance for anything that's missing — are the binary logs there, is GTID enabled, because this requires GTID. It checks all of that, and it also checks the user you are using to connect. Usually, by default, when you've just installed a server you are using root, and it says: okay, root cannot connect from everywhere — do you want me to change that, or do you want to create a dedicated user? I always recommend creating a dedicated user for this. When the configuration is done, you run dba.createReplicaSet() and assign the output to a variable — I call it rs, but it can be whatever name you want — and you give the ReplicaSet a name; mine is called myreplicaset. Now you just install another MySQL server, with the Shell, and on that machine you run dba.configureReplicaSetInstance() again. It will check the user, ask whether you want to create a new one or not, and make the modifications. And when that's done, on the primary — the single instance we had earlier, where we created the ReplicaSet — you call addInstance(), and it will add the new instance to the ReplicaSet for you. So when you run configureReplicaSetInstance() on the new instance, this is what you get. It asks which user you want to use and what to do with it — like I said, we always recommend creating a new one, so we choose option two and give it a name, clusteradmin in this case, and a password. Then it says: enforce_gtid_consistency is not enabled, because it's not a default in MySQL 8, gtid_mode is not enabled — do you want to enable them? And server_id — every freshly installed MySQL has server_id 1 — do you want to set a unique value? You say yes. Then it says: I will make the changes; do you want me to restart MySQL for you? You say yes, and it's all done. The next step, when you run addInstance(), is that it tells you: now we need to provision the data — what we call the recovery method — and it gives you the options of clone, incremental recovery, or abort. Clone is what you should pick there. Incremental recovery means it will replay the binlogs from the beginning, but almost nobody in production still has the binary logs from the moment the server was created on that same server. So use clone, please. You can see it does that for you, and it works perfectly, fast, and in parallel. What does clone do? It takes a kind of snapshot of all the data from the source and sends it to the replica, via the Shell, with no need to open different ports or anything — it all goes over the MySQL port, very easy. When that's done, it says the instance is added. Then we can run status(), and in the status we can see that our ReplicaSet is available: our first instance, the single MySQL we started with, is read-write, online, and has the primary role — that's the second one on the screen — and the first one on the screen is the new one we added, which is a read-only secondary. Perfect. And it uses asynchronous replication. So now we have a single instance where we can write data and another one where we can read data, and which we can use in case of an issue.
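Since an InnoDB ReplicaSet is built on ordinary asynchronous replication, the channel underneath can also be double-checked with plain SQL, independently of the Shell — a small sketch to run on the new secondary:

    -- Classic view of the replication channel
    SHOW REPLICA STATUS;

    -- Or through Performance Schema
    SELECT channel_name, service_state
      FROM performance_schema.replication_connection_status;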
Then the next component of the full solution I was describing — the Shell is what we used to orchestrate and create all of this, MySQL itself we have done — is the Router. On the machine where the Router runs, you bootstrap it: you run mysqlrouter with the bootstrap option, giving it the credentials of where to connect and the operating system user to use — in this case, mysqlrouter. It bootstraps, generates all the configuration for you, and then you can start it and use it. It's done. It tells you that for the classic protocol you use port 6446 for read-writes and 6447 for read-only, and the same for the X protocol with different ports, of course. Now, in the Shell, once the Router has been bootstrapped, you have the possibility to see which routers were registered: you run listRouters() and you can see them — here we have one router for the ReplicaSet. So now that we have this ReplicaSet running, what do we need to do when an issue happens on the primary node? Like I said, failover is manual, meaning you need to monitor what's going on and know that you have an issue. It's not a big problem, because we have some minutes to recover the service. So your pager tells you: the MySQL primary node is gone, we don't have MySQL, please do something. Here we are connected to the ReplicaSet using the Router; we run status() and it says: you have a primary issue, the status is unavailable, you need to do something. What is that something? It's to force a new primary instance: we migrate the primary role to one of the machines that is still up. In our case we have the original single MySQL and mysql2, so mysql2 needs to take the lead, and that's what we do. As you can see, the failover is done very easily and quickly. The only thing is that it's manual, and then you need to check. We cannot make it automatic, because we are using asynchronous replication: we have no idea about network partitions, quorum and things like that — all of that is maintained and managed by Group Replication, for example. And that is the next level. The next level: my service is very important, I would like to be always up, to have automatic failover, and to never lose data. How can I do that? The objectives have changed: the RTO becomes seconds, because we want it automatic — we don't want to page somebody and wait for them to connect; there's no time for that — and the RPO is zero: we don't want to lose data. Do we have a solution for it? Yes: MySQL InnoDB Cluster. InnoDB Cluster is based on MySQL's native Group Replication, but it's easier — and, like I said earlier, easier is always better. It has data provisioning included — Clone again. You need at least three nodes for this automation, or more, but always an odd number of nodes to avoid split-brain situations, of course. It has automatic failover, like I said. We again use the Router, and we have consistency levels that are configurable per session or globally to avoid stale reads, if that is an issue for you — there are a lot of possibilities in the consistency levels, and I will be glad to answer questions about that later. And this is the architecture of InnoDB Cluster: we have the Router, the servers replicating using Group Replication, and the Shell to orchestrate it all.
So how do we migrate from this single instance — from this ReplicaSet — to InnoDB Cluster? What do we do? First, on the primary of the ReplicaSet, we drop the metadata schema and then we create the cluster. Then, on the second instance, in SQL, we run STOP REPLICA and RESET REPLICA to forget about the old setup. Then, on the primary node, where we have the cluster object we created, we add that previous instance with addInstance(), and it's added. Now, on the third machine — a new machine we just installed — we run configureInstance(), then getCluster() on one of the machines where we have the cluster, and we just add that instance too. It does everything automatically for you, like magic, and it's perfect. Then don't forget the Router we had: we need to bootstrap it again for Group Replication and InnoDB Cluster, force the writes in the configuration, and restart it, and it's up and running. Then we can check the InnoDB Cluster status and see that we have a cluster running with three nodes, all healthy and online, and we can tolerate up to one failure with automatic failover. One node crashes — no problem. Another node crashes — then you need to decide manually which one takes the lead, because of the split-brain situations that could happen. So what's next once we have that? The next step is, for example, to have this InnoDB Cluster plus an asynchronous replica for disaster recovery somewhere else. For that we have asynchronous replication source connection failover, added in 8.0.22: you have a list of sources, and if the current source is no longer available, the replica selects another source from that list. In 8.0.23 that list can be updated automatically, with support for Group Replication. You could also have a delayed MySQL replica somewhere, in case you don't want to do too much point-in-time recovery or you want to find data from a bit in the past — some hours delayed, for example. And you could also have a MySQL InnoDB Cluster with an asynchronous replica of the group somewhere; that is more complicated, but it is possible. And of course more is to come in the future — as you can see, the MySQL teams — HA, Shell, replication, server — are all working to improve this more and more and to give you better and better solutions. So I think it's time for questions; I'll be right there to answer them. Thank you very much, and I'm waiting for your questions.
|
During this session, I will show how we can start from a single instance to MySQL InnoDB Cluster, the automated HA solution for MySQL, passing by the following architecture: - Single MySQL - Source / Asynchronous Replica - InnoDB ReplicaSet - InnoDB Cluster I will cover the limitations of each options and how to migrate from one to the next one with minimal downtime.
|
10.5446/52761 (DOI)
|
Hi, welcome back to the 2021 MySQL and Friends Developer Room at FOSDEM. Sorry we can't all get together. I'm presenting a session now on better user management tools under MySQL 8.0, and for the next 20 or so minutes I'm going to go over some of the stuff you may have missed that is in the release notes. For those of you who do not know me, I'm Dave Stokes, one of two community managers on the community team. If you need to reach me, I'm at Stoker on Twitter; there's my blog and my email address, and by the way, the slides are available on slideshare.net — I will repeat this at the end for the Q&A session that we'll hopefully have. So, as many of you know, user management can be challenging. There are lots of obstacles, not the least of which being the actual users themselves — can't help you with that — but there are a lot of new features that you may have missed that can make life a lot easier. Unfortunately, they're usually hidden in the release notes; they come out in a blog here or there, and unless you have a lot of spare time and can actively keep on top of this, it's easy to miss some of these things, and I'm sure you'll agree that some of them are rather interesting. First one: with 8.0.23, CIDR support for host notation. I've been bitten in the past when we've rewired networks or had people change floors and suddenly they can't get into the database. Well, what's your address? And they give you the street address — kind of a painful thing. Now with 8.0.23 you can actually use CIDR notation when you're specifying the host, as you can see here with the /24 address. By the way, this means you should specify the account host values in the same format as used by your DNS. If you've never managed a DNS, it's its own little set of interesting problems that I won't go into here. Now, how does this all work? When you go to authenticate, the server looks at your username and the host you're coming from, goes through the list of accounts in the mysql.user table, and then double-checks the address you arrived from against the DNS to make sure things match up. By the way, this is done as a string match, even if you're using numeric addresses, so the server does the comparison and wants to see an exact match. Now, some best practices to be aware of. Suppose you're working at example.com and your machine is host1, and that's what your network sees you as. If the DNS returns the name lookup to the server as host1.example.com, use that in the host field; if it just returns host1, use that instead. If you're dealing with numeric values — in this example we're using 51 as the second octet — make sure that's what's being returned, and use it instead of 051, which happens in some situations. So if the DNS uses 51, use 51; if it uses 051, use that. This also works with wildcards: if it's 51 with the last octet as a wildcard, use that, or if it's 051, use that. So once again, be mindful to check what format your DNS uses and use that for your host names and addresses. Dual password support — this came along a while back, in 8.0.14. When I first saw this I thought it was a weird idea, but it makes sense for a lot of folks.
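Going back to the CIDR feature for a moment before digging into dual passwords, here is what that host form looks like written out — a sketch where the account names and network are invented:

    -- 8.0.23 and later: CIDR notation in the host part of an account
    CREATE USER 'app'@'198.51.100.0/24' IDENTIFIED BY 'S3cr3t!Pass';

    -- The older netmask form is still accepted as well
    CREATE USER 'app2'@'198.51.100.0/255.255.255.0' IDENTIFIED BY 'S3cr3t!Pass';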
Imagine you're working in an environment where you have a large number of users, a large number of servers, replication going on, and multiple applications with different needs and uses, and it comes time to do your quarterly or biannual password rotation. It's kind of a pain to shut everything down, do a massive Emacs or Vim edit of all the credentials, and relaunch everything — you often can't do that. With dual passwords you can actually make the change without having to negotiate times to shut things down; you can do it in phases and take it a little easier on yourself. In this example we're going to run ALTER USER on 'dave'@'deardave.xyz' and give it a new password, but tell it RETAIN CURRENT PASSWORD. After you run this, if you go out to the mysql.user table and look at the information, you'll see a password stored as a hash — that is the original password, not the new password. And when all the changes are made, everything's coordinated, everyone's happy the change has been made and all the applications are running, you can DISCARD OLD PASSWORD. Now, some things to note. RETAIN CURRENT PASSWORD keeps the current password as the secondary password — that's what you saw off there — and the new password becomes the primary password. So the new one is the primary, and the old one is stored off as the JSON string you saw earlier. If an account has a secondary password and you change its primary password without specifying RETAIN CURRENT PASSWORD, the secondary password remains unchanged — that could be a gotcha. For ALTER USER, if you change the authentication plugin and also specify RETAIN CURRENT PASSWORD, the statement will fail, and when you change the plugin assigned to an account, the secondary password is discarded — so be careful there. Statements that modify secondary passwords require the following privileges — yes, there are new privileges: APPLICATION_PASSWORD_ADMIN is what allows use of RETAIN CURRENT PASSWORD and DISCARD OLD PASSWORD with ALTER USER and SET PASSWORD. So make sure you have your privileges set correctly. And if an account is to be permitted to manipulate secondary passwords for all accounts, it should be granted the CREATE USER privilege rather than APPLICATION_PASSWORD_ADMIN — so keep APPLICATION_PASSWORD_ADMIN kind of secluded, don't spread it around, and use CREATE USER for that case. Okay, way back in 8.0.18 we added random password generation, which works with the password complexity configuration you can set up in MySQL. In the first example here, we're going to create three users, all identified by a random password, and this is the clear text of those passwords — so if you're automating account generation, this is what the user will have to type to get in. It also works with ALTER USER, so you can alter users to get random passwords, and you can SET PASSWORD TO RANDOM. Now why is this interesting? Well, coming up with random passwords when you don't have a password generator can be kind of messy; this way you make things match the parameters used by your MySQL server. A few notes here too: for each account for which a statement generates a random password, the statement stores the password in the mysql.user system table, hashed appropriately for the account's authentication plugin. The statement also returns the clear text.
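Put together as statements, the rotation workflow and the random-password feature look roughly like this — the first account is the hypothetical one used above, the second is made up:

    -- Phase 1: set a new primary password, keep the old one working
    ALTER USER 'dave'@'deardave.xyz'
      IDENTIFIED BY 'N3w!Passw0rd' RETAIN CURRENT PASSWORD;

    -- Phase 2: once every application has switched over, drop the old one
    ALTER USER 'dave'@'deardave.xyz' DISCARD OLD PASSWORD;

    -- Random password generation; the clear-text value is returned once
    CREATE USER 'reporting'@'%' IDENTIFIED BY RANDOM PASSWORD;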
Remember, this is the clear-text password that the user is going to use to actually authenticate into the system. And by the way, the generated passwords have a length of 20 characters by default. If you want to make it simpler you can go down to five, and if you really want to make people type — and be a real fun person — you can go up to 255 characters. Probably not recommended. Back in 8.0.19 we added failed-login tracking and temporary account locking. In the first example here we create a user, identified by a password, with FAILED_LOGIN_ATTEMPTS 3 — so the fourth time they try to log in and fat-finger that password, the account is locked for a time period of three. That time period is measured in days. You can also use this with ALTER USER: FAILED_LOGIN_ATTEMPTS — here we're giving four chances — and PASSWORD_LOCK_TIME UNBOUNDED. What does unbounded mean? It means that someone — that someone being you — is going to have to go and unlock the account or change the password. The granularity is in days; I personally would prefer something a little shorter, but I can see why they do it in days. The N value is something you can set from 0 to 32,767; a value of zero, of course, disables the option. This counts consecutive login attempts: if they blow it, then come back and get in, and then come back the next morning and fat-finger it again but get in, that will not cause them any problems. However, if they do go past the N value you set, they will be locked out, so be aware. Once again, this is consecutive login failures. And a login failure, I have to warn you, does not include failure to connect for reasons such as an unknown user — they fat-fingered their username — or network issues, where there's just some connectivity problem and the packets don't get back and forth the way they're supposed to. By the way, if you're using dual passwords, as I showed you earlier, either password counts as correct. And if you want to dig into the details on this, in the Information Schema there's a CONNECTION_CONTROL_FAILED_LOGIN_ATTEMPTS table that tracks these events. Okay, warning: personal gripe upcoming. What is that gripe, Dave, and why are you sharing it with us at FOSDEM? Well, when I was born, the name David was very popular, and the diminutive Dave was even more popular. One of my favorite books of all time is by Dr. Seuss, and in his collection, The Sneetches and Other Stories, you'll find the story about Mrs. McCave, who had 23 sons and named them all Dave. Okay, Dave, where are you going with this? Well, if you're administering accounts and someone calls up: hey, I forgot my password, can you reset it? Well, who is this? This is Bill. And you look through your telephone directory and your company has 43 Bills, probably 18 Williams, four or five Billys, and some other variations out there. How do you tell which one is which? Well, fairly recently we added the ability to add comments — also usable via the keyword ATTRIBUTE — to annotate accounts, a bit like the GECOS field on a Unix or Linux account, so you can record who that Bill is. As you can see here, we're creating a user with a comment: Bill Johnson, room 114, extension 1234. Now, if you select the user attributes for that account, you'll actually see it out there, listed as a comment under some metadata.
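The locking and annotation features in statement form look roughly like this — a sketch with invented names and values:

    -- Lock the account for 3 days after 3 consecutive wrong passwords
    CREATE USER 'jane'@'%' IDENTIFIED BY 'S3cr3t!Pass'
      FAILED_LOGIN_ATTEMPTS 3 PASSWORD_LOCK_TIME 3;

    -- Annotate an account so you know which Bill this actually is
    CREATE USER 'bill'@'%' IDENTIFIED BY 'S3cr3t!Pass'
      COMMENT 'Bill Johnson, room 114, extension 1234';

    -- Read the annotation back
    SELECT user, host, user_attributes
      FROM mysql.user WHERE user = 'bill';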
So the next time someone calls up and says, hey, this is Bill Johnson, I forgot my account name, you can give it to him, or ask: which Bill Johnson, the one in room 114? You can now identify him that way. Of course, this also works with ALTER USER. You can use the alternative keyword ATTRIBUTE instead of COMMENT. You can do stuff like here, where we're adding an email address to the account. So previously we had Mary Woo in room 141 with her extension; if we run ATTRIBUTE or COMMENT again, it will append that to the JSON data. Very, very handy; I hope more people take advantage of this. By the way, if you're looking to run a GUI for MySQL to do admin work, I highly recommend MySQL Workbench. It automates a lot of this and saves you a lot of typing, especially if you're doing restrictive privileges. A lot of these you can set by coming in here and hitting a checkbox, instead of doing the command-line work where you're typing Select_priv = 'Y' for 20 different privileges. Here you can do it all with a lovely GUI. So, getting to the time to wrap up here. By the way, if you want to try the MySQL Database Service for free, you can do it today: $300 in credits will let you onto the Oracle MySQL cloud. And since you're watching this at your virtual FOSDEM, you're part of the community now, so please join the MySQL community on Slack. You can find us at mysql.com, on Twitter, on Facebook, and of course LinkedIn. And if you're running a startup, we have a startup program at Oracle that gets you discounts and credits and a whole bunch of other stuff, so please reach out and try it. Also, if you're using the JSON data type, the second edition of MySQL and JSON: A Practical Programming Guide, by yours truly, has come out. If you're using JSON, this is a concise guide to the stuff you need to know to use it practically. The manuals we have are very, very good; I just wanted to give people some clearer examples. It is available on Amazon. And with that, we're going to go to questions and answers. I want to thank you for suffering through this, and hopefully I've given you some tips on things you can use.
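To round off the comment and attribute feature described above, a minimal sketch along these lines should work on MySQL 8.0.21 or later; Bill's details and the email address are of course invented:

  CREATE USER 'bill'@'localhost' IDENTIFIED BY 'a_password'
    COMMENT 'Bill Johnson, room 114, extension 1234';
  -- ATTRIBUTE takes arbitrary JSON and is merged into the same metadata
  ALTER USER 'bill'@'localhost' ATTRIBUTE '{"email": "bill.johnson@example.com"}';
  -- look the metadata up again the next time a "Bill" calls
  SELECT USER, HOST, ATTRIBUTE
    FROM INFORMATION_SCHEMA.USER_ATTRIBUTES
   WHERE USER = 'bill';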
|
MySQL has added many new features to make user account management easier. The server can now generate random passwords that follow the rules you manage. If you have too many 'Dave's or 'Fred's in your organization, you can store GCOS like information in the mysql.user.User_attributes column to directly identify who you are referencing. And you can now have dual passwords on an account. These additions can make account management much easier but only if you know about them!
|
10.5446/52764 (DOI)
|
Good morning to the presentation on my review of the Percona operator from ISQL. I'm Marco Tusa and I'm working in Percona as a technical lead. I don't know going to talk too much about that. Let's go straight to the why this presentation. This presentation is mainly to help you. I would like to prevent you to go through the same pain that I had when I was approaching the product. That was partially my fault and partially because a lot of misunderstanding that are existing and that exist around. But let's go through the few of them. The first thing that happened is that when I approached the Kubernetes word to say, well, okay, let's set up a cluster, the real cluster, Kubernetes cluster and then let's see how it goes from there. And actually it was quite, takes quite a sometimes and to do it properly, not the mini cube, right, but the real one. So I decided, okay, let's keep that part and go straight to GKE, which led me to use the documentation from Percona and from several other sites, Kubernetes specifically to NGKE especially to do the testing. I have to say that using the documentation, reading and not just trying to start up to start the test right away was not enough. Anyhow, following the documentation, following what is recommended to do, there is the first step. The first step is to clone the repository in your machine to enable the GKE platform, the client platform on your local machine and then you can start to play with it. So once you have done that, it is quite straight and easy. You have to create the initial environment. So the root nodes, as you can see, there is a very simple comment to execute here and at the end of it, of which you will have the cluster below the market as GKE. I like three small things that I will discuss later on, but as you can see in my GKE, in my environment, I was having the same subnet, not only the GKE, but also other two instances, now I want to stop right now because of course reason, but otherwise it was running during the tests and they are in the same subnet. But this can be, in this case, it was my testing instance for application, but it could be another cluster where you deploy the application. So it could be another Kubernetes set of nodes. So let's go ahead. Once you have done this, you have to deploy your configuration, your pod. And again, following the instruction, you just do this. So it's simple, it's straight, no problem. You create the namespace, then you set the context, then deploy the difference element. At the end, you do the deploy the CR and the CR is actually doing the action of creating the pods. And if everything goes smooth as it should, you will have this kind of situation. So you will have the first on top are the pods and below the service that they are provided. So you can see you are creating three pods with Azure Proxy and three pods with PXE that are actually the data node. And below you have the entry points. The entry points are as a service are these ones. Cluster IPs. So the next step, once you have that, is to connect. Start to work with your environment and see what is going on. And there the first disappointment from my side. I was using the cluster IP and it was not working. Nothing to do. I mean, no way to access it. This from my application node. So the same subnet and it should be visible. Actually, it should be exported. It was not working. Neither the HAProx replica entry point. 
So yeah, they are in this different subnet as you can see the cluster IP, but they should be routed and visible, but that was not happening. Interestingly, and actually, I did be more confusing. If I was trying to ping directly or if I was trying to access directly the pods, so the data node, I was able to. So I was trying to say, oh, wow, what's going on here? Because I cannot connect to the entry point, but I can connect to the data nodes. This is something that needs to be fixed because for me, this is wrong and should not work like that. Anyhow, the other solution, the solution in this moment in order to utilize the cluster is to have two different methods. One is port forwarding and the other one is use the load balancer. So changing the service for the HAProxy, changing the kind of service providing right now. The port forwarding is a very simple, easy to use command and is mainly SSH forwarding. Immediate is very easy. You can use it from my point of view just for administrative thing because it's an SSH port forwarding. So it is not very performant and is unstable. So you cannot ask thousands of connection to use that one. While setting up the HAProxy to have an internal exposed load balancer IP, that works fantastic and this is going to be my method for the testing. The results mode bug at the moment, but that in here is the bug reference, but it's going to be fixed soon. So after that, I will say, okay, let's do and start the testing. I have run the command, I set up all the users, I set up my schema and then I start to run my script on the application. Given the fact that it is not really the best platform in terms of performance, it takes a while to populate. It was late in the night. I said, okay, let's go, I will discuss, I will see what happened tomorrow morning. And so I launched the script and went to bed. Actually the next morning, I found, this is what I found. If you check, you can see the HAProxy are running, but only two of them. One is not even there. PXC is in crash loop back off, which means it has already 218 restart from the day before and no other data node and of course is unusable. I start to try to understand what was going on and start to try to use the, you know, kubectl logs and investigate the logs for each available block at the moment. Only the PXC is zero and the HAProxy, but, and to be honest, the logging provide were not enough to debug. So I moved to one level up and said, okay, let's start to see at the root node what was going on. And also there, it was very difficult to identify what was going on. At the end, my eye actually captured something and say, oh, what is this? What is this? You can see it, right? You can see it. Here we go. We have that the mount point was 100% full. And the interesting part is that I only had six gigabytes. Of course, six gigabytes for an environment is really nothing. I mean, that is just nothing. You cannot do any test with six gigabytes of data on this. So I say, what, why that? And that was a wrong assumption from my side. So what happened actually is that I stopped and say, okay, I'm a little bit lost here. There is something wrong because I have all the pods restarted and then I have also the node, the root node restarted and debugger not good. And at the end, my takeaway is that the instructions that are provided are just good to say hello word. They are not good for any kind of testing. So this is a very important point. Do not follow the instruction just as a say hello word and then stop. 
Don't pretend to do anything else. And then the other part is that the Kubernetes operator is not allowing you soft tuning. It doesn't allow you to do any kind of scaling. It's just an operator that launched the instances and served the instances. But you have to do the world work about dimensioning. So it's not doing automatically for you. And it's not something that you can expect to have done in a smoothless way. You have to go through a process. So in the end, I'm still dealing with a PXC. So all the problems with PXC about certification, about data transfer, about SST exist, which is already something difficult to debug. So I take a step back and say, okay, I need to start from scratch. I need to dimension correctly my VMs and review the configuration about the pods and estimate the effort. This is the point. That was, whoa, wait a second. So if I have to do that, it's like having a machine, having to do the same kind of work I have to do for the standard instance. There is what's the benefit of doing through the operator? I didn't get it. Meaning, yeah, I can automate. Once I have everything set it up and world dimension automate, but if I cannot, let's say, trust the automation to help me to do the fine tuning. I have to do the calculation by myself. So this is the most important point if you want to do tests, if you want to do any kind of exercise using the Kubernetes operator, and you need to do the calculation. You need to be as precise as you can. So the first thing you have to do is to start from the VM roots. You need to calculate how much CPU and memory and this space you need for each service. You have there and you will have the database, you will have the proxy, you will have the log service and you will have the backup service as well. So you need to consider all of them in order to do the proper calculation. And the other thing is never use preemptible. That should be removed even in this fraction because preemptible is really, what it really means is that Google Cloud can, when you use a GKE, Google Cloud can remove your root nodes anytime because they can reclaim their resources. So just remove it. My bad that I use it, my fault, I should know better, right? We should know better. But my thing is that it should not be in the instruction at all. Anyhow, at the end what I choose is this to go with this kind of setup, which is a different kind of machine. I was using 8 CPU 30 gigabyte RAM and I also choose to use Ubuntu, not the standard image that comes because that will allow me to use Ubuntu with all the things for GKE, for the container, then add additional element for debugging in that image and create a new image that I will eventually use after to spin other nodes that are already pre-configured with all the tools that I want. And then you need to start to debug in depth about the configuration. As I said, you need to do the calculation. In this case, you should also be precise with the mass-quad parameters for the buffers, for the iBlog, for the buffer pool, in order to be sure that what you have configured stays inside that kind of amount of memory. So it's not that you can skip that part. Actually, you need to put more careful, more attention in that part. It's not something that using Kubernetes and the operator will always say, oh, I don't care, let's do it. No, it's the other way around. You have to be even more careful in what you're doing. And that is not only for PXEs, for HAProxy and the Log Collector PMM and backup. 
Anyhow, I am showing here a little bit of what is going on, the kind of settings I used, and I will explain only the PXC one. For instance, I set 24 gigabytes of memory and 4000 milli-CPU as the base CPU request, and then it can go a little bit above that. And finally I also said, give me at least 100 gigabytes of disk, because that is the data dimension that I calculated and identified as useful. After that, you can see that my cluster with these settings has about 7.91 CPU allocatable, of which around 7.2 are allocated, which is good: I'm utilizing the nodes, but not too much and not too little; it's the right utilization. And memory-wise, I'm keeping some buffer to eventually grow and increase what I need in terms of buffers; for instance, if I need to do more sorting, I can increase the join buffer or something like that. Now it's time to do the testing, the testing of the entry points. As I said, I changed my entry point, and you can notice that I have two entry points: the first is the cluster HAProxy service, and the other one is the cluster HAProxy replicas service. And my entry point with the load balancer now has an external IP, which actually uses the subnet where my whole environment is. So I can easily connect to it and start to do the testing, and it works fine. The other problem instead is the entry point for replicas. I think this should be removed. I'm asking my people in Percona to remove it, or at least disable it by default, so that eventually someone can enable it if they really know what they are doing. Because this entry point, even though it goes through HAProxy, doesn't prevent you from writing through it, and it touches all the nodes, while the previous one touches only the primary node that you use for writes; the replicas one is randomly distributed. So you could use this entry point to write everywhere, which is dangerous and which you should never do. So in my opinion, this cluster HAProxy replicas entry point should be removed. And then I started to say, okay, let me see what's going on. We still have PXC behind, so let's see what's going on in terms of stale reads, and whether I get them or not. With a moderate load against that platform, I still had 37% stale reads, which means that if I use the two entry points, even if I know very well what I'm doing, or if I use ProxySQL with read/write splitting, I will get some reads that are not in line with the primary, because that is the nature of the technology. We know that, it's well documented, I've written articles and articles about that. So this is something that you should not want, something you should prevent. And how can we prevent it? Either you use one entry point only, the one that goes to the primary, or you have to tune wsrep_sync_wait. If you tune wsrep_sync_wait, you will have a performance impact, which means maybe you can afford that, maybe not. Maybe your application is stale-read tolerant, maybe not. This is really up to you and how you want to play with it. If you play with ProxySQL, you can have read/write splitting if your application is stale-read tolerant, or if you can accept lower performance. But if not, you should not use wsrep_sync_wait and should use one single entry point, which is probably the best approach. Why? And this is the final part: because the Percona Operator for PXC is not a PXC cluster. The name is misleading. Do not expect it to behave as a standard PXC. That is not what you get.
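As an aside, the wsrep_sync_wait tuning mentioned above is an ordinary Galera system variable; a minimal sketch, assuming you accept the latency cost of the causality check, would be:

  -- ask PXC to wait until a node has applied all pending writesets before
  -- serving reads (value 1 covers SELECT; higher bitmask values cover more statement types)
  SET GLOBAL wsrep_sync_wait = 1;
  -- set it back to 0 to disable the check and accept possible stale reads
  SET GLOBAL wsrep_sync_wait = 0;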
What you do get is that this solution offers you a MySQL service, a MySQL service that should not be seen as a cluster. The cluster is there to provide resiliency to the service. In Kubernetes, in fact, the service is the resilient part; the pods and the nodes can be started and stopped as needed. So you may have PXC nodes going up and down, and you start to feel, oh my God, my cluster is unstable, while the service is still perfectly working. And that is how it should work; that's the crazy thing. So the more you stick to that concept, which is one of the main points here, that you get a MySQL service and not a PXC cluster service, the safer you are. If you instead try to use it as a cluster, you will have a lot of trouble, and this is something we are seeing in real life, from support cases. You still have PXC behind, right? So the nodes can crash, the nodes can require SST and all these kinds of things. If you go beyond a certain limit on the amount of data you have and you need to rebuild nodes, you have a longer time to do SST and a lot more disruption in the service. So what we recommend is to never go for terabytes of data; even thinking of using terabytes of data with this solution is crazy, it is not designed for that. It's more for data sets of a small dimension. The same goes for queries per second: you cannot get millions of queries per second. Yes, if you scale up to a higher platform you can increase, but still, this is a solution that is not designed to serve that kind of volume. And finally, DDL still has a high impact because of the technology behind this, which is PXC: whenever you run a DDL, it needs to be propagated, and that will impact your cluster, which is your service. Finally, between ProxySQL and HAProxy I prefer HAProxy, because ProxySQL requires more resources in order to handle the requests, eventually filtering the queries, and it's not really needed here because, again, we only have one entry point, we should use one entry point, so it's not really useful here. Okay, this was just to give you an overview of my initial pain and then my enlightenment about what this service really is, which again is not a PXC cluster but a MySQL service; you should see it as a standard MySQL service. And finally, a big question mark, because I had to do a lot of work to tune it correctly, so I was wondering whether that is worth it or not. Okay, anyhow, thank you very much for attending this short presentation; I've also added a lot of useful references for you to read. Thank you. Bye.
|
Containers, Kubernetes and virtualization are, as never before, the shining objects of our times. While we are used to implementing them in stateless situations, it becomes more difficult to see them serve properly in stateful solutions like an RDBMS. But after overcoming some personal reluctance, I started to experiment with the Percona Operator for MySQL. With this presentation, I will take you on a short journey through my experience as a DBA using the Percona Operator for MySQL. We will see, on one side, failures, misunderstandings and some frustration; on the other side, a learning process that brought me to a better comprehension of its possible utilization and the best way to achieve it. Finally, my personal considerations.
|
10.5446/52765 (DOI)
|
Hi, my name is Øystein Grøvlen and I work on Alibaba's PolarDB team, on a database which is based on MySQL. Earlier I worked for 10 years in the MySQL optimizer team at Oracle. My talk today is about how we can help the query optimizer find a better execution plan by slightly rewriting the queries. Most of the rewrites I will discuss are related to subqueries, so before we look at specific ways to rewrite queries, I will present some basics about subqueries in SQL. There are basically two types of subqueries: a scalar subquery will return at most one row, while a non-scalar subquery may return multiple rows. The examples here show scalar subqueries in the select list and in the WHERE clause, but scalar subqueries may appear almost everywhere a value can be used, like in GROUP BY, ORDER BY, HAVING and so on. A non-scalar subquery can replace a table in the FROM clause, and is then called a derived table. The derived table must be given a name; in this case the name is DT. Non-scalar subqueries can also be used with special conditional operators like IN, EXISTS and so on. All the examples here show non-correlated subqueries. They are so called because they do not refer to columns of the outer query. They may be executed independently of the main query, and the result may then be used when executing the outer query. Correlated subqueries contain references to the outer query. Generally, a correlated subquery will have to be re-evaluated for each row generated by the outer query, but as we will see, there are often ways to optimize this. A new type of subquery that has been supported since MySQL 8.0.14 is the lateral derived table. This may refer to preceding tables in the FROM list. I do not have time to cover these queries in this presentation. Subqueries may be nested. This slide shows query 20 from the TPC-H benchmark. There are two subqueries here nested within an IN subquery. The first one is another IN subquery, while the second one is a correlated scalar subquery. It is correlated because it refers to columns of the partsupp table of the outer subquery, and it needs to be scalar in order to be able to compare it with the column value of the outer subquery. As already mentioned, the naive approach for executing subqueries is to evaluate the subquery for each row generated by the outer query. We can speed up execution of correlated subqueries by using indexes. For example, for the query shown here, if there is an index on t2.b1, we can, for each row in t1, find all rows that match a1 and compute the average of the corresponding b2 values. There are several techniques that the optimizer uses to improve on this naive approach. Sometimes it is possible to merge the subquery into the outer query. If the subquery is not correlated, we can execute it first and use the resulting value for scalar subqueries. For non-scalar subqueries, the result is stored in a temporary table, and this table is used when executing the outer query. This is called materialization. For IN and EXISTS subqueries, we can use semi-join or anti-join. There is also a special type of query where a value should be greater or smaller than all of the values returned from the subquery. Such queries can be implemented by finding the MIN or MAX column value. In this example here, we want to find rows in t1 where a1 is greater than all values of b2. MySQL will then first find the MAX value of b2 and use this value when executing the outer query.
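To pin down the index example from a moment ago, the spoken description corresponds roughly to a query of this shape; the table and column names t1, a1, t2, b1, b2 are just the generic ones used on the slides:

  -- correlated scalar subquery: re-evaluated for each row of t1
  SELECT a1
    FROM t1
   WHERE a1 > (SELECT AVG(b2) FROM t2 WHERE t2.b1 = t1.a1);

  -- an index on the correlated column lets each re-evaluation become a cheap lookup
  CREATE INDEX idx_t2_b1 ON t2 (b1);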
There are also some new query transformations in 8.0 that we will get back to later. We will start by looking at queries using derived tables, that is, subqueries in the FROM clause. Traditionally, a derived table has been materialized; that is, an internal temporary table is created and the result of the subquery is stored in this table. Then the temporary table is used in place of the subquery when executing the main query. MySQL also creates indexes on the temporary table if it finds that to be useful. Since 5.7, a derived table is handled the same way as a view; in fact, inline view is another name used for a derived table. This means that a derived table may sometimes not be materialized but merged into the outer query. But there are some restrictions on which derived tables may be merged. Among other things, the subquery must not contain aggregation, LIMIT or UNION. For example, this query contains a LIMIT clause, so it cannot be merged and will have to be materialized. Note that the subquery will be executed exactly as written, hence when the subquery does SELECT *, all columns of the orders table will be materialized. In this case, the outer query only needs the total price column, and we see that the execution time is reduced by two thirds if we change the subquery to select only this column. The main saving here is that we reduce the amount of data that needs to be moved around when sorting the many million rows of the orders table. For derived tables that are materialized, you should also make sure to move conditions into the subquery, if possible. In the first example on this slide, we have a condition on a column used in the GROUP BY, and we have moved that into the WHERE clause of the subquery. Similarly, if there is a condition on some aggregated result from the derived table, it can be moved to the HAVING clause. Note that in newer versions of MySQL, this pushdown will happen automatically. If we look at the output from the new EXPLAIN FORMAT=TREE, we see that while in 8.0.21 the filter is first applied when scanning the materialized table, in 8.0.22 the filter is applied while scanning the table in the subquery. This way, there will be less data to aggregate and materialize, and the materialized table will be smaller. For this query, with a table of only 1000 rows, doing the filtering earlier saves about 30%. Now, let's take a closer look at what happens when a derived table is merged, and discuss a case where merging is not optimal. We are using this query where we join an ordinary table with a derived table on a column that is not indexed. This query will find two parts with the same name, where one is made of copper and the other is made of steel. In MySQL 5.5, where merging is not supported, we will join the table with the materialized derived table using block nested loop join, since we cannot use indexes. This will be pretty slow. In MySQL 5.6, the optimizer will see that it is useful to create an index on the materialized table, so joining will be fast, and now it only takes 0.4 seconds. So, by using a derived table, the query will be faster than if both tables were joined directly. And many users take advantage of this and add derived tables in order to get this automatic indexing. However, in MySQL 5.7, merging was introduced, and now we no longer have an index available, and the join will be slower again.
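As an aside, here is a minimal sketch of the condition-pushdown idea discussed above, using made-up TPC-H-like column names rather than the exact query from the slides:

  -- filter applied outside: the derived table aggregates and materializes every customer
  SELECT *
    FROM (SELECT o_custkey, SUM(o_totalprice) AS total
            FROM orders
           GROUP BY o_custkey) AS dt
   WHERE dt.o_custkey < 100;

  -- filter pushed into the subquery: far fewer rows are aggregated and materialized
  SELECT *
    FROM (SELECT o_custkey, SUM(o_totalprice) AS total
            FROM orders
           WHERE o_custkey < 100
           GROUP BY o_custkey) AS dt;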
In fact, the merged query in 5.7 will be even slower than in 5.5, since we will be joining the two full tables instead of the smaller materialized table with only the parts made of steel. So, what can we do to avoid this regression in 5.7? In MySQL 5.7, the only option is to add something to the subquery so that it is not merged. For example, we can add a LIMIT clause with a high enough value not to change the result of the query. In MySQL 8.0 we have an alternative: we can use a NO_MERGE hint to prevent a derived table from being merged. However, after hash join was introduced in 8.0.18, we do not necessarily need indexes to join efficiently. In fact, for this query, hash join is 35% faster than index nested loop. MySQL 8.0 also introduces common table expressions. Instead of using a derived table, we can now declare the subquery in a WITH clause before the query, and replace the derived table with the name of the CTE. One advantage is that readability is better when the main query is not cluttered with multiple subqueries. Especially in this example, it becomes much clearer that we are using the same derived table twice. Another advantage is that a CTE will only be materialized once, while each derived table will be handled as a separate entity. Hence, using a CTE may give better performance than using a derived table. Here is an example of using a CTE to improve performance. In this case, we use query 15 from the TPC-H benchmark. The query uses a view, but as I said, a view and a derived table are handled the same way in MySQL, so the same applies to derived tables as discussed here. The view computes the revenue made from each supplier, and it is used twice in the main query: once in the FROM clause and once in the scalar subquery in the WHERE clause, where it finds the supplier with the highest revenue. To use a CTE instead, we just put the view definition in the WITH clause, and now the view will only be materialized once. Since the materialization is most of the work in this query, the execution time is cut in half compared to when using a view. So next we will look at how we can improve some queries by rewriting scalar subqueries. What we will do here is to use a derived table instead of a scalar subquery, or in this example I will use a CTE since I find that more readable. The original query, called Q1, uses a scalar subquery to compute the average quantity that is ordered for each part, and this is used to compute the total price per year for orders of less than 20% of the average quantity. This means that the subquery of Q1 will compute the average quantity for each part multiple times. So the idea in Q2 is to instead compute the average quantity for each part only once in a CTE or derived table, and then join this with the original query. Then we can do a lookup in this materialized table when running through all the orders. However, note that we are only interested in parts from one specific manufacturer, and Q2 will compute the average quantity for all parts. So the idea in Q3 is that, since we do not actually need any information from the part table in the outer query, we can move the part table, with the filtering, into the CTE. Then we will only compute the average quantity for the interesting parts. So here are the results from running these queries. As we can see, we can save significantly by using a derived table instead of a scalar subquery in this case, and we can save even more if we can put the filtering into the derived table.
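The Q1-to-Q3 rewrite described above has roughly the shape of TPC-H query 17; here is a sketch of the idea, where the brand and container constants are just the usual benchmark values and not necessarily the ones on the slides:

  -- Q1 style: the average quantity is recomputed for every matching lineitem row
  SELECT SUM(l_extendedprice) / 7.0 AS avg_yearly
    FROM lineitem JOIN part ON p_partkey = l_partkey
   WHERE p_brand = 'Brand#23' AND p_container = 'MED BOX'
     AND l_quantity < (SELECT 0.2 * AVG(l_quantity)
                         FROM lineitem li
                        WHERE li.l_partkey = p_partkey);

  -- Q3 style: the average is computed once per interesting part, with the filter inside the CTE
  WITH avg_qty AS (
    SELECT l_partkey, 0.2 * AVG(l_quantity) AS threshold
      FROM lineitem JOIN part ON p_partkey = l_partkey
     WHERE p_brand = 'Brand#23' AND p_container = 'MED BOX'
     GROUP BY l_partkey
  )
  SELECT SUM(l_extendedprice) / 7.0 AS avg_yearly
    FROM lineitem JOIN avg_qty ON lineitem.l_partkey = avg_qty.l_partkey
   WHERE l_quantity < avg_qty.threshold;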
However, using a derived table will not always speed up the query. Here is a slightly different query where the WHERE condition is much more selective. Then computing the average for all parts, as done by Q2, will be much slower than the original query. So unless we are able to put the filtering into the derived table, we had better keep the scalar subquery in this case. MySQL 8.0.21 supports an automatic rewrite of scalar subqueries to derived tables, but only when the scalar subquery is not correlated, so it will not have any effect on the query we discussed here. This feature is also off by default, so you have to set an optimizer switch to turn it on, and there is no cost-based evaluation for this feature, so you may get worse performance with the switch on. So now we move on to subqueries used with IN or EXISTS operations. Traditionally, IN subqueries were often very slow in MySQL, and they gave subqueries in MySQL a bad reputation. However, 5.6 introduced semi-join, which changed this. The idea is to convert the IN operation into an inner join. However, extra care is needed to remove duplicates. For example, the IN subquery at the top of this slide finds all orders which included an item that was shipped on a specific date. We only want to get each order once, even if several items from the same order were shipped on this date. So if we are joining these two tables, we have to remove the duplicate orders. The advantage of converting to an inner join is that now we can use the join optimizer to find the optimal way to access the tables. For the example query with semi-join, we now start with the lineitem table, find which items were shipped on this date, and then look up these orders in the orders table. Without semi-join, we would have to go through all the orders and check if they had any items shipped on this date. For this particular query, this reduces the execution time from 1 minute to 100 milliseconds. As I mentioned, semi-join cannot always be used for IN subqueries. This query, which is query 18 from the TPC-H benchmark, contains a GROUP BY, so semi-join cannot be used. However, remember that semi-join was needed to remove duplicates. In this case, we know that the subquery will not produce any duplicates, since we are grouping on the only column in the select list. In that case, we can just use an ordinary join instead. So we put the subquery in a CTE, and we join it to the rest of the tables using the conditions given by the IN operation. Here we show the two query plans. To the left, we see the original query plan, where the subquery is materialized and we do a lookup into the temporary table for each row in the orders table. On the right-hand side, we see the new plan. Since an ordinary join is now used, the join optimizer may put the materialized table first in the join order, so that we do not need to go through all orders, and the execution time is reduced from 3 to 2 seconds. The last type of subquery we will look at is how we can use window functions to replace scalar subqueries. A few TPC-H queries use the pattern that we see here for query 17: we use a subquery to compute a result, and then the main query compares with this result. You have probably already noticed that you have seen this query before; it is the same one we discussed earlier with the scalar subqueries. So note that these queries access the lineitem table twice: once to get the average, and once to go through all the items.
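Going back for a second to the IN-to-join rewrite above, here is a rough sketch in the spirit of TPC-H query 18; it is simplified compared to the real benchmark query, but shows the idea:

  -- original style: IN subquery with GROUP BY, so semi-join cannot be used
  SELECT c_name, o_orderkey, o_totalprice
    FROM customer JOIN orders ON c_custkey = o_custkey
   WHERE o_orderkey IN (SELECT l_orderkey
                          FROM lineitem
                         GROUP BY l_orderkey
                        HAVING SUM(l_quantity) > 300);

  -- rewritten: the grouped subquery cannot produce duplicates, so a plain join works
  WITH big_orders AS (
    SELECT l_orderkey
      FROM lineitem
     GROUP BY l_orderkey
    HAVING SUM(l_quantity) > 300
  )
  SELECT c_name, o_orderkey, o_totalprice
    FROM big_orders
    JOIN orders   ON o_orderkey = big_orders.l_orderkey
    JOIN customer ON c_custkey = o_custkey;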
Getting back to query 17: if we use a window function, we can actually go through all the items and compute the average at the same time. So by using a window function, here shown inside a CTE, we can access the lineitem table only once in this query. Here I show the results for the TPC-H queries that may be rewritten using a window function. As you see, the mileage varies. For query 17, the execution time is reduced by 80%, from 6.5 to 1.3 seconds. But the savings will depend on how large a part of the total work is saved by avoiding the extra table access. For query 15, there are no savings since, as discussed earlier, we have already avoided the double table access by using a CTE instead of a view. Finally, I will briefly mention that we can rewrite queries by adding optimizer hints. The syntax for optimizer hints is to put the hints in a special comment right after SELECT; the comment must start with a plus sign. There are several hints that affect how subqueries get executed, and we have already discussed the MERGE and NO_MERGE hints. But there are also hints to control which strategy to use for semi-joins and other IN subqueries, and there are new hints to control the condition pushdown feature introduced in 8.0.22. There are also other useful hints, for example to specify the join order or which index to use. But what if you are not able to rewrite your queries because the application cannot be changed? For this purpose, the query rewrite plugin is provided. Here you can specify how incoming queries should be rewritten by adding rewrite rules to a specific table. Special APIs are also provided so you can create your own rewrite plugins that intercept and change the query either before or after parsing. So to sum up, I have summarized the main tips on how to rewrite your queries to run faster. You should only select columns in the derived table that will be used by the outer query, and push conditions into your subqueries. We also showed that the automatic merge of derived tables may not always be optimal, and how common table expressions can be used instead of derived tables. We also showed how rewriting scalar subqueries to either derived tables or window functions can improve performance. Another rule, if you are not using a very recent MySQL version, is to prefer IN over EXISTS. And if semi-join does not apply for your IN subqueries, you can check whether they can be replaced by a derived table. That's all, and I'm open for questions.
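To make the window-function rewrite described in the talk concrete, here is a sketch of query 17 written with a window function; again the constants are just the usual benchmark values, and the single pass over lineitem is the whole point:

  WITH win AS (
    SELECT l_extendedprice, l_quantity,
           0.2 * AVG(l_quantity) OVER (PARTITION BY l_partkey) AS threshold
      FROM lineitem JOIN part ON p_partkey = l_partkey
     WHERE p_brand = 'Brand#23' AND p_container = 'MED BOX'
  )
  SELECT SUM(l_extendedprice) / 7.0 AS avg_yearly
    FROM win
   WHERE l_quantity < threshold;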
|
Two MySQL queries that return the same result may sometimes have totally different query plans. This happens because the query optimizer does not realize that the queries are equivalent. In this presentation, we will discuss how we can rewrite queries to help the optimizer find a better query plan. We will show several examples of how we can transform subqueries to make them more efficient, and we will also discuss how we can identify queries that can become faster if a subquery is replaced by window functions. Finally, we will discuss how MySQL 8.0 can do some of these transformations automatically.
|
10.5446/52766 (DOI)
|
Hello and thank you for tuning in. My name is Shlominoch and this is open source database infrastructure with Vitesse. I'm an engineer at PlanetScale. I'm an author of a bunch of open source projects in the MyScale ecosystem including orchestrator, ghost, reno and others which are relevant to today's discussion on a maintainer for Vitesse. PlanetScale was co-founded by the co-authors of Vitesse and we build interesting things on top of Vitesse. But we don't own Vitesse. Vitesse is a CNCF-graduate project released under the Apache 2 license and has maintainers and contributors from around the community. PlanetScale just happens to be a major contributor to Vitesse. Vitesse is a database-clustering system for horizontal scaling of MyScale but today we're going to present a different aspect of Vitesse, a database infrastructure aspect. On our agenda are four recent developments. They're either in progress or experimental. They're in the paintbrush. You can use them but they're not officially stable at this time. To understand these we're going to need to understand the general architecture of Vitesse so as to realize how Vitesse is able to provide automation of such complex operations. Today we're going to discuss throttling, table lifecycle, online video and high availability in failures. To illustrate the Vitesse architecture basics let's begin with looking at a simple MyScale replication cluster. We have one primary, three reticas. The first thing we're going to do is to attach a VTTablet to each of the database servers. This is a demon typically co-located on the database server which can then both communicate with the demon. It can send queries and receive responses but also it can control the MyScale service so it can stop and start the service. For example to run a backup and a restore etc. In production you will have many clusters and many servers. Again you will have a one-to-one mapping between a tablet and a server. Now on top of everything we're going to put a smart proxy it's called VitiGate. VitiGate impersonates as one giant monolithic database. It speaks the MyScale protocol, it parses your queries, it knows how to route them to the backend servers and to you it looks like a MyScale server. Now in reality we'll have many VitiGates to scale out the read and write capacity but for our purposes we'll only discuss one. And the big question is how does VitiGate route queries, how does it know where to write a query. You will have different schemas, different tables, different datasets on the various MyScale servers. Some of your classes could be shotted, some unshotted. How does VitiGate know where exactly to route the query. It begins with just the query itself, right? The application will use a specific schema and select something that indicates to VitiGate where to get the data from. So if we select order ID and price from orders where customer ID equals 4, the value 4 may indicate to VitiGate that in the commerce schema the value 4 is only located in shard 0. So based on schema, based on information VitiGate could be able to know where to route the queries to. But again how does it know how do all VitiGates know that? Well we have a backend database that's the state of Vites itself. We call it Topo, essentially it's an etcd or zookeeper or console, a distributed KV store, which stores all the meta information about the Vites cluster like what schemas we have, what shards we have, who are the tablets involved, etc. 
So VitiGate reads that information from Topo, it's mostly static information and it's mostly cached within VitiGate. The last component I want to introduce is VitiControlDemon, which is a demon which runs ad hoc operations like reshotting, it also acts as an API server and we'll follow up later on to see how it helps us. The first feature I want to present is throttling, pushback for massive rights to protect your database cluster. It's based on Fresno, a service I call for at GitHub, which is a cooperative throttling service. It's now imported into Vites and specifically into VitiTablet. The idea is to protect against replication lag. So the moment your replication lag grows higher, you begin to push back. We want to keep replication lag low because we want our replicas to serve fresh data, we want our failovers to be happy and we use a mechanism similar to PD-HardBit where Vites injects a timestamp onto the primary MySQL server and we read the value on the replicas and we calculate replication lag on sub-second resolution. The fun thing about Vites and throttling is that Vites is aware of the topology and of the identity of the service, not all servers are created equal. We have, of course, the primary and we have replicas, but some replicas are serving production traffic whereas other replicas could be like backup servers or OLAP servers or be out for maintenance momentarily. The fun thing is that Vites knows exactly at any given time which service serves data or not. The fact of it is controls the streaming of production traffic through to those servers with Vitegate and so Vites is able to say, okay, I'm going to consult the reading replicas for replication lag but not other types and this is overreadable with user command line flag. So the primary tablet of each shard runs a continuous pull for replication lag on the relevant replicas and it serves an HTTP endpoint and the users can call on that endpoint. It returns with 200k when everything is good, meaning there is no lag or small lag on the relevant replicas or other HTTP code when there is lag. It also periodically consults with the topology service to see if there's any changes to the topology. Like, is there a new server, a new replica or maybe a replica which used to serve traffic is now down for maintenance or change role into something else and so Vites is able to say at any given time whether, you know, throttling makes sense right now or not. And the front-end service is basically internal. We don't expect to expose it to the users while it's actually visible but it's really used for some internal mechanisms. The two that I will describe today are table life cycle and all MDDL but we have more ideas for the future like automatically throttling massive updates or massive deletes. What happens if you update the entire row set of a table or try to delete a gazillion rows? The next development is table life cycle and automatic garbage collector for all tables. And this comes to solve the woes and misferchance of dropping tables in production. There's two major problems to dropping tables in production. One is that you may have missed something and some part of the application still needs the table and once you've dropped it, you're in a big problem. The other thing is that actually dropping a table on a busy production system can bring your can lock down your your primary database. That's something we've seen many times in production. 
And so people have come around to similar solutions in different configurations like first of all, how about you don't drop the table but we may need to something else. Application-wise, the table is gone but in case you forgot something, you can always rename it back and make application happy. Next is that dropping a full table with the gazillion rows is risky. So how about purging it first until it's completely empty? This can take a few days and actually in that process, you bring most of that table back into the even the buffer pool memory because your your access in the data in that table. So you're going to need a few more days to get those pages evacuated from the even the buffer pool. And then once the table is empty and it's being evacuated from the buffer pool, you're able to drop it. And there's more variations on this process like handling things different on the replicas like black hole in it, black hole in them or just truncating because the workload on the primary on the replicas is really very different. So this is a very difficult process to track, right? You need to know which state each table is and what's been done, what's not been done, you need to coordinate and schedule stuff. It's not trivial and Vitesse wishes to solve that. So Vitesse defines the following states in the table's lifecycle. Table can be in use, which is just normal, or it can be held, which means it's kept safe intact, like no data is impacted, and you still have time to regret and rename it back into the original name. Or it can be in the purge state where it's actively being purged of rows. And then the evac state where you just wait out for the pages to evacuate out of the buffer pool. Finally, the drop state where the table is imminently going to be dropped, and then the table is gone. And the idea is that all this is based on the table name. So there's a magical table names. Take a look at the first table name here, Vitti hold something, something, something, then some number. If you look closely, that number is actually a timestamp. So a table with this name Vitti hold something, something, means Vitesse is not going to attach this table until 2021, January 31, 9 30am. And then when that time passes, the table gets from the transitions, transitioned into the next phase. Or if the table is Vitti evac, something, something, that means the table remains in evac mode. We wait out for the pages to evacuate out of the buffer pool until February 7, 2021, 7.15am. So what about the purging state? Well, the purging state is more complex because we're actually doing something here. So for purging tables, what we do is that Vitti tablet on the primary server is charged with purging the table data, right? And it only schedules a single table at the time. And it repeatedly purges like 50 rows at the time. And it uses the tablet throttler. So we've introduced tablet throttler earlier. This is the first use case between each such purge of 50 rows, the table garbage collector consults with the tablet throttler to make sure that it's not pushing the cluster health too hard. And so with this idea of encoding the states within the table names, the process becomes stateless. There's no metadata table where we need to track the progress of each of its tables, life cycle. It's enough to take one look at the tables in your schema to know what we need to do next. Vitti has to always score with the relevant tables and will always do the right thing and converge into dropping those tables. Not all users are the same. 
Some users are perfectly happy with just dropping the tables, which is fine. Others say, well, I just want to hold them for a few days and then drop them. And others, like in my experience, have to go through the full flow: we need to hold and purge and evac and drop and then some. So that's the default, and it's controlled by a command line flag. Online DDL is a big development which incorporates both of the previous developments discussed: schema changes made easy. We all know the trouble of running ALTER TABLE in production. It's a big problem. It's either blocking and locking on the primary or on the replicas, and it's exhausting, it's uninterruptible, and it's a no-go. Most people would use pt-online-schema-change or gh-ost to run schema changes, which is great. These are external tools, but they create operational complexity, because you need to install them somewhere, you need to SSH somewhere to run them, you need to tell them what the topology is, so there's this discovery issue: what's the primary, where do you connect. You also need to create an account on your MySQL servers. You want to schedule the migrations, because you typically don't want to run two or more concurrent migrations. You need to be able to throttle somehow, you need to be able to track the migration and interrupt it if needed, et cetera. And that takes the ownership outside of the developers' hands. In small companies, developers will be able to just run this in production, but as your company scales, that's not sustainable, so you need specialized people to do it, and that's not so good. Vitess is in a unique position to solve really all of these problems, because it knows about the topology, it can throttle, it can help you track, discovery is trivial, and it can run external tools on your behalf. So Vitess reduces the problem to this: all you need to do is set the ddl_strategy, which is a new session variable, to gh-ost or pt-online-schema-change, followed by your normal ALTER TABLE. The response is a bit funny. With a standard, direct ALTER TABLE, the statement can take hours and then you get no response, whereas here the query returned immediately with this weird UUID. That's an asynchronous migration, and that's a job tracking ID, which we will use later on. So what goes on here is that the application requests a schema change, but it doesn't talk directly to the database; everything talks to VTGate. Now, VTGate is under no obligation to actually pass this ALTER TABLE statement on to the tablets or the backend MySQL servers. Instead, it says, oh, okay, I was requested to run a gh-ost migration, that's the strategy, so I'm going to just store the request in the topo. Now vtctld periodically checks for such requests, and it sees our ALTER TABLE request and knows, okay, that's the commerce schema, the product table. It figures out which clusters belong to the commerce schema, which are the shards, picks the primary tablets for these shards, and sends them the request. And all the fun happens in the tablet: it receives the request from vtctld and persists it, schedules the migration because there may already be a migration running, prepares the scripts and temporary directories, creates temporary one-off credentials on the MySQL server especially for that migration, and then cleans them up afterwards.
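As a rough sketch of what the developer-facing side of this looks like (syntax details vary between Vitess versions, and the table and column here are only illustrative), the flow through VTGate is roughly:

  -- tell Vitess which online DDL strategy to use for this session
  SET @@ddl_strategy = 'gh-ost';
  -- a normal-looking ALTER, but it returns immediately with a migration UUID
  ALTER TABLE product ADD COLUMN in_stock INT NOT NULL DEFAULT 0;
  -- the UUID can later be used to track, cancel or retry the migration
  -- (depending on the version, via SHOW VITESS_MIGRATIONS or vtctlclient OnlineDDL commands)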
Back to the tablet: it runs gh-ost and pt-online-schema-change with the correct command line arguments, letting them know where the topology is. It uses the tablet throttler: with gh-ost that's pretty much built in, and with pt-online-schema-change we override the replication lag plugin to consult the throttler. It tracks the migration and it cleans up afterwards; for example, it will clean up the triggers for pt-online-schema-change, and whether the migration succeeds or fails, VTTablet will ensure the triggers are removed. But how about the artifact tables, like the old tables? You can't just drop them, right? So no problem, we just feed them into our garbage collection: we rename the old tables into those magical table names, to be collected by the table lifecycle garbage collection mechanism. And so we got this UUID in response to our ALTER TABLE statement, which we can use to track the migrations. For example, we can show the status for a given migration on these four shards: it's still running on three and completed on the fourth one. We can cancel ongoing migrations, we can retry cancelled or failed migrations, et cetera. But online DDL is more than just ALTER TABLE; we can also use it for CREATE and DROP. CREATE is pretty trivial, but how about drops? This is where we integrate our table lifecycle, for the second time in online DDL. When you drop a table with an online DDL strategy, VTGate will transform your DROP statement into a RENAME. It will rename the table into a HOLD name, which puts the table into the hold state, so you still have time to regret, and then eventually it will be picked up, purged and removed. So this puts ownership back into the hands of the developers. There are almost zero dependencies: if you're on Linux amd64, gh-ost comes pre-compiled within Vitess, so you don't really need to do anything to make that happen, you just need to invoke the ALTER statement, and it will retry in case of failovers, et cetera. There's a bunch of goodies with this. Future work is to integrate with ongoing resharding: think about a migration that keeps running while you reshard the database and while you reparent, that is, promote new replicas as primaries, et cetera. The last development I want to discuss today is vtorc, the orchestrator integration. I began writing orchestrator seven years ago and developed it through Outbrain, Booking.com and GitHub, independently of Vitess, and the idea was that when I joined PlanetScale, I would assist in integrating Vitess and orchestrator. Thankfully, that did not happen, and Sugu, co-creator of Vitess, took this upon himself and did the integration. I'm ever so happy, because if I were to do that, I would try to make Vitess work nicely with orchestrator, whereas Sugu made orchestrator work nicely with Vitess, and that's all the difference in the world. There are many aspects to that, but let me just describe one. One of the greatest problems with MySQL replication clusters is that MySQL is not aware of replication clusters. MySQL is only aware of a replica replicating from a primary; it doesn't have the concept of an entity called a replication cluster. In MySQL, there's no name for that, but to you and me, this means the world, right? That's our database cluster. So orchestrator's approach is mostly to observe and then act upon what it perceives to be the expected scenario.
If it observes a cluster and the primary fails, then it can kick on a failover, but it still doesn't know per se that all these servers are necessarily part of the same cluster. It gives some metadata for name and et cetera, but it's still very heuristic. Just as an example, if you have a split-brain scenario and one big cluster splits into two clusters, which of these two is the real cluster? Is there a correct answer? So if orchestrator were the one to run the failover, then heuristically it says, okay, the one which I promoted is now the real cluster, but is that the correct behavior? This is just heuristic. And there's other more complex scenarios where the heuristics are not so clear. But Vitesno's. Vitesno's exactly what it wants, right? Everything is written and so forth. It keeps a state. It knows which cluster it has, which members in the cluster, what's the role of every single server in a cluster. It knows which server it wants as a primary, which is a replica. The old Vitesno orchestrator integration was based on APIs and hooks and scripts. And there was a lot of friction where both were trying to take ownership of the separation, which led to conflicts. And Vite Orr is a spin-off of orchestrator, where the orchestrator's code is actually embedded within Vites. It uses the Vites stop or it uses Vites functionality like locking clusters and locking operations, etc. And the mission of Vites is inherently different from that of orchestrator. Its mission, its goal oriented in its mission is to make the replication clusters to look like Vites expects them to. So a few examples to scenarios that Vite Orr handles that orchestrator doesn't. From left to right, there's a broken replica. Or a stop replication. Or it's just observed and said, well, there's a broken replica, but Vite Orr says, well, why is it broken? According to my records, it should be running, right? So let's kick it in. Or there's a detached replica. Orgustrator wise, it doesn't really know which cluster this detached replica belongs to, but Vites knows and Vite Orr knows and can rewire it back into place. If there's a dual primary, a co-primary scenario, it knows which of the two should be the master. Take a look at the right. There could be a fully functional cluster, right? An orchestrator would perceive it to be completely healthy. There's a primary, there's three replicas, everything seems to be working fine. But maybe Vites says, well, you know that first replica on top, that according to my records, that should be the primary, not the existing primary. So Vite Orr will actually actively promote that to be the new primary, because its goal is to make the cluster match Vites' definition. So that's super cool. There's a lot of work in progress going on with future customizable availability and durability rules, etc. So lots of things to look forward for. So with these developments, we turn Vites not into a shot in-frame work, but into a database infrastructure framework that brings in all the database complexity and solves it in one single place. And with that, I am able to take questions in chat. Thank you so much for tuning in.
|
This session reveals four experimental Vitess developments that automate away complex database operations. With these developments Vitess is able to run its own database infrastructure, transparently to the user, and take control of risky and elaborate situations and operations. We will briefly explain the Vitess architecture and how it supports said control, and discuss the following developments: - Throttling: pushback for massive writes. - Table life cycle: safe and lazy DROP TABLE operations. - Online DDL: automating, scheduling and managing online schema migrations. - HA, failovers and cluster healing via vitess/orchestrator (aka vtorc). Vitess is a CNCF open source database clustering system for horizontal scaling of MySQL.
|
10.5446/52768 (DOI)
|
Hello everyone and welcome to FOSDEM. We're really happy to bring our talk about Icinga to the network monitoring dev room. Before we jump into it, a little bit about ourselves. My name is Feu Mourek. And I'm Julian Brost. My pronouns are they/them and I'm streaming to you from the Icinga HQ. My pronouns are he/his and I am streaming from my home, and we are fancily merged together with a green screen. I've been working with Icinga and been part of Team Icinga for roughly four and a half years now. I did my three-year traineeship as a developer with Icinga, and for the past year I've moved over to a more community-oriented and generally communicative position: I do talks, I supervise our community forum, and I try to be in contact with everyone from our dev team and the community. And while we're talking about the dev team — yeah, I'm from the Icinga core team. I just recently joined the team last year. I came straight from university, where I did my computer science degree, and now I work on making Icinga all nicer, better, fancier. Okay, so the first question we wanted to talk about is: what is Icinga? When we're talking about Icinga, we're basically talking about Icinga 2, as Icinga 1 is an entirely different product. Icinga 1 is based on Nagios — it's a fork of Nagios — while Icinga 2, the current one we are talking about, is a complete rewrite. So it doesn't really have anything to do with the old Nagios fork anymore, except for some similarities. So what are similarities that you can think of, Julian? The most prominent one is probably that all the check plugins we execute are basically the same as Nagios. The interface remained exactly the same: you can use all existing Nagios plugins you have lying around, written in the past or found somewhere on the internet. They all just continue to work with Icinga 2. Yeah, that's right. And on some operating systems the plugin folder is also still called nagios-plugins. Yeah, or Debian even calls the user that Icinga runs as "nagios". So Icinga basically spans all different aspects of monitoring that one can think of. When it comes to what you can monitor, we have both infrastructure and cloud monitoring: you can basically connect your entire landscape, be it sensors or servers or anything that gives a signal. What are some cool, nifty things you can think of right now? Well, in the midst of a pandemic where toilet paper was an issue, there are of course people who have monitored the availability of toilet paper in the local supermarket. So you can do anything if you can get the data. Yeah, I also recall a case of someone monitoring their potted plant with a moisture sensor in the soil, checking whether the plant needs watering. So you can get pretty creative. You can also monitor your home automation and manage it that way. Yeah, so you can monitor anything you can think of. The next competencies we value are metrics, logs and analytics. All the data you collect is stored, and you can also visualize that data properly: you can have nice graphs that show you what your infrastructure looked like in the past, and also generate SLA reports, for example for uptimes. Do you use graphing in your Icinga? Of course. With Icinga you can integrate it into, for example, Grafana.
So if you know Grafana, it's a fancy tool where you can build all these nice dashboards you may have already seen somewhere. You can integrate into that and even embed graphs from there back into Icinga Web 2, our own web interface. I mean, Icinga Web 2 also has its own little graphs — well, doughnut charts in the tactical overview. Yeah, that's the current data. For historical data, you will have to use some integration, like the Grafana one, or Graphite, which is also supported. And the next competencies would be monitoring automation and notifications. You can basically automate all different areas of the monitoring, from the configuration to the checks when they run. And you can send out your notifications in all sorts of different ways: via email, you can have Icinga call you, you can have Slack notifications, PagerDuty, Telegram. There are so many integrations — I can't think of any tools right now that don't have an integration. And if you find something that doesn't have an integration, you can always write it yourself: Icinga is open source and you can just hook into it and write your own stuff. So what else is there about Icinga that you can tell? Icinga also scales very well to the large environments that our enterprise customers have. We support large distributed environments: you can run Icinga across all your infrastructure. We have built-in clustering functionality, which allows you to distribute checks around your data center — for example if you have huge numbers of checks to execute for all your networking equipment and so on — but also across all your servers. In the case of servers, you can also run Icinga directly on them. This is then called an agent setup, where you use Icinga to execute the actual monitoring plugins, or the old Nagios plugins, if you will. And this works on both Linux servers and Windows machines. Yeah. And of course, if you want to have a large cluster, you want it to be reliable, so we have high-availability functionality built right in: your core nodes can form a zone with two nodes in it, and the second one will take over if the first one fails. I think that pretty much answers the question: what is Icinga and what can it do? Pretty much. Okay.
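As an aside on the notification integrations mentioned above: in Icinga 2 a notification ultimately runs as an external command, so a custom integration can be as small as the hedged sketch below, which posts to a chat webhook. The webhook URL and the argument order are assumptions for this example — which values Icinga passes, and in which order, is something you define yourself in your own NotificationCommand.

```python
#!/usr/bin/env python3
"""Tiny notification script: Icinga 2 runs it as an external command.

The argument layout below is an assumption for this sketch; the real
layout is whatever you define in your NotificationCommand object.
"""
import json
import sys
import urllib.request

WEBHOOK_URL = "https://chat.example.org/hooks/REPLACE_ME"  # placeholder


def main() -> int:
    # Assumed argument order: host name, service name, state, plugin output.
    host, service, state, output = sys.argv[1:5]
    payload = {"text": f"[{state}] {host}/{service}: {output}"}
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return 0 if resp.status < 300 else 1


if __name__ == "__main__":
    sys.exit(main())
```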
So at this point, we want to talk about our own experiences with Icinga and what kind of setups we've seen. Yeah. Do you just want to explain your setup, what you run at home? Yeah, well, I personally don't administer my own Icinga — my partner does that, he's the sysadmin. So he has his own Icinga; I've got my own user where I just have some checks checking my Minecraft server, whether it's online or not. But he has our NAS, our router, the different switches we have in our apartment, and all sorts of different servers in his Icinga. That's basically our setup. It's a really small one: we don't have a cluster, we don't have much going on there, it's mostly just one Icinga instance that sends out notifications. So my setup — yeah, I tend to quite over-engineer my network at home. I have all the dynamic routing, and sometimes it breaks, so there are lots of ping checks that just check that everything is still connected as it should be. But apart from that, of course, all the other infrastructure I run — for example some websites, my DNS servers and so on — all of that gets checked and ends up in a folder with mail notifications that I check sometimes. So it doesn't scream at you as much? No. That's good. So you do have your own little cluster? Yeah, I have one node running at my home right behind me and one running on a server in a data center somewhere. They are connected, and I can still be notified when my home internet connection dies. Yeah. Okay, that's cool — if you can receive the notification without internet. Okay. So here at NETWAYS — just a little explanation: NETWAYS is a partner company of Icinga, we share the same offices, and they also use Icinga for their hosting. So we can always have a look at the production environment here with the developers. There's a lot of hosting — I don't know how many machines are in that Icinga, I should actually have checked. I don't know either; it's a lot of them. And in the past I've also seen other, larger environments, even in the scope of a survey that I did with a bunch of colleagues. We traveled around Germany, back when that was still a thing, and checked out different companies and what they do with their Icinga, and some of them had thousands and thousands of hosts, all neatly visualized in maps. For one, there was a maps module that showed where the different data centers are in Germany, and there was also a logical map, so you could see the connections between the data centers — that was really interesting. They also use really cool schedules: I think they have at least 20 different time schedules for the different people who work there, on call and in the office. So it was really interesting to see how different companies use Icinga, because no two Icinga setups look the same. So yeah, in this part we want to talk a little about our strengths and weaknesses. What would you say is our biggest strength? What can Icinga do really well? What Icinga can do really well is scale. It scales to large installations — we already briefly talked about this. Huge enterprise customers are no big deal; it works if you design your cluster in the appropriate way. But this is also possibly a challenge for a small, few-node setup, because you still have to deal with all the distributed features, which are maybe a bit complex for such small setups. Maybe you know that from yours? Yeah, I definitely do. That was kind of the issue: I looked at it and there were so many options. Just as a point of reference, I worked in the web team, so I didn't actually do much in the back end — I was just trying to make things look pretty. So I was completely overwhelmed with all of the networking features and the whole cluster and scalability thing, and I can imagine that's what a lot of others see as well. So how would you explain it: how should you manage a larger-scale Icinga, how would you build it up? Well, you typically start from the core. You have your master zone in the center, which you will probably build as a high-availability setup with two nodes if you want reliable monitoring. And then, as we're talking about a larger setup, you probably have quite a number of satellite zones around it, to spread the load out a bit.
And then you either schedule checks in the individual satellite zones or even perform the checks on agents, which are then connected to the satellite zones, just to take a bit of load off the master. But of course, for a small setup, this is way more than you actually need. So, yeah — you kind of helped me prove my point there as well, because everyone who doesn't know the Icinga terminology will probably have noticed that this got quite involved at a certain point, and people who do know something about it might have taken something from that explanation. Hopefully. So, yeah. I mean, a lot of people like me who are overwhelmed often wash up on the forum, which I think is pretty cool. So I would say the community forum is also a big strength of Icinga. The entire forum is based around the idea that the community helps each other. We host the platform — it's a Discourse forum — and everyone can just go there, ask questions and answer the questions of others. And I think by answering questions that other community members have asked, you learn a lot about Icinga yourself. So I scan it a lot, I check out what people do, make sure that people don't bash each other's heads in. But in general, it's pretty cool that you can help each other there. For me, the community also always shows how flexible Icinga is. Even if I just look at the community forum, there are often things where I go: okay, I've never used this feature. Oh, that exists? That works? You're right — for some fancy integration I've never heard of. Yeah, I guess the integrations are also a big strength we have. In general, there are so many integrations, and you can just write them yourself as well, since Icinga is an open source tool, as we've mentioned before. You can just go on GitHub, check out the code and see where you can hook in. In Icinga Web, for example, you can write your own controllers that get shown in the web interface — you can just add some stuff to the web interface of the Icinga you use. But in the core, you can't actually patch anything into the code, can you? The integrations work differently there. No, most integrations are just external programs that are executed by the core; all checks and all notifications happen this way. Or you can, of course, integrate using the API provided by the core, which is a REST API that you access over HTTPS. So that's one way to integrate with Icinga, too. Yeah. And while we're on strengths and weaknesses, I think the open source aspect could also be seen as a weakness, as we're an open source company and we also have enterprise customers who pay for features. We do custom development — if a company asks us: hey, we would like this feature and we'll pay you this much for your development time. So we also have a hard time prioritizing properly, because we do want to take our community to heart and make sure that the bugs reported by the community and the features requested by the community also go into our next release. I think it's difficult, time-planning-wise, because we have paying customers who want their stuff done, but we also want to honor the community. I think that's a difficult part. Yeah, sure. And often there are some overlaps: if someone from the community finds a bug, of course we don't want our enterprise customers to find the same bug, so if we can fix it before that, it's always great. Okay. Yeah.
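Since checks were just described as external programs using the same interface as Nagios plugins, here is a minimal, hedged sketch of such a plugin in Python. The plugin contract itself is standard (one line of output, optional performance data after a pipe, exit code 0/1/2/3 for OK/WARNING/CRITICAL/UNKNOWN); the thresholds and the mount point below are arbitrary choices for the example.

```python
#!/usr/bin/env python3
"""Minimal Nagios/Icinga-compatible check plugin (sketch)."""
import shutil
import sys

WARN, CRIT = 80.0, 90.0   # arbitrary thresholds (percent used)
PATH = "/"                # arbitrary mount point to check


def main() -> int:
    try:
        usage = shutil.disk_usage(PATH)
    except OSError as exc:
        print(f"UNKNOWN - cannot stat {PATH}: {exc}")
        return 3

    used_pct = 100.0 * (usage.total - usage.free) / usage.total
    # Performance data after the '|' can be graphed by Graphite, Grafana, etc.
    perfdata = f"used={used_pct:.1f}%;{WARN};{CRIT};0;100"

    if used_pct >= CRIT:
        print(f"CRITICAL - {PATH} is {used_pct:.1f}% full | {perfdata}")
        return 2
    if used_pct >= WARN:
        print(f"WARNING - {PATH} is {used_pct:.1f}% full | {perfdata}")
        return 1
    print(f"OK - {PATH} is {used_pct:.1f}% full | {perfdata}")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```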
I think that's also a pretty good point to go over to our next question. The next part is: how does Icinga development work at the moment? You're from the core team, so we're going to mostly talk about the actual Icinga 2 core. My first question would be: what happens if a community member finds a bug — just someone out there finds a bug in Icinga, what happens? The best thing you can do is go to our GitHub project and open an issue there, describing the bug — ideally with lots of information that helps us actually reproduce it and then be able to fix it. Logs, configuration examples, all are happily welcome. And then we will see what we can do; hopefully we can fix it soon, hopefully it's not too bad. And if you're unsure whether what you found is actually a bug or a mistake you made — or something just doesn't work and you're not sure if it's broken or if you're just not figuring out how to do it correctly — you can always go to the community forum and ask: hey, do you also have this? Is that an Icinga problem or is that a me problem? That works as well, I guess. A lot of the questions I see washing up on the forum are exactly that: was that me, or is that broken? Happens to everyone. Yeah. And a lot of the time it's not a bug, but just a typo. It's always a typo. What if an enterprise partner requests a feature — what happens then? We typically receive these requests in private, and we then take a look at them and see if it fits into Icinga and if we can do it in time. We also look at what exactly is requested — maybe we can generalize it a bit to make it a useful feature for even more people, so that everyone can profit from the enhancement. And then it just takes the usual process: all development happens on GitHub, where some developer will take on the feature request and implement it, put it into a pull request, and later it will be reviewed by other team members. And hopefully it will make it into one of the next releases. Okay, so on the topic of releases: there are different kinds of releases, right? Yeah, we of course have our major releases, which bring all the fancy new features but also some changes that might affect your configuration or something like that. And we also support the older release for a longer time, so we have minor releases there which only bring in bug fixes, which will hopefully make your environment more stable. Okay, so bug fix and feature releases are split up, basically. Yes. Okay, cool. And how do you communicate with the community — what does your external communication look like? Most of it happens on GitHub, of course. Sometimes I also take a look at the community forum, but really I spend most of my day on GitHub — apart from my text editor, of course, where I write code. I'm personally a lot in the forum, as I'm kind of the moderator there, trying to figure out what happens, when to push people to go to GitHub and when not to. And we also have a colleague who takes all contact requests from our website, icinga.com — so you can also get in contact with Icinga when you don't necessarily want to talk to the developers on GitHub or in the forum but are looking for help. Those would be the external channels, I think.
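Going back to the REST API that was mentioned earlier as an integration point: the hedged sketch below submits a passive check result over HTTPS. The host names, credentials and certificate path are placeholders; the default port (5665) and the `/v1/actions/process-check-result` endpoint match the Icinga 2 API as documented, but verify them against the documentation for your version.

```python
import requests

API = "https://icinga-master.example.org:5665/v1"
AUTH = ("api-user", "api-password")          # an ApiUser object you created
HEADERS = {"Accept": "application/json"}     # required by the Icinga 2 API

# Submit a passive check result for one service (names are placeholders).
resp = requests.post(
    f"{API}/actions/process-check-result",
    auth=AUTH,
    headers=HEADERS,
    json={
        "type": "Service",
        "filter": 'host.name=="web01" && service.name=="backup"',
        "exit_status": 0,
        "plugin_output": "backup finished in 42s",
    },
    verify="/var/lib/icinga2/certs/ca.crt",  # CA path depends on your setup
)
resp.raise_for_status()
print(resp.json())
```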
How does internal communication work — in the Icinga core team? Well, currently we are mostly at home, so our communication takes place over the internet. We mostly have our internal chat tool, and then we do video conferences using Jitsi. Every morning we just talk about what we will do, and then, if you want to work with someone, you meet in a Jitsi call and maybe share your IDE, or do whatever you want to do — which works surprisingly well. As I only recently joined the company, I've spent most of my time working that way: I was only in the office for two weeks, and from then on I was working from home. Yeah, I still recall the good old times when we would just meet in a conference room — all of the developers sitting together trying to figure out what to do next, and mostly just talking about nonsense for a good 10 minutes before getting started. Yeah. But yeah, I can confirm there are a lot of Jitsi meetings going on — like this one we are doing right now. Yeah, actually, this is also happening in Jitsi. So in this part we want to talk about which direction we would like to go with Icinga, both as a project and as a product. Product meaning: as a user trying to use Icinga; and project meaning: as someone who wants to develop on Icinga — both as an actual Icinga developer, and as someone who just wants to contribute to the code. So I'm going to ask for your opinion there, because you recently joined the Icinga core code base. What are your experiences? How would you describe it? Yeah, there is the developer introduction in the documentation that explains how to get a setup for working on the Icinga code base. This all works, but it's heavily based on what people did some years ago, and in the meantime people have found different ways, so you get input from everywhere on how to do this a bit better. So that could use some updates reflecting the workflow we use nowadays. Of course, what's written there still works fine, but sometimes things can be easier than that. So that could be improved, for getting started with the code base. In general, the code base has a lot of Icinga-specific code in it, because we have our configuration language, which basically is a programming language, and parts of that are spread throughout the whole code base, and you have to get used to this first. I think one could improve a getting-started documentation that points out what this is, how it works, and where to look in the code if you need to — because if you see something you don't recognize: it's C++, but you will find all kinds of custom data types from our config language in there, and how do those work? Some nice introduction would be worth doing. Yeah, I remember doing something on getting started with coding your own modules last year. Icinga Web 2 is written in PHP, and we have a little module for Icinga that you can look at in the web interface which explains how to write your own module. So it's kind of a self-teaching method, which is pretty cool. So that's out there. And what I also remember from joining the Icinga web team back in 2016 is that there were actually quite a lot of good code reviews. My mentor back then was very, very much into doing good code reviews.
Sometimes we would just sit there for two hours and discuss what I could do better. And I still see a lot of code reviews in the pull requests. We do them too, right? Yeah, sure we do. All pull requests that make it into the Icinga code base are reviewed by someone else, and often there are multiple revisions of a pull request before it actually gets merged — hopefully making it all nicer, making it work better and avoiding the introduction of problems. Yeah — and as much as it felt odd to get corrected on every single bit I did in a pull request, it was a really good learning experience for me, and it made me better at coding in general. And I think we do that both for the Icinga devs and for community-submitted pull requests. Yeah, they both basically take the same path. Yeah. So that was a bit in the direction of how to make the code more approachable. The product, Icinga, is also in the process of getting more approachable user-wise. We mentioned earlier, under strengths and weaknesses, that the scalability makes it really complicated to set up your own smaller kind of Icinga environment. And I feel that's a bit of an issue with new users coming in just wanting to try it out — how does this Icinga work? — and they can't get a simple thing running because it's just so complex. On the new icinga.com website we relaunched last year, there's an entire new section dedicated to getting started, and there's a little live demo you can look at. So in general, we're working a lot on how to make Icinga more approachable. I think that's the main direction we're going in. So, last but not least, in the spirit of FOSDEM, we want to tackle: how can someone contribute to the project? How can you help Icinga and do something? I guess the first place to start would be the documentation, because documentation is a thing that you can always improve, in every project, everywhere. Yeah — if you get started with Icinga, struggle with some point in the documentation but figure it out in the end, it's always welcome if you take the time to think: can I write this a bit better than it's currently explained? And then you can contribute that change to our documentation. This also uses the normal GitHub workflow: the documentation is just part of our code repository, and it takes the usual path — open an issue if you just want to mention it, or even better, make a pull request that actually improves the documentation in a way that would have helped you if you had known it before. Submit it and we'll take a look. Documentation changes are quite easy to review, so this should be done fairly quickly. Another thing we get quite often, which is also a good way to start: Icinga ships with check command definitions for all sorts of plugins that exist around the web. So if you found a fancy new plugin that you're using with Icinga and you have written a check command definition for it, you are welcome to contribute that one to the Icinga Template Library, which ships all these definitions. Or if you found a missing parameter for an already existing definition — this is all very easy to get started with and helps a lot of users, so it's also a great place to start. Yeah, and to check out what's already out there, there's the Icinga Exchange — exchange.icinga.com — a collection of plugins, modules and user-written stuff.
It can be themes as well. You can just have a look at it — it usually links to the GitHub repos — and browse through it and see if you can modify something or make your own version of it. So contributing by adding new modules, new check plugins, new themes for Icinga Web: I would also count that as a contribution, because it adds to the universe. Sure. Yeah, and I also mentioned the module helper in the last section, so you can check out our little tutorial on GitHub as well. How else can you contribute? Well, if you have a favorite feature you're missing, you're also welcome to dive into the code base and implement it. It takes the same workflow as, for example, check command definitions or documentation fixes: do it on GitHub and send a pull request. Ideally, open an issue first, so that we know someone is working on it, so that we can discuss whether it makes sense for the project or whether we want to do it a little differently to make it more useful — and in general so that others also know someone is working on it and two people don't spend time doing basically the same thing. And if you don't know where to get started with that, the forum is also a good place to go, because a lot of people who work on Icinga, either at the company or from the community, are on the forum. So if you have any questions on how to code this or that, how to implement something, you can go to the forum and just ask; there will be someone who points you in the right direction. Most likely, yeah. And I guess you could also write a blog post, on your own platform — I think it's also a contribution if you talk about Icinga on social media, if you write blog posts, guides, tutorials and generally spread awareness and teach other people things. So answer questions in the forum, write blog posts, write info articles and publish them — I think that's also a cool way to contribute. And you can also write blog posts on the Icinga blog: if you use the contact form on icinga.com, you can collaborate with me on writing a blog post there. So if you want to spread awareness of your own self-written module or the theme you just made, get in contact with me and we can see if we write a blog post about it together. So I think that pretty much sums it up for now. Looking at the time, we're pretty much there. So thanks for listening — we'll be over in the question and answer section to chat with you about more nonsense. Yeah, see you in a moment. So, time to go live — I don't actually know when we are online. Yes, I think we are. Perfect. Then, yeah, thank you Feu and Julian for your talk. I think now we can answer your questions. There's also the chat room; you can still ask your questions there. The first one, from Alain Ginritz: is there a demo of Icinga 2 we can view somewhere? Yeah, I already mentioned in the talk that we do have a demo. The link to it is rather easy to find on icinga.com — I also pasted it in the chat. You can just have a look around. We installed the most common modules, things you might want to have a look at, and it's filled with demo data, so the things that are red aren't actually broken — it's just flashy, so you can see what it might look like, or hopefully won't look like, in your environment. Blerim, you're responsible for the demo, right? Yeah. So actually, we do have this online demo with some default values that we provide there.
You can see lots of things that are possible with Icinga. Some other things that make Icinga so special are not visible there, because they are pretty hard to visualize — like scaling and flexibility and the different integrations and so on. So the demo system we provide on our website is a very basic demo. It's nice to just see what it looks like and to get a feeling for it, but you cannot see the full power of Icinga, because in the end you will have to use it in your own infrastructure once to get a feeling for what you can actually do with it. But the demo is a good start. I think you won't really see how you configure it, only how it looks — or can look — in the end. Yeah. And maybe something else worth mentioning here is that we're currently working on Docker images for Icinga. There's no final release for them yet, but I think they would still be a good way for someone to get started with Icinga without having to configure a full-blown machine with everything for production use — just to take a look at it. I think the Docker images we're currently building are a good starting point for that, and you can find them on GitHub. There are two repositories, one for Icinga 2 and one for Icinga Web, and we also have images for the upcoming Icinga DB, so you can have a look at the upcoming things as well. All right, perfect. I see there's another question in the chat about the vision around the future of Icinga. I think you're the best one to talk about that. Yes, of course. What I could figure out in the meantime is that we do have some blog posts around this topic; it's sometimes difficult to communicate this correctly. To put it quickly: we are aiming to build modules around Icinga that are able to monitor dedicated technologies, in addition to the traditional infrastructures we are able to monitor right now. The world of Icinga 2 is here to stay, of course. And our idea of the whole Icinga ecosystem is to add more and more monitoring capabilities to Icinga in addition to traditional infrastructure — to be able to monitor Kubernetes systems, clusters, shared resources, storage systems specifically, everything that is not a host and a service — but still in a good way, so that everything is visualized in a correct manner and you receive alerts for the correct things. Of course, this is a vision we are following for the long term; it's not something we're going to achieve within the next couple of months or the next one or two years. We are working on building the foundation to be able to do these things in the future. Basically, that's the idea, and I'm going to paste one or two articles about this topic in the chat, if you're going to stay around for a couple more minutes. All right, thank you. So Fire Mike mentions he really likes the idea of official Docker images, and he also has a question: it's about contributing new service and command templates for your own checks — where can this be done? Yeah, I think I'll answer this. These check command definitions are part of the so-called Icinga Template Library — that's just a whole bunch of config templates that ship with Icinga. They are part of our normal source code repository, so they are on github.com. That's the repository where they are all living.
There's the itl subfolder — the Icinga Template Library — in that repository. You can just fork the repository, make your additions and then send a pull request. Yeah, I'm also linking a few bits from our documentation that deal with how to contribute. I'm currently working on an improved development guideline where we can pull those strings together, but for now we have those sections in the developer documentation. All right, perfect. I think on the official live stream on the FOSDEM website you can't see the links, so if you want to look at them, you can come into the chat.fosdem.org live stream and look at them there. Yeah, maybe we can also add them to the talk description later on — for anyone watching the recording, that probably makes sense. All right. Are there any more questions? I think we have two more minutes or so. I don't think there's anything unanswered so far. All right. So if there are no more questions right now, then again, I would like to thank you for your talk — it was really nice, really informative. And, well, I think I will then hand over to David McKay and his talk about network monitoring with InfluxDB 2 and Telegraf. So again, thank you, and maybe see you next time. Yeah, thanks for having us here. We'll stick around — I'm going to be in the network monitoring chat if anything comes up. So see you in the chat. Bye.
|
Julian and I work for Icinga and want to shed some light on what, how and why we do what we do and also what YOU can do. The format is going to be a bit like a podcast, where we just talk about our topics for a little and try to provide some light entertainment while staying technical. We were thinking of covering the following topics: - What is Icinga - what can Icinga do for me? - Where do our strengths (and weaknesses) lie? - How does our development work at the moment? - Which direction would we like to go with it? - How can someone contribute to the project?
|
10.5446/52770 (DOI)
|
Welcome everyone to our talk, "The three rules to rule them all — the magic key to large-scale network monitoring". We will start with a short introduction about ourselves and about the tool we'll be using, and then my colleague Alex will go into the three rules to rule them all, a practical session where you can see and learn a lot of things, and at the end we'll be available for a Q&A session. So who are we? I'm Martin, a project manager at tribe29; tribe29 is the company, or the team, developing Checkmk. And Alex, with us here, is a Checkmk consultant and one of our sysadmins; he started with Nagios in 1999 and fell in love with Checkmk in 2011 — so almost 10 years now. What is Checkmk? Checkmk is an IT monitoring tool. You can use it for infrastructure and application monitoring, and basically you can monitor almost anything: servers, applications, databases, but also containers, container management platforms and cloud services. As with every monitoring tool, you ingest the data, then the monitoring tool basically says: okay, this is good, this is not good — and then the alerting starts, with notifications in a lot of different ways: via mail, via SMS, via incident management tools like PagerDuty. You can also visualize the data — if you don't want to visualize it within Checkmk, you can also visualize it with Grafana, for example — and connect Checkmk with CMDBs, like I do. What makes Checkmk special is that we have more than 2,000 plugins for what we call plug-and-play monitoring: you can just monitor anything, and you do it in a couple of clicks; you don't need to configure a lot of things, it just goes quite smoothly. You can get Checkmk in our open source edition, the Checkmk Raw Edition, obviously free of charge. It has all these plugins and you get support from a fantastic community. We follow the open core model, so we also have an enterprise edition with a couple more features and better performance, and that's where you get support from us. That's already everything about Checkmk and about us, and I think enough of an introduction — I will hand over to Alex now, who will give the actual talk. Yeah, thank you very much, Martin; also a warm welcome from my side to our talk. As Martin said, we will talk about the three rules to rule them all, which basically means: how to set up Checkmk to do very efficient network monitoring. The goal is full visibility of all our network interfaces — it doesn't matter whether it is an access port, an uplink port to another switch, or a server interface of your ESX server, or whatever — and of course we want to do it as efficiently as possible, because every port really means every port, so we are quickly talking about thousands, tens of thousands or even more interfaces that we want to monitor. So we want to be able to do that in large-scale networks as well. Why do we want to do that? Every port can have errors, and if it has, that definitely needs to be addressed. This is also important for end-user ports, access ports, because if there are errors, the working experience with clients and applications will for sure be affected — everything should be almost error-free. The challenge is: end-user ports are allowed to change state and speed. If someone goes home and takes their laptop with them, for sure the port will go down, and depending on the firmware implementation of the switch, it may also change speed or not.
Some switches keep the last speed in the SNMP output; some others go down to the bare minimum, like 10 megabit half duplex — and they are allowed to do that. Very important ports, as we'll call them for now, like uplinks, are definitely not allowed to do that: if your uplink to the other switch goes down, most probably there is an unwanted issue, and the same goes when a server connection drops from gigabit to 100 megabit — you also want to detect that. This means we need to address these ports differently, because they behave differently, or are allowed to behave differently. One solution for this can be: we use port names to distinguish between access and non-access ports. To keep it simple, we could simply name just our important ports. For example, if you have an uplink to switch number three, call it "uplink switch 3" within your switch, and Checkmk is able to detect that alias. The next challenge: some vendors — even if, for example on Cisco, the command is called "description" — do not write it into the SNMP description table but into the alias table. So we need a way to switch between the tables we use for the port names; we need to be able to switch between those tables. The solution is: we will create two rules to switch between them, and we will also create a so-called host tag, which is a host property where we can say: use this one, or use the other one.
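To illustrate the two SNMP tables in question, here is a hedged Python sketch that shells out to Net-SNMP's snmpwalk and dumps both the ifDescr column and the ifAlias column (the standard IF-MIB OIDs) of a switch. The switch address and community string are placeholders; this only shows where the port names can live, not how Checkmk itself reads them.

```python
import subprocess

SWITCH = "switch01.example.net"   # placeholder
COMMUNITY = "public"

# Standard IF-MIB columns: ifDescr vs. ifAlias (from the ifXTable).
OIDS = {
    "ifDescr": "1.3.6.1.2.1.2.2.1.2",
    "ifAlias": "1.3.6.1.2.1.31.1.1.1.18",
}


def walk(oid: str) -> list[str]:
    """Run snmpwalk (Net-SNMP) and return its raw output lines."""
    out = subprocess.run(
        ["snmpwalk", "-v2c", "-c", COMMUNITY, "-On", SWITCH, oid],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()


for name, oid in OIDS.items():
    print(f"--- {name} ---")
    for line in walk(oid):
        print(line)
```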
So it's demo time — let me quickly switch to our demo. Here I'm going to install the Checkmk Raw Edition; this is the upcoming 2.0 version, currently in beta. After installing, when it's done, I will create a site. Here I'm going to create a site called fosdem, and I will set the admin password directly to fosdem, so that we know how to log in. After creating it, we can already start it. And we're done. A quick word on what a site is: it's a self-contained monitoring environment that can be independent from other sites. As an example, you could create one site called production and another site called test, and run tests in your test site without affecting production — a completely independent monitoring environment: users, plugins, setups, rules, everything separate. Or it can be so-called distributed monitoring: if you have a really large-scale setup, you can scale out and share the monitoring load over various instances. There is virtually no limit on how many Checkmk instances you can run, all managed from one central site. Another reason for distributed monitoring can be geographical: if your main headquarters is in Brussels and another location is in China, it doesn't make sense to monitor the Shanghai location from Brussels because of the latency. So it makes sense to create a site in China, a site in Russia, or wherever you need, and manage them from the central site. That is also possible. Now it's time to switch to our browser and log in to our newly created site. Here it is, and I will log in with the built-in administrator, which is cmkadmin, and the password that I created, which is fosdem in this case. What we see now is not much: an empty site with zero services — but we are going to change that. Because of the screen resolution, I will get rid of this sidebar for now. And now we are going to create our first host. For that, we go to our setup module and to hosts. And we see here that we have the possibility to create a host or a folder. I want to create a folder for now, and I will call it "switch", and on a switch we will have SNMP. So I'm turning on SNMP v2 here, in this case with the credential "public". Actually, you don't need to do this, because if you don't enter anything here, we will use the community "public" by default — but let's do it like this. On the other hand, we don't have the normal Checkmk agent as a data source. The Checkmk agent is a small, lightweight agent intended for the normal operating systems like Linux, Windows, FreeBSD, whatever. Obviously we can't install it on a switch, so I'm turning it off here and saying: no, we don't have an agent. All these settings will be inherited, starting from this folder downwards, to subfolders or hosts within this folder. So we save that. And when we now create a host here, all these settings are inherited: SNMP v2 with the community "public". After this we can already say "save and go to service configuration". This will do an SNMP scan of the device and already detect some services — but it will do it in a way we don't want, so we will optimize it a little bit. Why? Because here we see the interfaces, but listed by interface index numbers. We also see that something is missing: we have one, two, three — four is missing — five is here. Plus, we see the descriptions that have been set within the switch for the ports, and as said in the introduction, we want those as the service names. So I'm not going to save this yet. I will first create the host tag, to be able to switch between the alias and the description table. For that we add a tag group. We will call it internally "if alias description", maybe — maybe without typos. We add two tag choices, which will later result in another dropdown in the host configuration properties, like the ones you've seen for the Checkmk agent or the SNMP settings. The tag ID on top will be the default, so we should set the most common setting here — and the most common setting is that manufacturers nowadays use the alias table. So let's give it the internal tag ID "if alias". And for the second option, we want the description table: "use description". I'm saying save. Now this tag exists, and we have a preview here — this is the dropdown we just created. This tag by itself just exists; it does nothing. So we need to connect it with a rule, so that switching the tag influences which rule is used. For that, we go to our setup module again. And now — because we already have around 2,000 plugins, we have a lot of rules — the best thing to do is to search. We search for "interface", and here we find the rule "Network interface and switch port discovery". This influences how these interfaces are discovered. So we create a rule. The first setting is the appearance of the network interface: we will change it, because we don't want this index number — we want to use the alias, in case the host tag we created is set accordingly. So let's add our tag as a condition for this rule: if the tag is set to "use alias", we will use the alias. We basically don't need to set anything else here; implicitly, we will turn on other things.
We will turn on "match all interfaces", so we will also find the missing interfaces that we didn't find so far, because they were down. This could be specified in much more detail, but as we want to go as deep and as wide as possible with the monitoring, and we want all interfaces, we basically leave it at all interfaces — because all interfaces matter. If they're up and causing errors, that's a problem. If they're up and causing no errors, that's no problem either. If they're down, they're not causing errors. But if they go up one day and start to cause errors, you will detect that. If you're not searching for every interface, you will miss that error, and you don't want that. So those are the settings; now to the conditions. Here I will assign the rule to the main directory, because what I'm setting up will scale for every network interface — it doesn't matter whether it's a network interface in a Windows server, a Linux server, a switch, a firewall, a router, whatever. This will scale over your whole monitoring. So we're going to save that. So far we're only using the alias, but we need to create a second rule for the description. We will simply clone this one, switch it to description, and toggle here to "use description" — and then we have our second rule. We can go back to our host and do another service discovery, and you will see a big difference now: all the interface numbers are gone, and on the ports where we configured names — for example on these access points or this firewall — we see the names. And this is what we can utilize later on to determine: are we talking about an access port, which is allowed to go down or change speed, or are we talking about a port to my firewall, for example — the firewall shouldn't change speed, obviously. So now we can say "monitor undecided services", then we can apply our changes and activate them. Now I will turn on the sidebar again, because you will see we now have one host and 35 services. Let's go to the services. At the moment they are still pending, and while they are still pending, I will create the next rule. I will steal this service description, because this is an unnamed port — and in my naming concept, an unnamed port is an access port. And I now want to create the rule to ignore state and speed changes. So I'm going to our setup module again. I will search again for "interface" — but this time not for "Network interface and switch port discovery", because we did that already. Now we want to influence how the discovered services are dealt with, so we go to the common "Network interfaces and switch ports" rule. Here too we create a rule, again in the main directory. We change the condition so that it only applies to specific ports: it will match all ports that start with "GigabitEthernet", for example. For unconfigured or unnamed ports you will also need to add "10GigabitEthernet", "40GigabitEthernet" and the like on other switches, for example on Cisco and Huawei switches. For HP switches, you need to address them by number: if they don't have a name, they will simply be called interface 001, 002, and so on. So let's create a regular expression here: when this interface description starts with several digits, then we want to match this rule. What do we want to do in this rule? We want to ignore the speed, because this is our access port. We want to ignore the state. And that's basically it.
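Before saving the rule, here is a small hedged sketch — plain Python, not Checkmk code — of the decision logic these rules add up to: ports whose service name is just the default label (digits, or GigabitEthernet-style names) are treated as access ports whose state and speed changes are ignored, while named ports alert on both. The regular expression simply mirrors the naming convention from this demo.

```python
import re

# Ports whose "name" is really just the default label count as access ports.
ACCESS_PORT = re.compile(r"^(?:\d+|(?:10|40)?GigabitEthernet.*)$", re.IGNORECASE)


def is_access_port(service_name: str) -> bool:
    return bool(ACCESS_PORT.match(service_name))


def interface_state(service_name: str, oper_up: bool, speed_changed: bool) -> str:
    """Mimics the rule set: access ports may flap and renegotiate,
    named ports (uplinks, servers, firewalls) may not."""
    if is_access_port(service_name):
        return "OK"        # state and speed changes are ignored
    if not oper_up or speed_changed:
        return "CRIT"      # a named port went down or changed speed
    return "OK"


# Examples matching the demo's naming scheme:
print(interface_state("GigabitEthernet0/14", oper_up=False, speed_changed=True))  # OK
print(interface_state("uplink switch 3", oper_up=False, speed_changed=False))     # CRIT
```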
Here we can give a little comment, so that our colleagues also know why we are setting up this rule. It's not important to explain what you're doing — that's obviously visible in the rule — but it should be documented why you're doing it. You could give a description like: according to our naming scheme, unnamed interfaces like "interface 001" or "GigabitEthernet..." and so on are access ports and are allowed to change state and speed — basically everything about why we are doing this. That's done; we can apply it. And nothing much will change — the change only becomes visible during monitoring. If this named port changes state or speed, it will go red; if this access port changes state or speed, it will stay green. So even if this one is up now, if it changes speed or goes down, it will definitely go red — not the access port, sorry, that one will of course stay green. But all the named ports will go red if they change state or speed. That being said, we have basically set up everything we need. So what we can do now: we go again to our setup module, we go to hosts, we go again to the switch folder and we add another host. Again, because of our rule and folder structure — we are not template-based — all rules and settings are inherited from the top, so we don't need to change anything here. I'm just giving it the DNS name and saying "save and go to service configuration". This is actually quite a big switch, and it's also in a remote location, so I'm expecting a longer runtime here. To not make you bored waiting for this, we will do something else while it's running in the background: we go to hosts and create a new folder. Oops — sorry, that click was too fast. Add subfolder, and we create a folder called "server". On a server, we can install the Checkmk agent; we don't need to use SNMP, and most probably shouldn't, because you will get much more information, with much more performance, from the Checkmk agent. So we don't need to change anything here. We save this folder, we go in and we create a host. Let's add my Proxmox virtualization server here. Save and go to services. And here we go — here are the services. And when you scroll down, you also see that you get the Linux interface names. So this is also an effect of applying the three rules we created to everything, because we put them into the main directory, bound only to this host tag. That's exactly what we want. Here too we can say "monitor undecided services". I will also show you the properties again: here in the custom attributes you will find this "use alias" setting, because I made it the default. In case you get an unwanted result — because you put in an older HP switch or some firewall that requires the description table — simply switch it here to "use description", and you will switch the discovery process to the other SNMP table. That being said, we can activate our changes again. Please look again at the sidebar: three hosts, 94 services. What's still missing is the big switch. So we go again to the switch folder and to our core switch. Now it will load what it discovered before, and you see all the interfaces are there. There are many unnamed interfaces, as you see, but also named interfaces — for example, we see VLAN interfaces here.
We see named firewall ports now, showing where the firewall is connected, and things like that. And here we still see that it's on module A, port 8. So here we can say "monitor undecided", and we can apply our changes. Again, I ask you to take a look at the service numbers: still three hosts, but suddenly more than 200 additional services, because we detected them very efficiently. This is basically everything you need to set up to have a common approach for all network interfaces in your network — it doesn't matter whether you're talking about 50 hosts or 5,000 or 50,000 hosts, you can apply this. Another thing I quickly want to show you — because the track we are in is about network monitoring and discovery, or inventory — is an inventory function. We go to our setup module and search for "inventory". We have a hardware and software inventory module, and we will say "do hardware and software inventory". There are already some predefined rules here, but they are just for Checkmk servers and Docker objects, and we want a common rule for everything. And we simply add here that we also want to do a status data inventory. That's enough — we don't need to apply any other conditions: do it in the main directory, on the global level, for everything. So we save it, we activate our changes, and we can go to hosts again. For example, let's go to the smaller switch; I will reschedule the check here to be a little bit quicker. And we see that we already have a lot of inventory here. Now somebody could ask: why are you doing hardware and software inventory on a switch? Is the software changing? No, of course not. Is the hardware changing? Yeah — on a modular switch that can happen: you can swap in modules or do something else. So let's go to "show hardware and software inventory of this host". Here we can look at the hardware — for example, what kind of device do we have, which physical components does it have, does it have modules? Yeah, not really, because that's a small switch — but that's actually not what I wanted to show you. I wanted to show you this: when I switch the view and open the interface index, we can see all the interfaces that we have, and we can see that there are ports which are down right now, but they are in the state "used", because a couple of days ago that port was up. We have a configurable threshold that we can set before we declare a port as free. Maybe you know the industry standard of patching a cable: you want to patch a cable, you go down to your network rack, you open the rack and you get shocked by all the cable mess you find there. You also find out that the whole switch is already occupied by patch cables, and you think: oh man, this can't be — it seems that colleagues didn't clean up when decommissioning a PC. So the industry standard is: you look for a port which is down, you pray, you plug out the cable, you plug in your cable and you hope that nobody will complain. And of course, what happens the next day is that the manager comes and says: I can't reach Google. This can't happen with the status data we're already getting from the switch. So that's a really nice feature of this hardware inventory on switches. Let's come back to the switch again — of course, I want to show you something. When we go to the switch, you also see that we already get some metrics here, and we can look deeper into the metrics.
Of course, we've only been monitoring the switch for about 10 minutes now, so we don't have a lot of data here, but I wanted to give you a quick overview of what we're doing. We are calculating the bandwidth in and out — minimum, maximum and average. We are recording this, and we have predefined views here for the last four hours; the last 25 hours, so one day plus one hour; one week plus one day; a little bit more than a month; and 400 days, a little bit more than a year. By default, we record 720 days, which can be adjusted. Also, when you switch the view here — of course, we don't have values yet — one could think: what are we storing here? Because we are not saving the data every minute; we are condensing it and keeping the most significant bits, let's say. But for what? For minimum, maximum, average — actually, we do it for all of them, so you can switch here between minimum, maximum and average. Of course, again, because we've only been doing this for five minutes, you don't see much here. As a small feature, you can set a so-called needle here, which marks an exact point in time, so you see the date and the exact values at that time. And this needle is saved: even if you compare it with the fill rate of drive C: on your Windows PC, you will find this needle there as well, so you can compare values between services and even hosts. We are not only recording the bandwidth, but also the error rate. As already said, this error rate should always be almost zero. We have very low thresholds for that by default: 0.01% of the total traffic for warning and 0.1% for critical. My advice is: never, ever change it — except for wireless networks. Wireless networks are by definition shared networks: they are shared with the neighboring access point, with your own access point, but also with atmospheric influences and lightning. There, error rates are pretty normal. But on everything that is connected to a cable, there shouldn't be errors — so don't change it. We also see broadcast and multicast rates for input and output, so you can detect broadcast storms and things like that. We see this also for non-unicast — multicast and broadcast together — versus unicast packets. And we can also monitor the length of the output queue of the switch, in case the switch is queueing too many packets instead of transferring them over the cable.
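The default error-rate thresholds just mentioned (0.01% warning, 0.1% critical of total traffic) boil down to a simple rate calculation between two counter samples. The following is a hedged sketch of that arithmetic, not Checkmk's actual implementation.

```python
def error_rate_state(packets_delta: int, errors_delta: int,
                     warn: float = 0.01, crit: float = 0.1) -> tuple[str, float]:
    """Classify an interface by its error rate between two counter samples.

    warn/crit are percentages of total packets, matching the defaults
    mentioned in the talk (0.01% / 0.1%). Not Checkmk's actual code.
    """
    total = packets_delta + errors_delta
    if total == 0:
        return "OK", 0.0
    rate = 100.0 * errors_delta / total
    if rate >= crit:
        return "CRIT", rate
    if rate >= warn:
        return "WARN", rate
    return "OK", rate


print(error_rate_state(1_000_000, 50))    # ('OK', 0.005)
print(error_rate_state(1_000_000, 5000))  # roughly 0.5% -> CRIT
```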
Okay, of course, this was a bad idea, because our main site is already called FOSDEM. So let's do it like this. That being said, we can apply the changes. And I'm turning on this sidebar again so that you see what's happening here. Suddenly you see we have more hosts online and 4,055 more services. So within one second you connect to an already existing site and get a central overview. And for example, if we look here, we directly see some errors that exist on the other site: for example, here are some services that need to be restarted because they have been patched recently, or some updates have to be installed, and things like that. Last but not least, because we are talking about network monitoring and today there will be two other talks by our colleagues from ntop, including Luca: we partnered up with ntop to integrate ntopng into Checkmk, to be able to see information that is gathered by ntopng in Checkmk, and to enrich Checkmk host information with information that is recorded by ntopng. For that I will switch to one of our test sites, because that one is connected to ntopng. The test site is called Production. Let me switch back to the full screen. And here we could look for the ntopng information. I am struggling a little bit because of the video resolution here; normally this doesn't look so big, but the screen resolution unfortunately has to be very small according to the FOSDEM requirements. Here we click on ntopng hosts and we see that there are hosts which have some ntopng information included. So let's go to the host information and get rid of the sidebar again; again, the screen resolution is actually too low. We see that we have some ntopng information: there are no engaged alerts for this host and no active alerted flow. We see the traffic distribution, sent versus received and so on. We see the ports that ntopng detected in the network flows. We see the protocol breakdown, TCP versus other protocols, and with which peers it is communicating over which ports and protocols. We can go to the traffic overview, so we see which traffic is being created, the TCP flag distribution, the port distribution, which ports are unused, and with which peers it is communicating. For example, here it is communicating with one of our build servers and with our firewall, and we see which application categories. Basically a lot of Checkmk traffic is recorded here. That's because this host is in a network that is distant from our monitoring system and we are recording on the firewall, and that's why we mostly just see Checkmk traffic for this particular host. We see the flows, maybe not so interesting, so let's quickly look at another host. Here, for example, we have a firewall. Let's look at the traffic, and when we look at the flows here we should see different things: we see some unknown protocols, SNMP, some engaged alerts, no other past alerts. So this is the quick overview of our upcoming ntopng integration. Please also take a look at the talks from ntop, from Luca and his colleague. That's it from my side so far. If you want to contact us, you can do that via Twitter, of course you can write to us, and we also have a stand at FOSDEM. And of course we will now join the Q&A session. So thank you very much for your time and I hope to talk to you soon.
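The read-only remote site connection shown above is based on Livestatus, Checkmk's query protocol. A minimal sketch of what pulling service states from a remote site over Livestatus could look like is shown below; the host name, port 6557 and the plain-TCP transport are assumptions for a lab network, and a production connection should be TLS encrypted, as noted in the talk.

```python
# Sketch: query a remote Checkmk site's Livestatus TCP socket for all
# services that are not OK and print them. Address and port are placeholders.
import json
import socket

QUERY = (
    "GET services\n"
    "Columns: host_name description state\n"
    "Filter: state != 0\n"
    "OutputFormat: json\n\n"
)

with socket.create_connection(("remote-site.example.org", 6557), timeout=10) as sock:
    sock.sendall(QUERY.encode("utf-8"))
    sock.shutdown(socket.SHUT_WR)          # signal that the query is complete
    chunks = []
    while data := sock.recv(4096):
        chunks.append(data)

for host, service, state in json.loads(b"".join(chunks)):
    print(f"{host}: {service} -> state {state}")
```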
|
So you want to monitor a large-scale network-- where do you start? This talk will give you some practical tips in strategizing your network monitoring to avoid future problems, detect those you didn’t know are causing performance issues and save your time in configuration. You’ll learn practical tips, summarized into 3 simple rules, coming from the speaker’s 20+ years of experience as a network specialist. Whether you're starting your monitoring from scratch or improving an existing setup, these tips will be useful for you to have a holistic network monitoring. Along with the practical tips, the speaker will show some demos on how you can apply them using Checkmk, an open source IT monitoring software. We will discuss best practices to take advantage of rule-based monitoring in discovering all your network interfaces. After learning the fundamentals, we'll discuss how you can take it further with network flow monitoring using ntopng. This will help you troubleshoot network issues through in-depth network performance monitoring and network flow analyses. By the end of the talk, you’ll get to learn a holistic approach to network monitoring; saving your organization lots of time, as well as budget for hiring external consultants.
|
10.5446/52772 (DOI)
|
I'm going to talk about nDPI, which is a toolkit for traffic analysis, in particular focusing on traffic monitoring and security. Before starting, let's say a few words about me. I am the founder of ntop, an open source project that develops open source network security and visibility tools. What ntop has done in 20 years is basically condensed in this slide. There is ntopng, which is a web-based traffic monitoring and security application; you will hear more about it later in this session. There is nDPI, the focus of this talk, which is a toolkit for deep packet inspection; we will be talking about that in a second. And then we have other tools, including nScrub, which is software for DDoS mitigation, a so-called scrubber, and n2n, another open source peer-to-peer VPN. I'm the author of various open source tools and a lecturer at the Computer Science Department of the University of Pisa in Italy. Let's start with nDPI. What are the needs today? In essence, we need to monitor traffic. But in addition to that, we also have to enforce traffic policies, meaning that we have to make sure that what we expect to see in the network is actually happening. In particular, as you can see in the picture on the right, there are users and there is the internet, and in the middle there is the need for some sort of boxes that monitor the traffic and make sure that what is happening is what we expect. Typical activities include limiting the bandwidth of specific protocols, for instance BitTorrent, or protocols that are not super important, that can flow in a healthy network but only if there is bandwidth available. Another thing that we want in a healthy network is to block malicious communications. And the problem is that over the past few years, traffic moved from clear text, so something you can analyze and inspect, to encrypted traffic. This is an important point. Or we have to prioritize specific protocols, such as, for instance, WhatsApp or Skype, to make sure that the activities that are important for us are really happening, and that activities that are not so important can happen only if there is some bandwidth available. In this talk, and in general, we don't want to decrypt the traffic, because this is something for hackers. Here, we want to maintain privacy. Encryption is important; it is an asset. If somebody has created traffic in an encrypted form, HTTPS or SSH for instance, there is a reason for it, and it should stay like that. So we need to make sure that we enforce the policies and monitor what is happening, but at the same time we respect the users' privacy. This is the goal of this talk and of nDPI. So in essence, in order to do that, we need to create some sort of fingerprint, a way to characterize a network protocol and to recognize it, to assign a certain label to it, so that if in the future we see the same label, it means this is the protocol we have observed before and that we need to somehow monitor and recognize. Also, we want to make sure that specific traffic, in particular over TLS, which is the encrypted transport used by most applications today, is good. We don't want to allow encrypted traffic streams that contain malware; this is a big problem. And another thing we want is to understand how to monitor and measure the traffic happening in our network, to understand if people are using Netflix, watching YouTube videos, doing SSH or other things.
And of course, malware is a problem and therefore we want to analyze it. Analyze doesn't mean dissect the content, because as I said before this is not our goal, but to recognize it. If we are able to recognize this malware, we will drop it or raise an alarm saying there is something happening in our network that is not what we expected, and we have to stop and analyze the host that is producing the traffic. At the beginning of this talk, I used the term DPI, deep packet inspection. What is that? In network traffic, there is the packet header, where you can find, for instance, the IP addresses, the ports and other information of that kind, and then there is the content. The content is what the packet is carrying from one side to the other; this is called the packet payload. Usually, this is a computationally intensive activity, because we have to open the packet after capture and understand what is happening: we have to reconstruct the stream and go deep, that's why it's called deep packet inspection, up to the end of the packet, to understand what is really being exchanged. And in particular, when we do this type of activity, we are concerned about privacy and confidentiality. That's why I said at the beginning that we must make it clear that we don't want to decrypt traffic; this is not our goal. And because encryption is becoming pervasive, we need to extend our DPI toolkit, which years ago was pretty simple, when everything was in clear text, to something more sophisticated. We don't like false positives, meaning that whenever we receive some traffic, we label it with the wrong label, let's say recognizing Facebook as YouTube or vice versa. This is a big problem, because if we mix up two protocols, then if you are doing traffic monitoring, you will mislabel the traffic. And if you are enforcing policies and you say "block YouTube", traffic that is not YouTube but is perceived as YouTube will be blocked, and this is not a good practice. So we must make sure that in the worst case we say "unknown", this is not traffic we know, but when we say YouTube, it really is YouTube. This is very important. So in 2012, we started to work on nDPI. We were looking for a toolkit for deep packet inspection, and before creating our own, we said: let's look around and see what exists. In those years, there was a toolkit called OpenDPI that was open source but no longer maintained. We started to send patches to the creators of this toolkit, but nobody ever replied. So we said, okay, let's make a fork and maintain an open source DPI layer ourselves. This is where nDPI started. And today there are many protocols, over 240. I will talk later about what a protocol is, but just to say on this slide that all the latest protocols, all the major protocols, are detected. And these days, compared to years ago, when everybody who wanted to create a new protocol used TCP and built their protocol on top of it, today there is another layer in the middle: the encryption. So most protocols today are encrypted; they sit on top of TLS or on top of the UDP version of TLS that is called QUIC, which underlies many protocols created over the past few years and is one of the latest protocols being introduced. So there is not a big variety of protocols; the main difference between one protocol and another is what the protocol is carrying.
In cybersecurity, there is a need for deep packet inspection, mostly because we have to analyze encrypted traffic. As I've said, we have to inspect the packet payload, this is very important, and we have to understand what is happening. In a healthy network, we have to do some sort of lightweight monitoring, and many people used to do that with DNS, for instance, or with ICMP: when you see ICMP errors, let's say destination unreachable or things like that, this was a good way of monitoring the health of a network with very little traffic to watch. Unfortunately, today this is more complicated because we have encryption, but we have to live with encryption. We also have things like DGAs, domain generation algorithms, in essence domain names that are generated not by humans but by computers and that are used for malicious purposes. And in order to handle that, I will show you later that nDPI associates a risk with a flow, so that we can say: this flow is special, we want to make sure it is recognized and handled properly. In nDPI, the protocol is divided in two parts, the major and the minor. The one in red is actually the transport; in this case it's QUIC, the protocol I mentioned before, or with Facebook you will see it is DNS. To this we attach a so-called minor, or application, protocol. This is not a protocol in the IETF parlance, because here we are talking about application protocols in the sense people use: when I talk with somebody and I say Facebook, it's clear what that is. So in this case, the protocol is recognized by characterizing the stream according to its usage. For instance, QUIC.YouTube is labelled as such because inside the QUIC stream we recognize a name that ends with youtube.com or something like that. So in nDPI, the protocol name is divided in two parts; this is very important. Now, most protocols today, as I've said, are based on HTTP, probably less and less, and in particular on TLS. So from the network standpoint, everything looks like HTTPS, or TLS if you want. How do we recognize whether a certain protocol is Netflix or YouTube? For sure, we do not want to use IP addresses, or at least this will be the last resort, because in particular for cloud-based protocols this is not a good idea: the IP addresses are not permanent, they are changing, they are not static, and depending on where in the world you are, you will see one IP or another. So in this case, for instance, let's have a look at Netflix. The Netflix protocol is in essence TLS traffic that contains a host name, something like netflix.com, nflxd.com, and so on. This is the way modern encrypted protocols are recognized inside nDPI. And note that here we don't put IP addresses; again, the IP address is used as a last resort. If we look at the detection life cycle, basically what happens is that we receive one packet, the engine looks at it and says, okay, this is UDP, let's start to apply all the possible dissectors supported by nDPI. For instance, for UDP traffic we definitely don't check, let's say, HTTPS or HTTP, because these are TCP-based protocols. And we analyze the packets until one dissector matches the traffic, or up to a certain number of packets; usually the limit is eight. When we have monitored enough packets without being able to find out the protocol, we give up and say: okay, this is unknown traffic.
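To make the hostname-based detection concrete, here is an illustrative sketch of how a TLS or QUIC flow could get its application label from the server name the client asked for. The suffix lists and function names are examples invented for this sketch, not nDPI's actual code or pattern tables.

```python
# Sketch: the major protocol (TLS, QUIC) stays as-is; the application label
# is derived from the SNI / server name, with "unknown app" as the last resort.

APP_SUFFIXES = {
    "Netflix": (".netflix.com",),
    "YouTube": (".youtube.com", ".googlevideo.com"),
    "Facebook": (".facebook.com", ".fbcdn.net"),
}

def classify(major_proto: str, server_name: str) -> str:
    host = server_name.lower()
    for app, suffixes in APP_SUFFIXES.items():
        if any(host.endswith(s) for s in suffixes):
            return f"{major_proto}.{app}"      # e.g. "TLS.Netflix"
    return major_proto                          # no match: keep only the transport

print(classify("TLS", "www.netflix.com"))       # "TLS.Netflix"
print(classify("QUIC", "www.youtube.com"))      # "QUIC.YouTube"
print(classify("TLS", "example.org"))           # plain "TLS"
```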
The performance of nDPI is quite good. This is an example with a single-core E3 server: as you can see, we are able to do about 10 gigabit per core, or 1.8 million packets per second. The performance is quite good in particular because with deep packet inspection we don't analyze the whole stream, but only the first few packets; as I've said, up to 8 or 12, not more than that. Because of that, the performance is affected only at the beginning of the communication. Therefore, if you have many flows you will have to do, let's say, more DPI, but if you have one flow that carries 100 gigabit, you still only have to analyze its first eight packets. That's why it is really fast. Now, for encrypted traffic the story is a little bit more complicated than for clear-text traffic. First of all, because we have to decide whether the TLS stream we are watching is something like HTTPS, so web navigation, or a VPN. This is very important. And how do we do that? We analyze the beginning of the communication and try to extract specific fields that indicate the nature of the communication. This is the way, for instance, we are able to recognize that a TCP connection carrying TLS is actually an OpenVPN communication and not a web communication. We want to recognize malware, and we do that through the concept of data binning. We have so-called traffic bins: in essence, we take time and packet size, I will be talking about that in a second, to recognize a specific fingerprint. And then we look at the content. There is something called entropy that allows you to measure how bytes are distributed in a stream. As you can see, without decoding the traffic, if we look at some content that is exchanged, there are some tiny differences between different types of content. But those differences are very small, so with encrypted traffic we cannot really say whether this is, for instance, an SCP (an SSH copy) or HTTPS; we are not really able to say with 100% certainty whether we are watching a PNG or a text. The values are very close, the differences are very, very low. That's why we have to be somewhat smart in order to do that; I will show that in a second. nDPI, in addition to what I've said, tries to attach a so-called flow risk to a communication, to recognize things that are unexpected and should not happen. A typical example for HTTP is a binary application transfer, which happens very often with malware: when the malware has infected a host, it tries to download, for instance, the binary application that does the dirty job. Or, for instance, when we have a TLS certificate that has an obsolete version or a weak cipher, so not very good in quality. ndpiReader is an application that comes with nDPI that allows you to decode the traffic. This is an example of how it looks; as you can see, it divides the output in two parts. In the first part we have the so-called behaviour: we look at the IP addresses and ports, the score and everything, and the entropy we discussed before. And then we have the fingerprints, a way to uniquely identify the stream with certain properties. There is a fingerprint called JA3 that allows us to recognize the library that generated the stream, for instance OpenSSL or similar TLS libraries.
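The byte-distribution entropy mentioned above can be sketched in a few lines of Python. The snippet counts how many bytes fall into each of the 256 possible values and computes Shannon entropy in bits; the sample inputs are only meant to show that plain text scores lower than uniformly distributed bytes, and the numbers are illustrative rather than nDPI's exact output.

```python
# Minimal sketch of byte-distribution (Shannon) entropy over a payload.
import math
from collections import Counter

def byte_entropy(payload: bytes) -> float:
    if not payload:
        return 0.0
    counts = Counter(payload)                 # one slot per byte value
    total = len(payload)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(byte_entropy(b"GET /index.html HTTP/1.1\r\nHost: example.org\r\n\r\n"))
print(byte_entropy(bytes(range(256)) * 4))    # uniform bytes approach 8.0 bits
```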
And then we have the name of the server, in this case the name of the server that the client wants to connect to. When we have to detect malware, basically we have two options. If the malware is clear text, we have signatures, namely unique fingerprints of that malware. This is complicated: there are many signatures and every day new ones are added. Or we have to look at the behaviour, for instance something similar to what I showed you before with the flow risk. With encrypted traffic, we have two options in nDPI. First of all, the fingerprint, similar for instance to the certificate fingerprint you see when connecting to a website: if you click on the lock in the browser, you will see the hash of the certificate. Or we have the bins of time and length, which I will explain in a second, that allow you to recognize a specific pattern. Or we have the entropy that I showed you before, which speculates about the nature of the content. In essence, the fingerprint is a way to create a unique signature of the initial communication. JA3 is a very interesting algorithm that looks at the TLS client hello. But these fingerprints are not unique: they characterize a communication, but they don't tell you for sure that this is a certain communication, because, as I said before, one library and another library can share the same fingerprint. The byte entropy is interesting; it is measured per bin, so per slot. We have up to 255 slots per flow in nDPI, counting how many bytes fall into a certain slot: if you see the byte value one, you do plus one in that slot, if you see the byte value X, you do plus one in its slot, and so on. With the entropy, we can see that certain protocols show some differences: with DNS traffic the entropy is about four, with TLS about 7.7, with NetFlow it's about four again. So in this case we are not able to say whether it's DNS or NetFlow, because they are very close. But we know for sure that if we see something that looks like a TLS stream but has an entropy of four, this is not TLS, because unfortunately something doesn't add up. Just to show you an example of practical malware analysis: this is a malware called TrickBot. Here I put a link; you can load a pcap file containing it, and as you can see, nDPI helps cybersecurity software to detect it. For instance, here we see a warning about an obsolete TLS protocol version, or we see a binary application transfer here: once the machine has been infected, the binary is transferred. But if I look at the traffic bins, in essence the distribution over the packet sequence, the packet length and the time, you will see that there are different streams that look similar. Here I put some colours to show you how similar they look. So this is a way to fingerprint the traffic, a way to say: okay, this one looks like that one. If I write it down, this will be my signature, and in case I see other traffic with the same signature, it looks like TrickBot, so I can raise an alert. This is all for today; this is basically nDPI. I invite you to look at the source code and compile it; the software is on github.com/ntop/nDPI. Thank you very much for listening.
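The "traffic bins" fingerprinting idea described above can be illustrated with a small sketch: reduce a flow to a histogram of packet lengths, normalise it, and compare two flows with a simple distance. The bin edges, the similarity measure and the example values are made up for illustration and are not nDPI's actual binning.

```python
# Sketch: packet-length bins as a coarse flow fingerprint.
BIN_EDGES = [64, 128, 256, 512, 1024, 1500]    # length buckets in bytes (assumed)

def length_bins(packet_lengths):
    bins = [0] * (len(BIN_EDGES) + 1)
    for length in packet_lengths:
        idx = sum(length > edge for edge in BIN_EDGES)
        bins[idx] += 1
    total = sum(bins) or 1
    return [b / total for b in bins]            # normalised histogram

def similarity(bins_a, bins_b):
    # 1.0 means identical distributions, 0.0 means completely different
    return 1.0 - 0.5 * sum(abs(a - b) for a, b in zip(bins_a, bins_b))

flow_a = length_bins([70, 1500, 1500, 90, 1500, 300])
flow_b = length_bins([66, 1480, 1500, 80, 1420, 310])
print(f"similarity: {similarity(flow_a, flow_b):.2f}")   # close to 1: same "shape"
```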
|
As most of modern traffic is now encrypted, deep packet inspection is becoming a key component for providing visibility in network traffic. nDPI is an open source toolkit able to detect application protocols both in plain text and encrypted traffic, extract metadata information, and detect relevant cybersecurity information. This talk shows how nDPI can be used in real life to monitor network traffic, report key information metrics and detect malicious communications. The pervasive use of encrypted protocols and new communication paradigms based on mobile and home IoT devices has obsoleted traffic analysis techniques that relied on clear text analysis. DPI (Deep Packet Inspection) is a key component to provide network visibility on network traffic. nDPI is an open source toolkit designed to detect application protocols on both plain and encrypted traffic. it is also able to extract relevant metadata information including metrics on encrypted traffic for easy classification and accounting. This talk introduces nDPI, demonstrate how to use it in real life examples, and it presents how it can be effectively used not only for traffic monitoring but also in cybersecurity being it able to detect unusual traffic behaviour and security issues.
|
10.5446/52774 (DOI)
|
In this talk I will present how ntopng unifies active and passive monitoring. After that, I talk about how ntopng enhances active monitoring thanks to active discovery, and I end the presentation with a brief demo of ntopng. So first of all, who am I? I am Matteo Biscosi, as you can see; I graduated from the University of Pisa last year, in 2020, and currently I'm working at ntop as a software engineer. Let's start. So, what is ntopng? ntopng is a monitoring tool, mainly a passive monitoring tool, that can analyze both packets and flows. It has a particular characteristic as a monitoring tool, because ntopng is based on the DPI traffic analysis technique. DPI stands for Deep Packet Inspection, and deep packet inspection is a traffic analysis technique based on the principle of analyzing packets up to level 7 of the ISO/OSI model, so up to the application level. In this way, we can understand a lot more information about packets, flows and so on: for example, we can understand if we are talking about YouTube traffic, or Spotify traffic, and so on. Obviously, we cannot extract all the information from the packets, because some of them are encrypted, so we can get just some information from them. ntopng doesn't stand alone: it can stand alone by itself, but it can also communicate with other programs and software. For example, it can talk with Suricata, syslog and SNMP and get information from them, and ntopng can also export that information to external services like Discord, Logstash, Telegram and so on. ntopng has other features as well: for example, it is embedded with a high-capacity flow database, as you saw before, it can export data to external services and it can get information from external services. And now, let's talk about active and passive monitoring in ntopng. Active monitoring has been around for a long, long time, as we all know, and the problem, I think, is that people usually think of active monitoring just as doing pings and service checks. But active monitoring shows its strong points if we add it to passive monitoring. Why? Because passive monitoring has a limit, and this limit is that its scope is limited to the traffic that we can observe. For example, we cannot understand if a service isn't working properly with passive monitoring alone; this is, in fact, the job of active monitoring. In this way, ntopng uses active monitoring to cover the blind spots of passive monitoring. So ntopng's monitoring feature set is based on both the active and the passive side. The passive side captures the traffic and reports the statistics, and thanks to it we can infer some services from the traffic: for example, we can understand if a host is playing music just by checking its traffic, in a specific case its Spotify or YouTube traffic. At the same time, the other part of the monitoring set is the active one: ntopng complements passive monitoring by analyzing the blind spots of the passive side. In this way, ntopng provides a rich network view that could not be obtained from just one of them. ntopng does a total of five types of active monitoring: we have continuous ICMP, HTTP and HTTPS, ICMP, Speedtest and throughput. Now, we do both ICMP and continuous ICMP. Why? The reason is simple: continuous ICMP can detect if a service goes down for a brief period of time.
This usually isn't noticeable if we use ICMP alone. Then we have the Speedtest to understand the speed of the network, whether the network is overloaded or has some problem. Now, another feature of ntopng is that it can also monitor other infrastructures. Just think of this example: we have a network with different ntopng instances. This feature solves that problem, because in this way one ntopng instance can monitor other ntopng instances. We can understand if that specific network is reachable, using the HTTP active monitoring, and how the network is doing; for example, we can understand if it is overloaded and so on, through the throughput active monitoring. Thanks to passive plus active monitoring, we can extend the active monitoring with realistic discovery of hosts, host types, software services and so on, and we can have a measured view of the network. But here we have a problem, and the problem is simply that plain active monitoring cannot find the hidden hosts in the network. This problem is solved by active discovery. The idea of active discovery is to discover all the devices inside the network, even the silent ones, and collect different information about them. For example, through active discovery we can understand the device type, the operating system and the manufacturer of a device. ntopng does this using different protocols: for example, we use ARP to discover all the devices inside the network, and we use mDNS to understand the services offered by them. After that, we can also use SNMP to find, for example, the physical location of a device inside the network. And we use other types of info: for example, we use the MAC address to understand the device type, or the operating system in some cases, and so on. In this way, we provide quite a bit of information about the network, the hosts and so on, and not only basic up and down: we also provide time-series analysis of round-trip times, and whenever a threshold is crossed, we trigger an alert. In this way, a user can understand if there is some problem inside the network. Now, other tools can benefit from ntopng's data and the knowledge it gathers: for example, we have the Icinga and Checkmk open source tools. So now, let's see a brief demo of ntopng and how it practically works. All right, let's go. This is the main dashboard of ntopng. Now, let's see the network discovery of ntopng first, and after that we take a look at the active monitoring. So, let's discover all the hidden hosts inside my network. Here we are using, as I said before, ARP traffic, mDNS and so on. For example, here we discover the hosts in the network. ntopng, as I said before, shows the device type, which we can determine by analyzing the MAC address, the name of the host, the manufacturer (in this case it cannot determine the manufacturer) and other information. Now, let's look at the active monitoring instead. For example, let's add a new active monitor, say for the gateway, the router. It should be this one. And in the meantime, let's look at a chart of another active monitor I was already running. As we can see on the screen, we have different types of charts and different information, for example the average and the 95th percentile. Let's look at, for example, the last 30 minutes.
So, we can have a view of the active monitoring. Now, let's return to the check we added, and as you can see, ntopng did a check and this one is red, so the measurement didn't go well; as we can see, it crossed the threshold. All right, so this was a brief demo of ntopng active monitoring and network discovery, active discovery in the case of ntopng. So, thanks everyone and see you next time.
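The round-trip-time thresholds and alerts described above boil down to something like the following sketch of a continuous ICMP check. It shells out to the system ping command (a Linux iputils ping is assumed for the output parsing), and the gateway address and the 50 ms threshold are placeholders, not ntopng's implementation.

```python
# Sketch: continuous ICMP RTT check with a simple threshold alert.
import re
import subprocess
import time

def ping_rtt_ms(host: str):
    proc = subprocess.run(["ping", "-c", "1", "-W", "1", host],
                          capture_output=True, text=True)
    if proc.returncode != 0:
        return None                               # host did not answer
    match = re.search(r"time=([\d.]+) ms", proc.stdout)
    return float(match.group(1)) if match else None

THRESHOLD_MS = 50.0
while True:
    rtt = ping_rtt_ms("192.168.1.1")              # gateway address is an example
    if rtt is None:
        print("ALERT: host unreachable")
    elif rtt > THRESHOLD_MS:
        print(f"ALERT: RTT {rtt:.1f} ms crossed the {THRESHOLD_MS} ms threshold")
    else:
        print(f"OK: RTT {rtt:.1f} ms")
    time.sleep(1)
```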
|
This talk shows how ntopng, an open source monitoring application, can be profitably used to discover, characterise, classify and enforce network traffic policies. ntopng is an open-source passive traffic monitoring tool based on packet capture. These monitoring capabilities have been complemented with active device discovery for the purpose of characterising them (e.g. is this device a tablet, a TV or a PC?) and thus apply monitoring decisions (e.g. a printer should not do any bittorrent traffic, a router should not print). This talk covers passive and active techniques implemented by ntopng for discovering and classifying network devices. As well it shows how this information is used to create monitoring reports and enforce security policies. Finally the talk shows how selected devices can be both actively and passively monitored in order to implement a comprehensive monitoring solution.
|
10.5446/52776 (DOI)
|
Hello everyone, my name is Saustin Acheo and I work at Percona in the support team as a support engineer. We are going to be talking about how we can use Percona Monitoring and Management (PMM) to monitor any database we want. Briefly, the agenda is this: we are first going to talk about PMM's architecture, just so we can understand some base terminology when we say things like PMM admin, PMM clients or external exporters. This gives us all the same base knowledge and makes sure we understand each other. Then we are going to talk about the external exporters functionality in particular, which is the one that will allow us to monitor any database that is not already supported by PMM. And then, after we have the metrics or the data, we'll see how we can make the most out of it by visualizing it via dashboards. Finally, we are briefly going to cover some other, let's say tangentially related topics like custom collectors, which in some sense also give us this extra functionality that we need to monitor any database. So, what is PMM's architecture like? It follows a client-server architecture in which the client is the part that has the exporters that connect to the database we want to monitor. We make sure that the server receives this raw data, comprised of the metrics we get out of the database, for the server to store and manage, so that end users can then visualize it via the dashboards. Up to version 2.12 we used Prometheus for managing the metrics, but we are now switching in favor of VictoriaMetrics. In short, it is because VictoriaMetrics supports not only the pull kind of architecture that Prometheus has, but can also push. So what we can do is actively send metrics from the client to the server; before, it used to be more passive, let's say, with the server being the one in charge of constantly pulling data from the clients. This has a lot of benefits, but we are not going to go into too much depth here; there is a blog post on percona.com that covers this in more detail. You can search for "VictoriaMetrics PMM" and you will surely find it. Then, on the PMM server side, the main components that we need to be aware of, at least for the purpose of this talk, are Grafana, which is the one we will use for visualizing the metrics, and then VictoriaMetrics, ClickHouse and PostgreSQL, which are responsible for the metrics, the data related to query analytics, and storing configuration, respectively. The important one here is of course VictoriaMetrics, but it's also good to know that ClickHouse and PostgreSQL exist within the PMM server as well. So what does PMM offer out of the box? It has support for four main, let's say, kinds of technologies, because Linux is always also there, although that's software rather than a database. So Linux, MySQL, MongoDB and PostgreSQL would be the four main branches. Of course, we then also have Percona XtraDB Cluster, ProxySQL, Percona Server for MySQL, etc. The point of this slide is: if you want to monitor anything that is not here, you will need to do it via the external exporter functionality that we are going to discuss here. Lastly, regarding PMM: if you want to give it a test and you don't necessarily want to install it, we do have an online demo available that you can check.
It's a nice way of getting the feeling of what PMM can offer in terms of the, let's say, native support for, as we were discussing before, for Linux, MySQL, etc. So if you're in doubt, you can just click on that link and it will take you to the home dashboard from which you will have access to all PMM functionality. Okay, great. So now that we know a bit about the versatile view of what PMM can offer, let's dive into how we can get the data, the metrics, the data we need from other databases that are not supported natively by PMM. So there are two main, let's say, scraping models for this. And for the purposes of this talk, we can just call them external and external serverless. But know that the serverless keyword here has nothing to do with what you are probably thinking about, like AWS serverless, for instance. This serverless means just that the PMM clients doesn't necessarily have to be executing in the same node as the databases or service we are going to get metrics from. So you can see clearly, the first example here is the PMM client is living in the same node that we want to take metrics from. And in the second one, we have a PMM client running in one node that is going to get metrics from the other node that is surrounding yellow here and named as external service. So what do we get with the external functionality? Okay, so as we were discussing before, the client on the server, in this case, the client, which we are calling PMM agent now. So the client will be running inside the node that has the database and the metrics exporters running. Additionally, in this case, we will get the side benefit, let's say, of having the operating system already monitored when we install the PMM client. So we will get code for free Linux metrics, basically. So we will use the PMM admin add external commands to do it, to do that. And then we will need some other flags that we are going to see now in the following slides. Then we have the external serverless, which as mentioned before, we will need a node for running the client that has access, of course, to the node that is running the database and the exporter. In this case, unless there is an explicit exporter for OS metrics, we won't have this added benefit because, of course, the PMM client is running in another node. For this, we will use the PMM admin add external serverless commands. And again, we are going to see the exact syntax in some of the slides. But you can always tap dash dash help. And in PMM, we have, it helps. It does a good job in what it should do. It can guide you very well just by tapping dash dash help within all the sub commands. You can already get the functionality that you need from the tool. So the basic steps for, let's say we are in the first case where we have the external exporter running within the same database server, the steps that we would need would be, first, of course, install PMM client. Then configure it and by configure, we mean that we make the PMM server note that we are starting a new PMM client. So the only thing that we do is make sure that PMM server knows that there is a new client that is going to do to export metrics. Additionally, as soon as we configure the PMM client, we will have this added benefit of having the operating system already being monitored. So it's going to start a PMM agent that will start scraping metrics from the OS. But then, of course, what we are here for mainly is for the database metrics and how do we do that? We need to explicitly add the external exporter. 
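For illustration, such an external exporter could be as small as the sketch below, using the Python prometheus_client library. The metric name, the dummy value and the port 7070 (which matches the port used in the example that follows) are assumptions; a real Cassandra setup would normally use an existing JMX/Cassandra exporter rather than a hand-written one.

```python
# Hypothetical custom exporter that pmm-admin could register as an external
# service: it serves one gauge in Prometheus format on port 7070.
import random
import time

from prometheus_client import Gauge, start_http_server

CONNECTED_CLIENTS = Gauge(
    "cassandra_connected_native_clients",
    "Connected native-protocol clients (dummy value in this sketch)",
)

if __name__ == "__main__":
    start_http_server(7070)        # metrics at http://localhost:7070/metrics
    while True:
        CONNECTED_CLIENTS.set(random.randint(0, 25))   # stand-in for a real probe
        time.sleep(15)
```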
As mentioned before, we are going to use the PMM admin add external command and then we will need, in this case, we will need to at least add the ports where we have the exporter publishing the metrics. In this case, it was 7070. And then, optionally, we can add a service name and a group. We are going to see in some slides down the line how the service name and the group affects what the data we see from Grafana. But for now, just remember that we have a group that's named Cassandra Cluster and the service name, in this case, that Cassandra knows Dev1. So at this point, we have the exporter running, but we want to check if it's really getting metrics. How can we do that? The first step I would use is just PMM admin list and it will type, it will output all the exporters that are running within that client. And in this case, we see that the Cassandra Cluster external service type is running for the node Dev1. That's the first step. Then I would go to, in Grafana, we have what we call the advanced data exploration dashboard. In it, we have all the metrics from all the nodes that PMM service is gathering data from. So you can just, I'm not sure if it reads well, it may be a bit small, but the metric in this case was Cassandra Clients connected native clients. And then the node was Cassandra Dev1. So we indeed have data that is being collected. In this case, there were connections being made to the Cassandra node. Again, for the external serverless exporter, the steps are fairly the same or very similar. We have to have a PMM client or PMM agent, as we were saying before, installed and configured in a third node or second node that is not the database node. But that we need to make sure that that node can, of course, reach the database and exporting node. So after we have that installed and configured, we can use, as we mentioned before also, the PMM admin add external serverless commands. And in this case, we will, of course, need not only the port, but the address so we know how to reach that node. In the previous command, we didn't need one because we assumed it was localhost, because the external exporter was running within the same node that the PMM client was running. And then again, we have the external name, which was service name before, but it will serve for the same purpose. And then the group, we are going to have the same group for this Cassandra cluster node. As we can see, the tool will give us output on the service ID that it has. And we are going to see how we can use that information on some other slides. So how can we check if the external serverless exporter is running correctly? Again, PMM admin list would be the command to use as a first step. In this case, we can use it from any node because all the nodes will share this information. It would be like the second section of the output from PMM admin list. It will show all the external exporters that are running in this way. So the service ID, we can use as I was saying before. This one is going to be 14 CF something and 131. And as we can see here, we have the external exporter that is 14 CF something something on 131. So this is the way we can map and make sure that it's at least added to the PMM server environment. Now, again, the second step I would follow is the advanced data exploration dashboard. And checking in this case, we can see here it's Cassandra Dev03. And again, the connections to it. 
But this is an easy way we have to check because this advanced data exploration dashboard can show us every incoming metric that we have. Without prior knowledge, we just on the metric drop down box, we just start typing Cassandra and we will get all metrics that start with Cassandra. Another way we have to check is what we call the PMM inventory. And this is like the central repository where we have information on all services, agents and notes that are currently registered within the PMM server. So if like, like we can see here, we have the Cassandra Dev03. This was the serverless external exporter. And then we have 01 and 02 that these are the external exporters. So these have their own PMM agents running and this doesn't have its own PMM agent running. I'm not sure if it reads either, but in this case, the dash dash group can be useful so that we have some context as to this service name, which is the Cassandra Dev03 and Cassandra Node Dev01, that they are part of the same cluster. So this would be like logically grouping some external services for which otherwise we wouldn't have any information. And then the final thing I wanted to mention in this section was related to where we can get these exporters from. So Prometheus has a very nice and well-documented section on instrumentation and exporters. And in particular, I would use that as a first approach on this. Like if you need an exporter for, let's say, Cassandra, like we were using now, it has the JMX exporter and it is very well-documented and it takes you to links that have proven useful. So that would be the first approach. And then of course, GitHub is a great resource on this and Google is our friends here, so we can just type that search, those keywords and it will search only within the GitHub site. There will be a second approach and then of course, the third one is just Google without the site keyword and see what you get. But with these two pointers, I think that 99% of the cases should be resolved for getting exporters and instructions on how to run them. Typically, it's very, very easy and even now there are some exporters that running Docker, so it's really easy to, at least for having a quick instance to do some testing or proof of concept. And then the next section is okay, now we have the metrics and we need to make something out of them, right? So how can we do it? In BMM, we have the notion of dashboards, which would be like logically grouping series of graphs together. So what does that mean? If you check the top right corner on BMM, this is Grafana, so you will have the services tab which gets you the main four supported software or tools that we were discussing before. Linux, which is the notes overview, MySQL, MongoDB and Postgres. And then if you click on, let's say, notes overview, it will take you to all the dashboards that are relevant to the operating system. So this is the dashboard list that we have for OS metrics. And as you can see, they are really something very specific, right? Even if we are saying notes summary, which is broad and it will of course entice this CPU memory and all the subsystems, but it is detailed in itself because you know that this dashboard will have broad information about the whole server and then if you want to dig in more into, let's say, CPU information, you can use the CPU utilization details dashboard, which will have much more complete set of graphs. 
So dashboards are really important and are the core of the metrics visualization and at least how we view it in BMM, right? So we have three main options when it comes to presenting information to our users. The first one is, of course, create a new one from scratch. I wouldn't say this is for advanced users, really. It's very easy to add new dashboards to start using at least very basic promql to get some data quickly showing. And then the other two options that we have is we do have a GitHub page with some compatible dashboards, which are not currently included in BMM, but have proven to do a good job at whatever level of metrics showing it worked. So for instance, there was one proof of concept that we worked on with the process exporter. You will find it there. The dashboard for that proof of concept will be there and it generally is well documented so that you can either well documented or has a blog in prokona.com or slash blog that you can use. And then, of course, we have the graphana database that, again, as we were saying before with Prometheus exporters, this is the dashboard counterpart. And it has a great deal of dashboards that the community has worked on. So let me show you a quick example on how would adding a dashboard be in this particular case from the graphana database, but we can also import via adjacent file or text. So as you can see here, my keywords only were Cassandra and then the data source Prometheus and we already have a list of available dashboards. Let's say we're feeling lucky and we just clicked on the first one that appeared there and what we will see is the dashboard ID at the right section and we can just copy that and then easily import a dashboard just with that number. We're going to see that in the following slides, but before I move forward, let me also mention that typically you will have information on this page in the overview section, in particular for this one. I didn't want to extend too much here, but you will get information on the generally authors, including information on versions that are supported, how they tested it, etc. So this is important because different versions may be exporting different kinds of metrics with different names. So if you want full compatibility, make sure you are using the same versions as the authors did. This is what I was referring to with just the ID in PMM, if we click on this plus sign and we go to import, we will see this screen and we can just import Viagrafana.com simply with the dashboard ID that we were mentioning before. And then the second screen presents us with choosing a name and the ID that has to be unique, but Grafana will tell you if it's not in red. And then what I do want to stress here is that in here, the Prometheus data, you will have many, depending on the version used, you will have different ones. You will need to check which one applies for your version, maybe metrics, maybe Prometheus or maybe another kind of exporter if you manually edited it, but just be aware that you can try different ones and see which one works for you. And then finally, let me mention, so this is like the conceptual end of the talk, but then we also mentioned like this tangentially related topics that are custom collectors that also allow us to expand on what PMM already offers by default or out of the box, but it lets us get even more information from our systems. So there are two different custom data collectors that PMM offers and they are the query collectors and the text file collectors. 
As you can imagine, query collectors are just queries that we log into the databases we are monitoring and get data out of them. And there is a caveat here that only MySQL and Postgres are supported for now. And then the text file collectors allow us to expand much more on the metrics we have by generalizing the Prometheus ingestion just from a text file that conforms to certain specifications. We have support for all the kinds of resolution scraping times. We will see some outputs in some slides going forward. But as we mentioned, the custom queries are from MySQL and Postgres SQL and it will connect to monitor DB instance and get the query results from it. And then the interesting one in this case I think is the custom text file collector. So one possible way is to simply and it's very simple bash and cron and you will already be getting some interesting functionality at it. Of course it's not limited to this. You can use whatever you want that lets you get the data you want and then write to a file. That's all you need to do, just be able to write to a file. So custom queries collector, I'm going to skip this because we are reaching the end of the allowed time. But the text file collector is the one that I find interesting in this case. And the only thing that if you go to this path, you will see an example of the output that Prometheus or Victoria metrics expects to get. If you remove all these, the two comments here, you will have the file as should be. So the metrics that he's expecting is metric name, potentially some labels, the value, and optionally some timestamp. So if you can output a file with this format, we can have it input as metrics from PMM natively. So you don't have to do anything else. The only thing you have to do is just be able to write to this file. And then finally, I will leave some links here to a quote, unquote, real life example in which Peter demonstrated MySQL custom query collectors. And he even got some really nice dashboard that he used to check to have more insight into MySQL memory usage from the different subsystems within MySQL. So this is all. We will now accept some questions, I think. So thank you very much for your time.
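As a minimal sketch of the text-file collector idea described above: anything that can write a file in the Prometheus exposition format (metric name, optional labels, value, optional timestamp) can feed metrics into PMM. The target path, metric name and labels below are assumptions; check which directory your PMM node exporter setup actually scrapes before using something like this.

```python
# Sketch: write one metric line in Prometheus text format, atomically,
# into a directory that the textfile collector is assumed to scrape.
import os
import tempfile

METRICS_FILE = "/usr/local/percona/pmm2/collectors/textfile-collector/high-resolution/backup_status.prom"  # assumed path

def write_metric(name, value, labels=None):
    label_str = ""
    if labels:
        label_str = "{" + ",".join(f'{k}="{v}"' for k, v in labels.items()) + "}"
    line = f"{name}{label_str} {value}\n"
    # Write to a temp file first so the collector never reads a half-written file.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(METRICS_FILE))
    with os.fdopen(fd, "w") as fh:
        fh.write(line)
    os.replace(tmp, METRICS_FILE)

write_metric("custom_backup_last_success_timestamp", 1612137600,
             {"job": "nightly", "host": "db01"})
```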
|
Your databases and monitoring are all set up and you've got your MySQL and MongoDB databases figured out - you're monitoring them and everything is fine. You're killing off those occasional monster queries and you have it all in check. But now you've been tasked to keep tabs on that new Cassandra cluster your company has - we'll show you how to incorporate monitoring it into the Percona Monitoring and Management tool and which features enable you to get the best out of any new and existing database you're incorporating. Database problems? Not on your watch. You will also find out how to include other databases into PMM as a handy open source tool you can use to monitor your databases. You're invited to ask questions during this presentation as well as in the Q&A section.
|
10.5446/52777 (DOI)
|
Hi, my name is Simon Meckle. I'm very happy to be a speaker at this FOSDEM conference this year. The topic of my presentation is Robotmk: how you can integrate the results from Robot Framework into the monitoring system Checkmk. Let's get started. I'm from the south of Germany, close to Munich, and I've been working with open source monitoring systems like Nagios for 20 years now; for the last 10 years I've been working as a consultant. My focus topics are data center automation with Ansible, monitoring (I'm specialized in Checkmk), and test automation with a focus on Robot Framework. And last but not least, I'm generally interested in Python development. So first let's talk about IT infrastructure monitoring. In my opinion, the best solution for this is Checkmk. Checkmk was written in 2008 by Mathias Kettner as a kind of Nagios add-on, at first only to solve problems with large Nagios installations and their configuration limitations. But it gradually evolved and became a fully integrated monitoring system, which aims to be an all-in-one solution for monitoring everything while still being user-friendly and very flexible. It comes with more than 2,800 curated check plug-ins and is available in two editions: the Raw Edition, which is open source, and the Enterprise Edition with some additional features and enterprise support. The most important fact you should know is that monitoring systems like Nagios, Naemon, Checkmk, Icinga, etc. are all able to monitor up to OSI layer 7, and strictly speaking, they all start on OSI layer 3 by sending ICMP packets to check if a host is up. But didn't we forget the applications? How do we monitor applications? How do we monitor whether users can really work with an application, and how performant it is? The answer: by default there is no such monitoring available. And don't get me wrong, this is not a criticism of any of these monitoring systems. It's simply too complicated for these systems to provide a kind of generic way to monitor applications, to monitor what the user really uses and sees. Layer 7 is called the application layer, but this is a kind of false friend, I think. It's only the interface layer to the applications. Layer 7 only contains the protocol, for example HTTP, to be able to contact a web server. The content of HTTP packets is the full responsibility of the application itself, and this is what we should check. And this is my second disclaimer: I don't say that we do not need checks on layers 3 to 7. It's absolutely necessary to have them, but they are not the whole truth. What I want to say is: in the end, it's all based on a hypothesis. I found a nice paragraph in this book here, "Weniger schlecht programmieren" (less bad coding), which I think fits perfectly: Observations want to become hypotheses. It's not easy to only record what you can see. An untrained observer tends to see a man running after the bus, but in reality, the man and the bus are moving in the same direction accidentally. It's nearly impossible to observe something free from hypothesis. This means we are monitoring, to the best of our ability, the application preconditions and hope that the applications are happy with that. And if the application is OK, we tend to believe that everything we already monitor is everything the application needs to work. We are pushing aside the question of whether there could be more reasons that break the application, and unfortunately, there are many more.
A great way to test applications automatically is the open source tool Robot Framework. Robot Framework is a generic test automation framework written in 2005 by Pekka Klärck at Nokia Siemens Networks, and open source since 2008. It's completely written in Python. There are two features that set Robot Framework apart from other testing tools: the keyword-driven approach, which I will explain on the next slide, and the library concept. Robot Framework itself doesn't test anything; you import libraries with special keywords for web testing, image recognition, REST calls, and so on. And this makes Robot Framework a Swiss Army knife for me. In my opinion, one of the best reasons to choose Robot Framework as a test tool is that it's open source and independent from any software vendor. The decision for Robot Framework is not a decision for a company; it's a decision for the lingua franca of test automation. Robot Framework will always be the property of the community. So what are keywords? Simply explained, keywords abstract source code. The underlying code can be written in Java or in Python, mainly in Python, and keywords can be written and used like functions: they can take parameters and return values. They are case-insensitive, and they allow spaces, which is not a big feature but makes the code much more readable. A question I often get asked is: why should I use Robot Framework? There's Selenium for Python, so why should I use Robot Framework? That's a fair question. You can see here a small snippet of Python code using Selenium, and for sure this code would need some better exception handling and methods to integrate the data into other systems. And if you are writing hundreds of those tests, for example, you will of course try to separate the common code from the test code, and in the end you will have some kind of self-written framework. And this is exactly what Robot Framework already does for you. In my opinion, there is no reason not to use Robot Framework. The same test that you can see here in Python can also be written in Robot Framework using the Selenium library. It's much cleaner code. The yellow words here are the keywords, which I would say are also readable by anyone who is not a real programmer. But Robot Framework is not a programming language itself; it's more like an additional abstraction layer for existing Python modules. So in theory, we now know Checkmk and Robot Framework. Let's talk about the integration of Robot Framework into Checkmk, which can be done with my project Robotmk. Robotmk comes as an MKP extension (MK package); in the end it's a collection of files for Checkmk to integrate the results of Robot Framework. I started this project in November 2019 and presented a proof of concept at the RoboCon conference in January 2020 in Helsinki. I got a lot of good feedback, even though hardly anyone knew Nagios or Checkmk, and I continued my work. And thanks to some customers of mine, I could always further develop and improve the module. This slide normally takes up to five minutes to explain, but time is rare. For now, it's only important for you to know that Robotmk consists of mainly two components. Like any other Checkmk agent plug-in, the Robotmk plug-in gets triggered by the agent on the monitored host where the Robot Framework tests live. Other plug-ins collect system data, memory consumption and so on; the Robotmk plug-in executes Robot Framework tests and transfers the XML result back to the Checkmk server.
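The plain Python-plus-Selenium snippet the speaker contrasts with Robot Framework on that slide might look roughly like the sketch below. The element locators on seleniumeasy.com ("user-message", "display", the button selector) are assumptions about that demo page, and the script is intentionally bare, without the error handling the speaker says such code would need.

```python
# Rough reconstruction of a plain Python/Selenium test: open the demo page,
# type a message and check that the page echoes it back.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://www.seleniumeasy.com/test/basic-first-form-demo.html")
    driver.find_element(By.ID, "user-message").send_keys("Hello FOSDEM")
    driver.find_element(By.CSS_SELECTOR, "form#get-input button").click()
    shown = driver.find_element(By.ID, "display").text
    assert shown == "Hello FOSDEM", f"unexpected text: {shown!r}"
finally:
    driver.quit()
```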
On the Checkmk server, the check script is then responsible for evaluating that Robot XML. By default, a Robot Framework test already has a state, namely whether the whole test was successful or not. And the Robotmk rules, which are normal Checkmk rules, enable you to enrich the result by adding more monitoring checks, for example monitoring the runtimes of suites, tests and keywords.

In the following demo I will show you a start from scratch. We will run a small Robot Framework test on a Windows 10 machine and integrate the result into Checkmk. I will show you how to install Robotmk, how to set some settings, and I will show you features of Robotmk. Let's start. OK, here we are on a Windows 10 VM. The first thing I want to show you is the Robot Framework test itself. It is a very simple test. As you can see, it consists of two test cases. Each of them connects to seleniumeasy.com and executes a dummy test. I can execute this test by hand so that you see what it is doing: it starts Chrome, enters some text and expects the page to display the entered text in a separate field. That's all. The first thing I have to do now is to copy the directory of this Robot test, the Selenium test, into the agent directory of the Checkmk agent. The Robotmk plugin expects the Robot Framework test cases in a certain directory, which is called "robot", and within the robot directory I paste this directory. That's all we need here. So this is the first step.

With Robotmk you can control where Robot test cases should be executed and how. Let me show you how this works. This is a fresh installation of Checkmk with one monitored host, my Windows 10 VM, where the Selenium test should be executed by the Checkmk agent. Now I open the host and service parameters and search for the rule "robot". This brings up three rules; I take the agent plugin rule. This rule controls how agent installation packages should look, individually per host. "Individually" means that the MSI package I am going to create for this Windows host in a few minutes will contain everything: the Robotmk plugin and a YAML definition file which is read by the plugin and which tells the plugin which Robot suites should be executed. I create this rule here. The first thing I do is choose an explicit host, because I only want to apply this rule to my Windows VM. Here I can choose to deploy the plugin. There are two modes of execution: the spooler mode, which means the plugin gets executed externally, for example by a task scheduler, and the asynchronous mode, which is the default. This means the Checkmk agent executes the plugin, but in another interval than the normal Checkmk interval, which is one minute by default. I choose this interval and set it to, let's say, two. "Robot suites directory" is an option where I can override the default directory; we do not need this. "Robot Framework test suites" allows me to add a list of test suites I want to execute. This takes the directory names of the suites; remember, this is the name of the directory we already copied into the robot directory. All the options you can see here are translated at runtime into Robot Framework command line options. So you can choose another name for the suite, you can blacklist and whitelist by tags, and you can load variables or a variable file into the Robot test. The last option is a special Checkmk option: it allows you to assign this result to another host defined in Checkmk. We don't need this at the moment.
This is everything we set up here. The last two options define how the agent output should be encoded; if there is too much content you can also choose zlib to compress the agent output. And there is a setting which controls how many days you want to keep the log files on this host. I click on save, and now you can see that Checkmk offers to bake a new agent version. I choose "bake agents", and now watch this list. Currently we only have one version of the agent, the generic version which is applicable to every host. Now Checkmk will bake a special version for my Windows VM. This is what we now have: the MSI package with a new hash, and here I see the settings which come from the Robotmk rule. The next step is to install this agent on the Windows VM.

Now I am back on the Windows machine. I log into Checkmk, and in the WATO configuration I choose Monitoring Agents and download the new MSI package. You can also set up the Checkmk agent to do automatic unattended updates; I didn't do that here for demonstration purposes. So let's open this new MSI and install it. Remember, this package already contains the Robotmk plugin and the YAML file. OK, let's check this. Here I have the Checkmk agent directory, and within the config directory we see a robotmk YAML file which contains all the information the Robotmk plugin needs to execute the test. The key "suites" contains the directory names of the Robot suites. The brackets here are a placeholder for the command line options we saw before, for example variables and blacklisting and whitelisting of tags, and so on. This is enough for now.

Now we switch back to Checkmk and do the first inventory. OK, this is the service list, the check list, of our Windows VM in Checkmk. Now I want to see whether the agent can already get data from the Robotmk plugin. I choose "edit services" to inspect the agent output, and here we can see that the agent output contains Robotmk information for which Checkmk has not yet created a check. I choose "fix all missing/vanished" to add this check to the set of checks of this host and save the changes. Now we have a new check, Selenium Test, and if I reschedule the Check_MK check which is responsible for fetching the agent data, I will get a result. From now on the Robot test Selenium Test runs continuously in the background on the Windows host. Because it is an asynchronous check, I can see in the action menu of this service that it is scheduled in an interval of 120 seconds, independent of the normal check interval of Checkmk. If I have configured the basic notification settings in Checkmk, I will already get notified if this Selenium test fails.

Now let's say I want to get alerted by Checkmk if the second test case, SeleniumInputBarDemo, runs for too long. Then I can use this button here, choose "parameters for the service" and click Robot Framework. We can see that for Robot Framework there are currently no parameters set, so I create a new rule, a specific rule for this host and this service. I choose runtime thresholds, and since we want to limit the runtime of a test, I choose test thresholds and click Add. This allows me to define a pattern, and each test which matches this pattern will get this threshold. I want 3 and 1 second, for example. And this is an option which I would recommend: normally Robotmk only shows the runtime in the service output if it was exceeded; if you set this to Yes, you will always get the runtime of the monitored nodes.
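As an aside on the YAML file just shown: conceptually, the agent-side plugin only has to read the configured suite names and hand them to Robot Framework, which then writes the output XML that the agent transports. The sketch below illustrates that idea with Robot Framework's Python API; the file locations and the exact YAML structure are assumptions for illustration, not Robotmk's real implementation.

```python
# Conceptual sketch only: read suite names from a robotmk YAML file and run
# them with Robot Framework. Paths and YAML structure are assumptions here,
# not Robotmk internals.
from pathlib import Path

import yaml              # PyYAML
from robot import run    # Robot Framework's programmatic entry point

CONFIG = Path("config/robotmk.yml")   # assumed location of the YAML file
SUITE_DIR = Path("robot")             # directory holding the Robot suites
OUTPUT_DIR = Path("robot_output")     # where the result XML is written

cfg = yaml.safe_load(CONFIG.read_text())
for suite_name in cfg.get("suites", []):
    # robot.run() accepts the same options as the command line tool
    # (variables, include/exclude tags, ...); only the essentials are set here.
    run(
        str(SUITE_DIR / suite_name),
        outputdir=str(OUTPUT_DIR),
        output=f"{suite_name}.xml",
        log="NONE",
        report="NONE",
    )
```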
I would also suggest using this pattern for the performance data creation, so that we get data to draw a graph from. There is an option which allows you to include the last execution date in the first output line, also useful, and you can choose this here to display error messages coming from sub-nodes, which then appear on upper nodes. If you want to learn more about the options, you can choose this book icon here; Robotmk comes with a lot of inline documentation, also with references to the Robot Framework documentation. Then I click on Save, Apply, go back to my host and reschedule the Check_MK service, which means: fetch new data from the agent. And now the Selenium test turned into warning. Let's open the details and see what exactly happened. The first line contains a kind of compressed information: the whole suite failed, when it was last executed, and that it has the warning state. Then follows the reason: the test whose runtime we monitor was warning, its runtime was this value here, and it was exceeded. We can also see the warning state on the test element itself, and we get a graph of exactly this metric. I am free to define more and more runtime thresholds and performance data patterns; everything gets collected in this check. For now we have our first Robot test integrated into Checkmk with the help of Robotmk.

As already explained, you can now get alarms for the state of this suite. This means that whatever goes wrong in this suite, one specific contact or contact group will be notified. That is not always what you want. Let's say you have two teams, and team A is responsible for the first test case and team B for the second. They do not want to receive alarms for the whole suite, because the suite can also contain alarms they are not interested in. You, on the other hand, cannot or do not want to split these Robot tests into two separate robot files. The so-called service discovery level in Robotmk allows you to solve this problem. For this we have to understand how a Robot result is constructed. A suite in general is either a collection of test cases, which is why Robot Framework created "Selenium Test", the file name, or it can be a collection of directories containing those robot files; this is here "SeleniumTest" with an underscore. So from top to bottom we have the folder suite, the file suite, the test cases and the keywords, which can then be nested. By default you will always get one Checkmk check per Robot suite, like here, but you can change the discovery level, for example to two, to create checks from the test cases.

Let's try this. We go into the host and service parameters, search again for "robot", and now we choose the last rule, the service discovery rule, and create a new one. This option allows us to set a pattern; I leave this on the catch-all. And we define the level from which we want to create the services, or the Checkmk checks in other terms; I choose two here to generate them from the test cases. I can also specify a regular expression pattern to blacklist nodes which I do not want to have in the monitoring. The default prefix is always "Robot" and a space; I can change this as well if I want. Now I click on save, and if everything works, when I do a new inventory I will see that the Selenium test check we first created, which has its origin in the folder suite, has disappeared. Instead we got two new checks for the test cases.
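To tie together two of the pieces just discussed, the output XML shipped by the plugin and the runtime thresholds evaluated on the server, here is a hedged sketch of such an evaluation step using Robot Framework's own result API. The thresholds, the file name and the output lines are illustrative; this is not Robotmk's actual check code.

```python
# Hedged sketch: parse a Robot Framework output.xml, compare test runtimes
# against thresholds and print check-plugin-style lines with perfdata.
# Not Robotmk's actual check code; thresholds and file name are illustrative.
from robot.api import ExecutionResult

WARN_S, CRIT_S = 3.0, 10.0

result = ExecutionResult("output.xml")
for test in result.suite.tests:
    runtime_s = test.elapsedtime / 1000.0        # elapsedtime is milliseconds
    if test.status != "PASS" or runtime_s >= CRIT_S:
        state = "CRIT"
    elif runtime_s >= WARN_S:
        state = "WARN"
    else:
        state = "OK"
    print(f"{state} - {test.name}: {test.status} in {runtime_s:.1f}s "
          f"| '{test.name}'={runtime_s:.1f}s;{WARN_S};{CRIT_S}")
```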
The button "fix all missing/vanished" solves this for us: it deletes the old one and creates the two new ones when I click on save and activate. I should now have two new pending checks. And now we can see that these two checks get their data from the test cases and look like this. You have now seen how easy it is to integrate Robot Framework tests into Checkmk. If you want, give it a try, give me feedback and report issues on GitHub.

So what's next on the roadmap? I am currently working on a special Robotmk keyword library which allows you to scrape metrics out of the system under test and monitor them completely from Checkmk, which is currently only possible for runtimes, as we have seen. This will make it possible to monitor application-internal values you probably would not have access to otherwise. Another cool planned feature is to transfer not only the XML results but also the HTML reports from the Robot agent to the Checkmk server and then link them to the Checkmk checks. This is an example of such a log; in case of an alarm the user can then directly open this HTML log on the Checkmk server. It is also planned to selectively record the desktop and integrate those clips into the report, so that it is easier to understand what is going on and to debug failed tests. So thanks for your interest in my presentation, and I am looking forward to hearing your questions now.

Hey Simon, thank you for your talk, it was pretty nice. Sadly we don't have any questions right now, but let's start: do you want to mention anything that you didn't mention in your presentation? I don't think so, I have explained everything to the best of my ability, and if there are any questions, go on. You can just wait a few seconds or minutes. If you have any questions, write them in the chat. There is already applause in the chat, thank you for your nice talk. One thing I don't know if I mentioned: I have used the bakery in the Checkmk Enterprise Edition, but everything can also be done in the Raw Edition of Checkmk, the open source version. We saw that in your talk and thought about asking you this, but we googled a bit and saw that everything is available as open source. And Robot Framework is open source as well.

I see one question by Martin: what are the current projects you are doing with Robotmk? One thing I want to change, or rather expand, is the information which is transferred to the Checkmk server. At the moment this is only the XML output of Robot Framework, which contains all the runtime information of suites, tests and keywords. But Robot Framework also produces a nice HTML log file, and this is currently only available on the test client. I am reorganizing the structure, introducing some metadata, and there will also be a field for HTML content; the complete HTML file will get transferred to the Checkmk server, and the check then splits the content and saves the HTML file into the file system. I also want to build a landing page or something similar, so that you can click in the action menu of the Checkmk service to go to the Robot Framework log. All the images and screenshots which were taken during the test are base64-encoded within the HTML, so it is transportable.
Another feature I want to implement is some kind of meta-monitoring, so that there is a check on the client side which can monitor the staleness of spool files. If there is a Robot Framework test which didn't end successfully and there is no Robotmk section in the agent output, that is a situation you should never have, and I want meta-monitoring which can report how many suites ran, how old the spool files are, and how healthy the whole client is.

OK, there is a question from the chat: I worked with Datadog Synthetic browser tests in the past, which also use Selenium testing; does Robotmk offer the same possibility? Yes, I have used the Robot Framework SeleniumLibrary in my demo, so Selenium is only a part of the possibilities you have with Robot Framework. Another very cool technique for testing websites with Robot Framework is the Browser library. You could also call it the Playwright library, because it is based on the Playwright technology, which is a kind of successor of Puppeteer, which I think is more famous. And I think this is more powerful and even faster than Selenium: with Selenium you have to use a webdriver, which is a kind of translator between Robot Framework and the browser, whereas with Playwright you control the browser as if you were the browser itself. You can do REST calls, monitor the network traffic, start different browser instances with certain permissions and so on. It is very powerful. Right, thank you.

There is one more question by Alex: can you give some more real-world examples that you have already monitored with Robotmk? For one client I have written a test whose goal was to test a very old document management system based on Windows. For this I didn't use a library like Selenium or Playwright, because it is a native Windows application, and there are many more libraries for Robot Framework to accomplish this. One very powerful library is the ImageHorizon library, which is based on a pure Python library. It works like this: you take screenshots of a button, and within the Robot test you write "wait for a pixel region which looks like this screenshot, and if you have found it within 10 seconds, click on it". This is a test which runs very long, it is quite complicated, and it is very stable. Another test I have written is for a government IT system; it is a map-based web application which shows the weather forecast, snowfall, water levels of rivers and so on for the whole of Switzerland. The application shows the metrics of these measurement stations, and the test is based on Playwright and clicks through all these measurement stations and checks for valid metrics.

OK, thank you Simon for your talk. I think you are available for questions in the next couple of days or so. I am. And I want to thank the whole community that watched our devroom today. Thank you guys, have a nice day and stay healthy. Yes, also from my side, thank you all for visiting our devroom, and thank you to the whole devroom team; I think Stefan put a lot of effort into making this whole thing possible. So again, thank you everybody for being here, and maybe we can see each other physically at another conference next year. Would you agree to that? Yes, of course. Thank you guys. See you. Bye. Thank you. Bye.
|
Robotmk: How to extend the monitoring system Checkmk with checks from the user's perspective. So you think you are comprehensively monitoring your business-critical applications? Are you using Nagios, Naemon, Icinga2, Checkmk et cetera for this? Then let me say that, alas, you are only relying on a hypothesis, because all these IT infrastructure monitoring tools have their natural limits. Robotmk extends the monitoring capabilities of Checkmk to the application level. It integrates the results of End2End tests done with Robot Framework, which is by far the most popular and versatile tool for automated testing. My talk first covers the basic concepts of Robot Framework and Checkmk. You will learn the added value of Robotmk and the different monitoring strategies it offers. Robot Framework/Robotmk has been chosen as the E2E monitoring solution for the application landscape of the Swiss government.
|
10.5446/52778 (DOI)
|
So, hi everyone and welcome to our presentation about Thola, a unified interface for communication with network devices. My name is Tobias Berdin, I am going to tell you something about Thola, and after that Mika and Niklas are going to give you a live demo and we will answer your questions. So, what is Thola? Let me give you a short overview. You have your monitoring server, for example Icinga or Nagios, and you want to monitor your network devices, in this case a switch from Juniper or from other vendors, for example Cisco. You have some generic requests that you want to send to those network devices, and you expect some generic response as well. But of course the different devices from different vendors and of different types only support certain specific requests, for example a Juniper-specific request, and then the devices answer with a specific response. The same holds for Cisco and other devices. So how can we translate those generic requests into the specific requests and responses? This is where Thola comes in. Thola knows how to communicate with those different types of devices and knows where to find the information, the data, that you want to monitor on those devices.

So why is Thola that awesome? We support communication with devices from different vendors, for example Cisco, Juniper, Huawei and so on. On the other hand, we support different types of devices, for example switches, routers, directional radio components and UPS, so uninterruptible power supplies. We also have an easy way of adding new types of devices: if you have a device that we don't support yet, you can just add a new one. We make use of different protocols, for example SNMP or HTTP, and on top of that we plan to add more protocols, for example Telnet or SSH. We end up with a compiled binary, in our case a Go binary, that you can just execute. You have no external dependencies on other libraries or anything like that; you just execute the binary, and you therefore need few resources thanks to those non-existing external dependencies. And last but not least, it is of course open source. So if you have any ideas for new types of devices or new features, you can just write issues or pull requests and we will look at them.

So how to use Thola. You can basically use Thola as a command line interface, so you just type in some commands; we have various subcommands and you get your information. Thola also supports a REST API mode, so you can spawn a REST API, and a very small Thola client binary can then communicate with this API. Thola outputs its check results in the check plugin format, so you can integrate Thola into your monitoring systems, for example Icinga or Nagios, and those monitoring systems can read the check plugin output from Thola. We also plan the integration into other monitoring systems, for example Prometheus, but this is coming soon; it is still in development.

So let me give you a short overview of the modes of operation that we support. We have the basic Thola command with various subcommands. For instance identify: this is just for the basic identification of a device, so we get, for example, the vendor of the device, the device class, the serial number and so on. Then we have the read subcommand with, again, various subcommands, for example cpu-load: we can read, in this case, the usage of the CPU, and other values that you want to monitor, for example memory usage or interfaces.
So you can have a look at what kind of interfaces exist, and you can read out additional values for those interfaces, for example the traffic counters or the operational status and so on. Then there is the check subcommand; this is probably the most relevant one. Again you can check, for example, the CPU load, and you can also pass in thresholds, so Thola will give you a warning or a critical message if the CPU load currently on this device is above a certain threshold. The same holds for memory usage and interface metrics, so you can see if, for example, the traffic counter of an interface is too high. Then we are currently working on the write subcommand, but this, again, is still in development and coming soon.

The first example of a subcommand is thola identify. As I already mentioned, you can use it to automatically identify a device. In this example we want to identify the device behind this given IP address, and Thola gives you, on the one hand, the class of this device, in this case an IP-10 device from Ceragon, and it also outputs the serial number and OS version. Then thola read as the next subcommand, in this case thola read interfaces: again, it reads out some special interface information from this device, and we end up with this view. You can see this device has eight interfaces, along with some relevant information about these interfaces. Thola check: thola check outputs its data in the check plugin format, and you can use this output in monitoring tools, as I said before, for example Nagios or Icinga. In this case thola check cpu-load checks the CPU usage of this device. We can also pass in warning and critical values, so if the CPU load that we read is higher than 85 we get a warning, and if it is higher than 95 we end up with a critical message. In our example the CPU load is at 91%, so Thola gives a warning.

Then, as I said before, Thola also has support for a REST API mode, and with the thola api command we can start and configure our API. The thola api command basically spawns a server running, in this case, on port 8237, but you can specify the port yourself if you want. Once you have spawned the Thola API, you can use the Thola client to communicate with the API, and you can basically let the API do all your communication. So we want to identify our device with our Thola client, and we therefore have to specify where the API is located, in this case on our own system at port 8237, and of course we get the same results as with the normal Thola binary.

So how do we separate different types of devices? We introduce device classes. Device classes consist of two parts: on the one hand, some conditions for a device to be assigned to this class, so when can we say that this device belongs to this class; on the other hand, the available operations for this device, so what can you do, what can you read out from a certain device, and what types of requests have to be performed for certain generic operations. Device classes are ordered in a sort of hierarchy. We start with the generic class; this class just contains some basic information that is relevant for all types of devices. The next level of our hierarchy is basically the separation of all the vendors that we support, for instance Junos for the Juniper devices, CeraOS for Ceragon and so on. But in some cases it might be helpful to have an even finer separation, since some devices from the same vendor behave somewhat differently.
And therefore you can specify even more device classes that inherit information and add more specific data. Device classes are written in YAML files, so the entire logic is in those files. They are easy to write and not bound to a programming language. They are also embedded in the binary, so you don't have external dependencies on external files; they are compiled into the binary. You can extend those classes with code, so if some device has very weird behavior at some point, you can extend the YAML files with code communicators, in our case Go code.

This is one example of a device class, in this case the ios device class. We see the condition, so when can we say that a device belongs to this ios device class: either the sysObjectID starts with a certain value, in this case this SNMP OID, or the description of the device matches some value, in this case begins with the string "Cisco". And this is one example of model identification: we want to read out the model of our device, so we just perform a simple SNMP get on a given OID and slightly modify the result that we receive; we replace this regex, the Cisco prefix, with an empty string.
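The device-class conditions just described boil down to simple pattern checks on SNMP values; the following Python sketch illustrates that idea. The OID prefixes and regular expressions are placeholders for illustration, not the real values from Thola's YAML files.

```python
# Conceptual sketch of device-class matching as described above: a class
# claims a device if its sysObjectID starts with a vendor prefix or its
# sysDescr matches a pattern. The values below are illustrative placeholders.
import re

DEVICE_CLASSES = [
    {"name": "ios",   "oid_prefix": ".1.3.6.1.4.1.9.",    "descr": re.compile(r"^Cisco")},
    {"name": "junos", "oid_prefix": ".1.3.6.1.4.1.2636.", "descr": re.compile(r"JUNOS", re.I)},
]

def classify(sys_object_id: str, sys_descr: str) -> str:
    for cls in DEVICE_CLASSES:
        if sys_object_id.startswith(cls["oid_prefix"]) or cls["descr"].search(sys_descr):
            return cls["name"]
    return "generic"   # fall back to the base class of the hierarchy

print(classify(".1.3.6.1.4.1.9.1.1745", "Cisco IOS Software ..."))   # -> ios
```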
So let's now continue with a short live demo. Hello, I'm Mika, one of the main developers of Thola, and today we are going to give you a short introduction to Thola. I'm here together with Niklas. Hi, I'm Niklas, I'm also one of the developers of Thola. Yeah, let's go. So I have a monitoring environment and I have a new device under the IP 192.168.1.64, and I don't know its vendor or model. Can Thola help me there, Niklas? Yeah, like Tobias already showed in the presentation, there is a command thola identify that can be used to identify devices and determine some of their properties. As we see here, your device is from Juniper, it runs the operating system Junos, and it is a VMX with the given serial number. OK, thank you very much, now we have that information. So now I need to monitor my new device, can Thola help me with that too? Yeah, sure, Thola offers some checks that can be executed on devices. If we just type thola check, we can see which check modes are available; these are all the check modes that can be executed right now. For example, if we want to monitor the CPU load of your device, we can just use thola check cpu-load and the IP address, and here we see the current CPU load is 1%, so it's all OK. If we want to add some thresholds, this can easily be done with command line flags: for example, we can say the warning threshold is 70% and the critical threshold is 90%, and if we execute this, we see the check is still OK. The same goes for memory usage; we just have to change our check mode, which is now check memory-usage, and we see the current memory usage is 53%. If we now lower our warning threshold, for example to 50%, we see our check is now warning, because the memory usage is beyond our threshold. OK, that's very nice, that's a lot of information I get from Thola. Can I also change the output format, and what is the output format by default? Yeah, of course. As we see here, our default output format for checks is the Nagios check plugin format, but we can simply change it by using the --format command line flag. For example, we can use JSON, and our output will be returned as a JSON string, or we can use XML, and we will receive XML output.

OK, that's very nice for parsing and using the data elsewhere. Nice. So now I not only need to monitor the CPU usage and memory usage, I also need to monitor my interface statistics, like error counters and usage. Can I do that with Thola too? Yeah, we also have a check mode for reading out metrics for the interfaces of a device. It is simply called thola check interface-metrics, and this will basically output all available metrics for all interfaces as performance data, like we can see here. So it reads out every available counter, the interface status and many more metrics for every interface. Yeah, that's really nice. So I want to get going and use Thola in my environment with Icinga. What are the best practices when it comes to using Thola in a monitoring environment? Yeah, the best way to use Thola is actually its API mode. There is a special command, simply called thola api, which starts Thola in its API mode. I will just add the log level flag so we can see a little bit of output on the command line. We see here the Thola server has started, and we also initialized a database which is used for caching, so we don't have to identify every device on every request. Now we can simply use the thola client binary to contact our API. For example, if we want to check the CPU load again, but this time using the Thola API, we can simply use thola client check cpu-load and our IP address again, 192.168.1.64, and we now have to specify a target API, which in this case is just http://localhost on port 8237. We see the check is executed, but not locally; it is sent to the API, and we see some logs right here about what exactly Thola does with the request. So that's really nice: just one process is handling all the load, and the other ones are very lightweight and just send a request to this API. Exactly. Very nice. So I can just open one port in my monitoring environment and get all my statistics over that one port on which Thola is running. Yeah. Very, very nice. Do you also have an example of how this looks in a running environment with Icinga or something like that? Yeah, I have a small Icinga environment running; we can have a quick look at it. Right here we have two hosts which are relevant. This is a Cisco switch which is monitored using Thola, and we have two Thola checks running right now. This one is Thola CPU load; it is basically the CPU load check that I just showed. We see here that Icinga has the performance data, the CPU load is 5% right now, and if we click on inspect, we can see it is basically running the command that I just showed; the target API is running on the IP address 10.0.0.2 on port 8237. We also have this IP address as a host in our Icinga, so we can have a quick look at this host as well. It is the Thola server, and we have a special check to monitor our Thola server, which checks some metrics and checks whether the Thola server is running. Here we see it has been running since today, and we get some metrics like the total request counter, the successful requests, and for example the average response time which the server takes to process a request. OK, this looks very nice. I think I'm going to use Thola in my environment. Thank you, Niklas, for your demonstration, and let's get back to Tobias. Bye. Bye.
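As a practical aside to the demo: the JSON output format makes it easy to reuse Thola results in your own scripts. The sketch below wraps the CLI from Python; the subcommand and flag spellings are taken from the spoken demo and may differ slightly from the real CLI, so check thola --help before relying on them.

```python
# Hedged sketch: call the thola CLI with JSON output and reuse the result in a
# script. Subcommand and flag spellings follow the spoken demo and may differ
# from the real CLI; verify with `thola --help`. The IP is a placeholder.
import json
import subprocess

def check_cpu_load(host: str, warn: int = 70, crit: int = 90) -> dict:
    cmd = [
        "thola", "check", "cpu-load", host,
        "--warning", str(warn),
        "--critical", str(crit),
        "--format", "json",
    ]
    proc = subprocess.run(cmd, capture_output=True, text=True, check=False)
    return json.loads(proc.stdout)

if __name__ == "__main__":
    print(check_cpu_load("192.0.2.10"))
```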
So, before we come to an end, I want to mention our GitHub page. We are always looking forward to seeing your contributions if you have any ideas; just open a pull request or an issue on our GitHub at github.com/inexio/thola and we will look at it. And yes, this is basically all I have to say. Thank you for your attention, and of course, stay open source. If you have any questions, now is the time for us to answer them. Thank you.
|
Thola is a new open source tool for reading, monitoring and provisioning (coming soon) network devices, written in Go. This talk will cover the current state of development as well as planned features, including reading out inventory, configuring network devices, support for other monitoring systems like Prometheus, and many more. It serves as a unified interface for communication with network devices and features a check mode which complies with the monitoring plugins development guidelines and is therefore compatible with Nagios, Icinga, Zabbix, Checkmk, etc.
|
10.5446/52779 (DOI)
|
Hello, my name is Thomas, and today I would like to introduce to you the time series service for Icinga 2. Well, why a time series service for Icinga 2? Let's say you want to store performance data from Icinga 2 in a time series database such as InfluxDB, and maybe you want to store a lot of performance data from Icinga 2 in that time series database. Then sooner or later you will run into a capacity problem: you will run low on disk space. And what can you do? Well, as always when you run into a capacity problem, you can fight the problem with hardware, I guess. So what will you do? You will split your data up over maybe two, three or four nodes: for the first year you store your data on the first node, for the second year on the second node, and so on. So the problem is solved, you think? Well, that is only half of the story. The second half is reading the data back out of the nodes. That is a bit of a challenge, because you don't just want to get the raw data; you also want some calculations, maybe some aggregation functions, some bucketing of x days, and other quite specific selections. So that's a challenge, and it was a challenge for us as well, and we said: challenge accepted. So we implemented an awesome time series service, no problem. The architecture behind it: you have your performance data, which comes from Icinga 2 and is stored in the InfluxDB nodes, and you have your applications or clients which want to fetch this data. The applications ask the time series service: please give me the data from this start date to that stop date. That's all, it's as simple as that. In one sentence, the time series service is a web service that queries performance data from multiple time series database nodes. What are the benefits? First of all, it is a RESTful API. It is scalable and flexible. No query language skills are needed, everyone can use it. The data is queried asynchronously: if you query one node it is nearly as fast as if you query two or three nodes. The data is easily queried across multiple nodes, and flexibility is part of the design of the software. With that flexibility you can substitute the time series database backend, so if you don't use InfluxDB you can maybe use TimescaleDB or OpenTSDB. So where can I get it? The current state is work in progress, and it will be published on github.com. So stay tuned, and thank you.
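The core idea described here, one web service that fans a query out to several time series nodes concurrently and merges the answers, can be sketched in a few lines of Python. Since the project is not published yet, the endpoint path, query parameters and response shape below are pure assumptions for illustration.

```python
# Conceptual sketch of the described fan-out: query several time series nodes
# concurrently and merge the results. Endpoint, parameters and response shape
# are illustrative assumptions, not the actual service's API.
import asyncio
import aiohttp

NODES = ["http://tsdb-node1:8086", "http://tsdb-node2:8086"]   # assumed node URLs

async def query_node(session, base_url, metric, start, stop):
    params = {"metric": metric, "start": start, "stop": stop}  # assumed parameters
    async with session.get(f"{base_url}/query", params=params) as resp:
        resp.raise_for_status()
        return await resp.json()

async def query_all(metric, start, stop):
    async with aiohttp.ClientSession() as session:
        per_node = await asyncio.gather(
            *(query_node(session, node, metric, start, stop) for node in NODES)
        )
    # Each node holds a different time range, so concatenating is enough here.
    return [point for node_result in per_node for point in node_result]

if __name__ == "__main__":
    print(asyncio.run(query_all("load1", "2020-01-01", "2021-01-01")))
```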
|
This is a lightning talk about an upcoming open source project: a time series/InfluxDB service for Icinga 2.
|
10.5446/52783 (DOI)
|
Hello and welcome to the open source tooling chain capability map session. In this session we want to talk about what is required to automate the most of the open source compliance tasks. My name is Jan Thielscher. I am from EACG, a consulting company specializing in open source consulting. The creator of Trust Source, which is a software-solution service that is focusing on open source compliance. Actually open source compliance came to me quite a while ago. Naturally I studied business and computer science, but also did a degree in law, which then gives more or less directly into the direction of open source compliance, because this is a typical mixture of these fields. That's also one of the reasons why in 2015 when we had very large projects and we were delivering to our customers, where we used tons of open source that drove us into the consequences of the understanding of the need for something like Trust Source. However, I'm not here to talk about Trust Source. There are other talks. There's open source tooling available. A couple of links are shown there. But what I want to talk about is what is required to automate your open source compliance. To get towards that, it's good to have a clear vision of what your goal is. In the end, this is a picture from the open chain. The idea that you want to create an outbound artifact that accompanies your supplied software and describes what is in there. In an optimal world, everything that leaves the company has such a machine readable bill of materials attached so that everybody who is taking on the stuff can understand what he gets and what is in there. Unfortunately, we have to recognize that we are on our way to get there. Five years ago, there have been other difficulties that we face today. But still, some tasks have to be completed. We are not really done with this job. There is a lot of things to do. That's also why we have this tooling chain work group, which is actually addressing the automation requirements here. Try automation. Whoever is familiar with the open source compliance task sees that it's essential to have open source compliance relevant people that are looking for it. But even if you do it right, it's nothing where you can win good reputation. In the end, it's kind of a hygiene thing. It needs to be done, but you won't get an innovation award for it. It's something that is absolutely essential. I mean, one of our customers put it once saying when we were talking to the workers council to tell them why do we have to introduce now tools and measures and audit logs and so on. He said, well, guys, there is no opt-in or opt-out. It's just good manufacturing practice. You have to provide this material. And that's, I mean, you can't win anything if you do it, but you will lose or could lose a lot if you don't. And so that's why actually open source has to be understood as something that has to be there. And because nobody really wants to do it, because you can't gain good reputation in doing it well, it's something that you should automate. And there's also another couple of tasks that automation is required, because, I mean, you all know that continuous integration and development, continuous deployment are the coming things if they are not there already in your organization. And whenever you do software development, you will push it through immediately to either a repository or a test environment or probably even production. 
But you know that there is with all these package managers, there's always upgrades and things that go into the software and into the build that require a lot of attention. And if someone would sit there manually, the total CI CD idea would fail. So there is a huge demand for automation. And this is why it's so important to understand what all goes into the topic when you want to automate the thing. So it's like designing a system. I mean, five years ago, when we started thinking about this, we had a very rough understanding of what should be done. So all it was clear, there is something that should go on the outside. So there is a result and we want to have this documentation done. That's pretty clear. But the first thing that came up to our mind was, oh, we have to scan the things. That was first an understanding of scanning sources, because we need to know what's in there. Then there was a thinking of, oh, what about actually the parts? That we are not building by ourselves? Probably there's something else that we should look at and what is in there. The same applies later on. Since a couple of years now, we have the topic of containers. So containers come with a lot of stuff. There might also be something in. So it's all about scanning. But from the scanning to the outbound artifact, there's still a way to go. So we need to collect this stuff. We need to build a kind of a situation. It requires an understanding of what are we going to do with it? Is it actually relevant to build a kind of a compliance artifact? Is there something that we have to take into consideration when we are looking at the stuff? So we have to understand that we see, we have to assess the stuff that we are using, and then we have to put it into a kind of a situation project, module, whatever we call it. And this then someone should review in a kind of an approval saying, OK, that's good, or that's not good, please do again. So that we can be sure that it's correct. So we have a kind of a requirement for an approval. And whenever we talk about approvals, it's absolutely essential that we know who is doing what so that we need a kind of users. And we need probably to decide between those that are producing something and those that are giving the approval. So we have roles involved. And also essential, if you do an approval, it's relevant to audit that this approval has been has happened. So it doesn't make sense if you cannot follow up later on who has given the approval. So audit is absolutely essential as well. This gives already a couple of requirements for kind of a solution that you have to build. And then it happens that one project has a component, a second project has a component, and probably there are more projects using all the same components. So probably it's a good idea to have a kind of a package or data repository where you store data that you have once curated or identified and curated there. And so you do not need to do the same step over and over again. I think having such a repository is good. But the data that's arriving from the scan might not necessarily be complete. There is probably a declared license with something that is not essentially what is the one that is inside the repository. So how can this really resolve? You need more metadata. And what you can do to get more metadata is to use some kind of package crawlers, crawlers that are running around searching the Internet for additional information that can be added to this package data. 
Also it might be relevant that you have an option to look up for older versions. So it's a kind of a source archive that you probably should include into this kind of repository. So you can look up data or binaries or things that are probably older and then you can assess them in a separate approach to decode and understand what's in there. Following up this, it turns out that it's also a good idea if you once identified that there is something you don't want to use, for example a component that is out of license and therefore is not capable to be reused or doesn't give you the right to use it actually, then you do not want someone else to put it in his project again. So it makes sense to have kind of policy and rules capability that prevents people from doing things or pushing things into an approval flow just to hear that it's already denied by someone else. So this policy and rules thing is something that really might help you to get on and to improve the process. And whenever things are to be improved, a good base to understand how much it is improved is to have certain kind of reporting and analytics. So this should be something that is inside this kind of function as well. Now we start to get away from the basics into the luxury parts. Also cool to have would be something like a generator for all these kind of compliance artefacts because it's pretty, I mean when we have a list of 100, 200 or 1000 components, it's not really fun to assemble this all manually. So having a kind of compliance artefact generator is something that is pretty helpful or could be pretty helpful. To assemble the right things into this kind of compliance artefact it might also be relevant to have a kind of an understanding of what actually is required. There is one option to say, okay we do always everything and do the worst case scenario deployment. But there is also an option to say I want to take the circumstances and only provide as much as required. So I have a kind of a solver that is capable of understanding the licenses and the situational context and brings this together as the input for this kind of artefact generator so that it can create suitable stuff. And in addition to that we recognize and this is also why we put it as a new open source solution into the market. We saw that it's only few sources really have the information available or in a place put together that is required to cope with the challenges for publication. So the publication requirements, attribution requirements, it's often required to mention the copyright holder, required to put in license stuff and so on and so forth. But this is not necessarily exactly or simply to be derived from the package so sometimes it makes sense to have a more detailed view into a repository to understand what are the licenses and the copyrights attached. This is done by such a kind of a license and copyright scanner or could be done. It could also be done manually but having it automated might be a much better idea. When you already are doing all this for the open source world we've learned that there are some regulatory things that might want you to do in the coming off the shelf management for third party components in the same place because handling it a little bit different sort of components does not really make much of a difference anymore. Also nice to have is the snippet scanning. 
Snippet scanning is a real source code analysis of the parts you already use, so that if a routine has been taken from somewhere, this is flagged and the tool says: look, this is something we know from these sources, this might be a copy or a modification that might not be allowed. But we saw in our practice that it is pretty hard to get this into CI/CD, because the amount of false positives is high and the comparison base can be very narrow. It is hard to really put this into the flow and keep it within the time budget, and often the finding is just an if-then-else construct that has been reused, which is hardly worth protecting. However, given all this, if it is not all done by one tool, it might make sense to have something that orchestrates it and keeps track of what is happening, so that you have a status over all of this. And now we are done: this is more or less the capability model that we have developed over the last years and which is now available as a kind of orientation map. You can use it to focus on particular parts. For example, if you look into the outbound components and go back to our basic model and say, OK, this outbound side should be, for example, OpenChain compliant, then I need to have the audit log, I need to have the artifacts, and if I want to automate it, I need to have this artifact generator. You can look at the inbound side and see that there are different sorts of scanning types I can execute; in my case probably just the source scan is relevant, in other cases only the container scan is relevant, whatever. Or someone really wants to make sure that the packages used are attributed correctly, so they want to do this kind of license and author scanning, which is a very different task from resolving transitive dependencies, which is what you do in package scanning. So there are a lot of different things, and it is good to understand the differences between them, because they are easily mixed up. That is something we really came to understand over time when talking to customers: they get confused by these different tasks. And that is why we put this model out there. It is available, I will put up the links later on so that you can follow up and dive deeper, and all the tasks that I have described are described and laid out there, with responsibilities and details broken down.

When we were talking about inbound, there is one thing that I really have to point out. I put it in an extra bucket because, wherever we go, we find people just taking containers as they are. Even if you take a software component, this is difficult, it is dangerous. So please, whoever is going to reuse things, make sure that you understand what is inside, either by scanning or by decomposing it manually; understand what you are using. It is so important. And these containers come with stuff you will never like. Pay attention to what is in there, it is absolutely essential. The same holds for components, but for containers it seems like people believe that something is so ready to be reused that they do not need to spend the time looking into it. I do not understand why that is, but it is absolutely dangerous. Then the next thing that we want to look at is the inner part. Here we see actually two important things.
The first thing: whatever you do, you should make sure that you resolve the obligations dynamically, because there is a context, it is your situation, your IP requirements, the business model behind it. There are different things that influence the interpretation, and you have to make sure that you really look for the effective licenses, not just the declared ones. There is a lot of stuff out there that declares licenses, but unfortunately that is just a high-level view, and deep down there are more licenses. You have to understand this; it can be very toxic. Not necessarily, but it could be. And finally, when you implement such a process in an organization, you must ensure accountability. It is so important, and it is something that no tooling chain can do for you. This is something that you have to set up aside from the tooling: make sure that people have to, for example, keep their project green. There must be an indicator saying, hey, the approval is done; if not, people will work around it. And this is something that no tooling chain can ever provide you with. You have to make sure that people are accountable for the results they are producing; the tooling chain can be the mechanism to serve and comply with this, but there must be an organizational barrier preventing people from bypassing it.

There are a couple of uses for this capability model. So why should you take care of this capability model? It is helpful in a couple of ways. The first is that it gives you orientation when you want to develop your organization, so it can help you to implement. These are, for example, the dimensions that we typically put into our change projects: there is a process, there is the policy itself, there are the different tools or tool approaches that we have, there is an OSPO that needs to be established, and there is the change project itself. And this should follow certain goals. We typically have a mechanism that says something like: OK, let's put certain qualifications into Q1 so that we can ensure the basic move, then we enrich capabilities, we grow in maturity and professionalism, and finally we step up and increase the quality and seriousness of the overall process. So you may want to use this to decide how you want to focus your rollout. This is typically very much demand driven: if there is a business need to protect this or that, then you will have different icons put in there than if another need drives the compliance effort. So orientation is one thing that you can derive from it. The second thing is an understanding of what a certain tool can actually provide you with. Here you see a couple of samples; these are pretty simple ones because they are focused on one or two capabilities each. This is something we want to get more into: we want to use the model to describe different solutions, and we want to offer this to give orientation on where you could use which kind of service. Especially in combination with the map that we have seen before, the planning, it gives you the orientation or understanding of when to look for which tooling, and so you can grow in steps. And this is then the map, or the record, that can help you with that. It is also good if you want to compare solutions.
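Before the comparison topic continues, a short aside to make the "declared versus effective license" point concrete: the effective set is what a file-level scan actually finds, not only what the package metadata declares. The license identifiers in this sketch are examples only.

```python
# Illustration of "declared vs. effective" licenses: combine what the package
# declares with what a source scan actually found and flag the difference.
# License identifiers are examples only.
def effective_licenses(declared: set, found_in_files: set) -> dict:
    effective = declared | found_in_files
    undeclared = found_in_files - declared
    return {
        "effective": sorted(effective),
        "undeclared": sorted(undeclared),   # the potentially toxic surprises
        "needs_review": bool(undeclared),
    }

print(effective_licenses(
    declared={"MIT"},
    found_in_files={"MIT", "GPL-2.0-only"},   # e.g. one vendored file
))
```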
I mean, you can have now, if you say you want to do a setup and you get these kind of comparisons then you can map the tools and get the promises from probably the vendors saying, hey look, what are you serving and which competence. Here's an outline of the competencies. Are you coping with that? And then the vendor takes the boxes and you can see better understand which might be the right thing for your implementation plan. So these are actually more the benefits. This is also why in the next step we want to, based on this model, we want to grow further in that section. We want to map more tools. We want to build a kind of catalog with this. And I'm happy to get more information. All the one I help are invited to join us and to put in also efforts to close these materials. Probably also it's important to keep them up and running. I mean, if I judge something today, it might be different tomorrow. So it will be an effort to keep this in place. And also there is still some work to do. I mean, currently it's a reference architecture that we've put in there. It's a capability model, overview of things that need to be done. It's a functional view and it can be mapped into different technical architectures. And also there are still, when you look into the repositories, then you will find also an area of comments on each capability description. And these comments sometimes contain questions because there might be different capability to be achieved. Or we could probably do it in a different way or tailor it differently. And these questions, they should be finalized. They should be answered probably or put aside in a documented manner so that we say, okay, we can be sure that this will or will not be done. It will continue to mature the model and probably one day we come up with the reference architecture. Instead of having one that is a version number and going on, probably we can say, wow, this is the reference architecture. Finally there's also important these efforts that they have happened within the tooling work group over the last one and a half years. And there's also an approach to start from the data model side. So having all these components is nice and you saw this tool orchestrator in the background. The tool orchestrator needs to talk to these components, to initiate something, to push an event, to trigger or to receive messages. And so it's absolutely essential that we understand.
|
Openchain is a comprehensive set of requirements allowing to cope with the open source compliance challenge. Recently it even has been accepted as ISO standard. However, compliance in todays world is not possible without tool support. To get a grip on the different tools, understand what they can do and where their limitations are, the OC tooling workgroup decided to develop a capability model. This model outlines all required capabilities to cope with the open source challenge and allows to map the functionality of tools. Thus the model can be seen as a map through the djungle of tools. In this talk, Jan will introduce the model as well as briefly outline the most relevant capabilities. Links to further resources as well as first maps will be provided. Open Source Compliance (OSC) is not a might anymore. It is part of good manufacturing practise. Since manufacturing and software grow closer, the insight that not only the Software Bill of Materials (SBOMs) become essential for maintainability and security of software but also the legal documentation can't be seen as a once printed and never read paperwork anymore. The acceptance of CI/CD as development best practise and the ever growing amount of components used from open source stacks and their imminent dependencies prevent further manual delivery of compliance artefacts. But what is required to cope with that challenge? Where do you have to put your efforts in? Which tools are the right tools to check for your purpose? To help you answer these questions, the Capability model has been designed. This talk introduces the motivation as well as the basic aspects of the model without going into details. Guiding thoughts and ideas will be transported as well as links for further studies will be provided. Finally the talk will conclude with a few mapping samples and an outlook in which directions the work will proceed.
|
10.5446/52784 (DOI)
|
Hello and welcome to the Open Chain session. In this session we want to give you an introduction into what Open Chain is about, what Open Source license compliance project is doing. My name is Jan Tieter. I am from EACG. EACG is a consultancy coming from the architecture part and is focusing on supporting organisations and getting a good deal out of Open Source. We came to the Open Source chain project in 2018 and it very much fits to our thoughts how Open Source compliance should be achieved. This is why we are pretty much happy to support the project and also happy and very enlightened about the latest achievements where this specification became recognised as an ISO standard, international standard. I only have a few minutes to make this complex topic or talk about this complex topic so I don't want to spend too much on talking around. The core idea of Open Chain project has always been to define the requirements that are relevant or an organisation should comply with to ensure that they are producing Open Source in a compliant way, meaning legally compliant in the first place. The idea is we have an inbound side of a company that is taking software as for example Open Source as inbound into its own procedures and processes to produce some artefacts that it then puts outbound into a downstream consumer. The idea is if this happens along a value chain, it is absolutely beneficial if you have a core set of training, policies and processes that allow you to manage all your Open Source in a very reliable manner so that you are able to produce an outbound product that is definitely compliant. If you think this away from the particular company into the value chain, then it comes clear that the overall benefit of a value chain that can trust or the inbounds it is receiving, because it is receiving it from a certified organisation that is doing or reliably delivering a compliant product that reduces the analysis efforts you have when you want to reuse the stuff or want to build upon it. This idea is quite a while ago achieved by particular companies and since then they have created this specification that is called the Open Chain specification and this recently in December 2020 has been recognised as an international standard under ISO 5230 and it is an open standard, it is available for everybody, it is something that you can really look into, materials are available and you can start implementing these things. So what is this all about? I want to give a quick overview, it is just a nutshell view so it is more complex behind but let us talk about the basic things. So there is the specification, it separates requirements into different domains, there is for example the foundational part that is requesting to have an open source policy available to make resources skilled that are capable of understanding what they have to do, that there is an awareness in your organisation that open source compliance is something relevant to take care of, you have to scope it in the sense of who is actually taking care of or involved in this topic and it should be clarified who is actually where, which part of the organisation is actually participating in this programme and also you should make sure that licence obligations are clearly understood and certain use cases can be managed. 
In the second domain it is about the capability to deliver: to assign the responsibilities, to have people on the outbound side who can take inquiries, to have a process for how to handle such requests, and to resource the people sufficiently so that the task can actually be fulfilled and is not just a ticket hanging somewhere in a ticket system. It is about review and approval within the projects: when projects are using software, they produce bills of materials, there is a clear process for documenting what is inside the software, and there is someone who handles the different cases when software is modified or who understands what to do when a certain business case has to be fulfilled. Then it should also be ensured that the artefacts are provisioned, so that everything that leaves the company comes with the required compliance artefacts and fulfils the specification, and that there is an understanding of what happens and which policies apply when I want to contribute something back, say I have a fix: how do I handle this, how can this be treated. And last but not least, something that I did not understand in the beginning but that makes sense from the point of view of the value chain: if you declare your conformity, it is something that you have to maintain the whole time, because it is not a point-in-time thing; it is something lasting that must be assured, and it must also be declared in order to build trust. So these are the core domains of the requirements that you have to fulfil with this specification, and there is a lot of material out there that is all freely available. Look at GitHub for the OpenChain project reference materials; they are simple to access and have been produced by international working groups that are actively supporting further development and also building training materials and so on. The groups have been working on a self-certification questionnaire that is available over the internet. You can go there, look into the questions, see the status of your organization, and you will be able either to derive what needs to be done and where to close the gaps, or, with the help of some supporters, use this to improve the organization that you are in. There is a global distribution of partners: you find a lot of law firms in the partner program, there are tool vendors that support open source compliance in the program, there are many consulting companies with specific knowledge of how to drive your organization towards a compliant approach, and third-party certifications are available. So there is a huge amount of knowledge out there that you can take to make your own compliance work. Be part of this, join us. There is a website, you can find a get-started document, and there is also a lot of material available. Whenever you have questions, feel free to reach out to Shane Coughlan, the project lead at the Linux Foundation, or direct your questions to one of the partners; for example, you can reach me at OSC at EACG dot DE, and we will be happy to help answer your questions. I will also be available in the Q&A session that follows this. Thank you for your attention and I am looking forward to your questions.
|
A short overview of the OpenChain project, its purpose, goals and current state. We will introduce the project and its current state, especially the ISO adoption, give a short overview of the requirements and link to the different certification schemes. Finally, we will explain the working procedures and point to the international working groups.
|
10.5446/52786 (DOI)
|
Welcome everyone to my session at the Friends of Open JDK Room at FOSDOM. My name is Steven Shin. I'm the VP of Developer Relations at JFrog, and I'm very pleased to be able to talk about one of my favorite topics. Today I'm going to talk about how everyone can support the Java community, and I'm a longtime Java community member as a Jug Leader, Java Champion, Java One Speaker, Java One Rockstar, Java One Conference Chair. I ran the Java community for a while, and now I'm pleased to be able to do the same at JFrog, sponsoring our activities for the Java community and other developer communities. And I think as everyone here knows, the Java community is one of the most vibrant and active developer communities with over 12 million global developers represented, a huge set of Java user groups with hundreds of active global Java user groups on all continents, and a huge community of advocates and folks who are quite active, support the Java community, and a bunch of my great friends who joined me on stage for the Java One community keynote you can see here in the slide. So what I'm going to do today is I'm going to talk a bit about how everybody can better engage with the Java community. These are tips which you may find useful yourself. It's a great way to share with colleagues and coworkers who you want to get more active in the Java community, and I think it's something which we should all be cognizant about how we can become better corporate, better community citizens and help to engage and support our fellow community champions. So let's start out with the one which I think is most obvious since we're all here at the Friends of OpenJDK room is how can you become more active in the biggest open source project for Java, which is OpenJDK, which supports the Java releases itself. And if you're not already a committer in this room, which I'm sure a lot of you are, it's very easy, go to openjdk.java.net. There is great advice here on how you can engage and get started, but basically find something interesting which you want to do, whether it's a bug fix, whether it's a feature, whether it's something which you can add value over time with. Discuss your intended change on the mailing list, become active, talk to other folks who are committers, talk to folks and get ideas about the best way to approach it. Submit a patch, start out with small things, bug fixes, things which are easy to review and fix. And over time, you'll build the credibility that you not only have great ideas about how to improve the Java releases, but also you have the engineering discipline to produce great patches and do the right testing and follow the process so that you can keep the Java platform stable and a great platform for other developers. I think a great success story for all of us in OpenJDK contribution is one of my good friends, Johann Voss, who is one of the primary contributors to OpenJFX, which is a sub-project of OpenJDK focused on JavaFX. He's also the CEO of Gluon, and he's got a very busy life, but he finds enough time to be active, contribute patches and make great improvements to the JavaFX code line, which is something near and dear to my heart. So if you're not already a contributor to OpenJDK, become one. 
If you have friends who you'd like to get to contribute to OpenJDK, this is a great way of introducing them to the process, and I think this is a great start for folks on how you can build and grow the number of contributors, which will make the overall Java platform more vibrant and better for all of us. The second one is, of course, we've all been stuck home during this pandemic, and I think I probably, probably all a year ago, thought it would be quite a bit different. This is kind of how I pictured myself after the Apocalypse and after, you know, travel was ended and we had to go forage for food just down the road with my dog, or apparently I have a cat now. But apparently this wasn't the reality for most software developers. Instead, we ended up being locked up at home with elaborate, increasingly elaborate setups. This is a little bit about my video recording setup you can see with my camera lights and fancy backdrop, which is behind my treadmill desk. And I think we've all found ways to be productive, to accomplish what we need to get done from our home environment and are quite fortunate as software developers that this is something which we can all do and be successful at and maintain pretty good employment with. But this isn't the case for everyone, and how can we better help folks to fight COVID? And some members of the Java community have figured this out and done great contributions with things like the COVID Data Explorer, which is done by the DLSC. And this is something which is actually written as a Java FX application. I'm going to show a demo of this running in J-Pro, which is a framework which lets you run Java FX inside the browser. The J-Pro guys do a really good job with this technology, so you can just pop it right open in your favorite browser. Here I'm running Firefox. So we have Java FX running right within the browser, and we can check out the COVID data on a bunch of different countries. So here we have Germany, Switzerland, and the United States. If this was a contest for a race or economy or who could sprint the farthest. As usual, being very competitive in the U.S., we are totally winning the COVID race. So kudos to us. Unfortunate for a lot of us, though. And this allows you to better zoom in on the data, get an idea of what the trends are. You can add in different countries, which you want to explore, like maybe check out how our friends in Japan are doing. So a very nice visualization. And as software developers, there's a wealth of data out there that folks can see and take a look at. Without good visualization tools, without the ability to actually look for trends and look for what's actually happening, it makes it a lot harder to make wise decisions. And I think one of the things which we can help as software developers and the Java community is both find building great tools like this, which help to analyze the public data out there on COVID and other health issues, and also to amplify on social and in other channels about the great work which the Java community is doing to help to push back against pandemics like this. So again, another great thing which we can all do to make the world a better place. And this is a great example of Java FX technology in action and the Java community kind of rising to the occasion. I want to take a quick minute here to point out that all of the slides for the talk are posted at the URL you see here, and also my employer, JFrog, is going to give out free t-shirts to FOSDOM attendees. 
So we're going to raffle off five t-shirts to folks who are interested. This is our limited edition liquid software t-shirt. So come say hi at our show notes page, which has all the info about the slides and stuff here. No need to take notes. Just either scan the QR code or pop in the URL bit.ly.fosdom2021.jfrog. So number three, I think this is something which is really important because we're at the Friends of Open JDK room, which is sponsored by the FooJay community, which is this is a great community resource, which has recently been built by great folks in the community who care a lot about this technology, like the organizers of this room. And it's designed to be a place where you can share information about different Java trends, the best use cases for using Open JDK, different polls, what's happening in the industry. And I think community resources like this are extremely important because this is how we bring the community together and we get folks to start having conversations about topics and things which matter to all of us. So join the FooJay community, go to FooJay.io, check out the content that there is. There's pretty much new content being published every day on FooJay today by a variety of different authors from the Java community. And it's a fairly young community, so there's also a great chance to engage. If you want to join, be active, start new categories, add your own content and posts or just be an active advocate for the FooJay community or even come up with ideas on things you'd like to see on FooJay. I think these are all things which are possible and we've definitely been also sponsoring FooJay from JFrog and have joined the board and are helping to push it forward and make it a better place for the entire Java community. Okay, so number four is join and or sponsor a Java user group. I think this is one which is very near and dear to my heart because I'm a user group leader as the Java Effect Silicon Valley organizer. Also, I've spoken at user groups across the world. So whenever I can, I travel and I try to get out there. In the pandemic, you might be wondering, well, aren't our Java user groups still happening? Are we still doing jugs globally? And of course we are. We're just doing the virtually. So you can see a bunch of my colleagues in the Java community at JFrog, Sven, Melissa, and Baruch all doing virtual conferences at various user groups around the world. And I think this is something which a lot of user groups have transitioned to pretty smoothly. They've chosen intentionally to wait until they can get back to in-person presentations, which I think is also a respectable position. And hopefully we'll have vaccines and we'll have other measures in place to make that safe and possible to do. But in the meantime, this is a great way to get out there and show your support for the Java community. And it's something which you can actually do globally. You can join and be active in Java user groups that aren't even regionally local to you, both the community. And also if you're in a company which has some outreach budget, most Java user groups are very interested in finding sponsorships. At JFrog we sponsor a bunch of user groups for their Zoom or online fees, for other membership things they'd like to do, giveaways. And I think it's been very well received because user groups want to continue operating. They want to continue to bring great educational content to the Java audience. 
And as members of the community, if we either by joining a Java user group and showing support by being there, or by helping our companies to sponsor and support user groups, this is one way which we can better grow the Java community and make it a better place for all of us. Okay, so the next topic I want to cover is joining a jug tour. And I think this is something which is extremely exciting and is a great way to get out there and show your support for the Java community. And let me just play you guys a short clip about one of the tours which I was able to do when I was at Oracle to show what we were doing for the Japanese Java community. So basically what we did is we traveled across Japan and specifically visited some more rural areas of the country where we would hit user groups where they weren't in Tokyo, which is a big hub, but they still had very active communities, spoke to their user communities sometimes with translation, sometimes they were also pretty patient to listen to our English and even over the language barrier, we had great communication with the local attendees. And I think this is another way where you can help to do outreach to smaller user groups and communities, make a difference in their ability to engage with their attendees. And if you think about it, lots of user groups just don't have the opportunity to have big name speakers or folks who have a very highly technical presentation plans. And it's great occasionally to have somebody come by who can bring some new interesting and innovative ideas into their user groups and communities. So we were able to travel all the way from Tokyo to the southernmost city, Kumamoto, where they had the earthquakes, but they've recovered nicely since, all the way to the northern island of Hokkaido, which you actually have to go through a ferry to go. So we took the motorcycles on a ferry. And I think we're very warmly received all across Japan and greatly supported by Java user group groups like Jjug, which is the Japanese Java user group community. Now you might be saying, well, it's kind of hard to do this today because of the pandemic. So we know and we've been working with them. And that's the community is a process on organizing a virtual jug tour. So this is something which is, I guess I'm officially announcing it now at Fosdom. But the board members of Fuji, so Azul, J-Frog, Pyara, Datastack, Sneak and others are all working together to organize a virtual Java user group tour where we can visit a whole bunch of user groups virtually, March through April. This is something which I think is a great way to give outreach to not only large Java user groups who have transitioned nicely to the pandemic and want to engage with us, but also smaller user groups or folks who are trying to build their user base. And I think the unique thing about the pandemic now is it's kind of normalized geographies. So if you're a developer and you're in a remote region or you just, you can't physically make it out to a large city or you don't want to live in a large city, you now have the opportunity to see great content and presentations online on any night. On any given night, there's a virtual user group happening somewhere in the world which you can learn and engage with. So I know that in the Bay Area between San Francisco jug and Silicon Valley jug and the Java effects Silicon Valley Java effects user group I run, pretty much every week you can go to a different Java user group and find a great speaker. 
Now you can do that globally where you don't have to be in a big metropolis, you can actually be anywhere and we're taking that on the road and we're going to do a whole set of unique content, different presentations, a couple each week for a couple months and you can follow along, join the tour, we'll have a page up on Fuji to highlight the tour and if you're interested to either participate as a speaker or as a user group host or just find more info, contact my colleague Ari Waller who's one of the folks who supports the Java user groups at JFrog and he'll send you more info. So that's super exciting, we're very pleased to be able to announce our virtual Fuji jug tour. I think the next big thing which I'd recommend everybody do right now if you're not already following a Java champion, there's a lot of great folks who you can follow and engage with on Twitter and this is again a great way not only during normal times of during the pandemic where you can be super active support folks in the Java community like my friend Johann Voss who I mentioned who is a core contributor to OpenJFX like Heinz Geburtz who is one of the founding Java champions and also the organizer of a great conference. Trisha Guy who is a developer advocate for JetBrains and again one of the great community members I visited her in Spain when I was traveling through a motorcycle tour and she was a great host. Simon Maple who runs the virtual jug which is again one of these great resources where you can always, they were doing virtual before virtual was cool and now they're still doing virtual and they're the best at it. Kirk Pepperdine who is the world expert in Java performance tuning. Andre Amore, Fabiani, Linda Vander Paul and a whole host of Java champions so I'd recommend go to the Java champions Twitter handle and then see who the Java champions Twitter handles following should be no surprise it's mostly Java champions and go down the list and follow a whole bunch of Java champions and this is a great way to bootstrap your social presence in supporting and engaging with the Java community and being kind of that taking your career and your basically your social networking and your job growth to the next level. The next thing I'd highly recommend is we are all pros now at Slack. We use it for work, we use it for conferences and you should be a member of a Java Slack channel if you're not already and these are three great open Java Slack channels that anybody can join. The first one, Foojay is the official Slack channel for the Foojay.io website very community friendly as I said growing community so you can have a huge voice. This is where all the decisions get made on the content on Foojay things like the virtual jug tour which we're organizing and it's a great way to engage and just kind of be part of the community. The Java specialist Slack was started by Heinz Kabutz. It is the largest collection of Java developers globally so Heinz has done a great job curating and bringing forth one of the biggest Slack channels in the entire Java community. It's also the place to find out about cool conferences and to talk about random Java topics and just to pick the brains of geniuses like Heinz and others who regularly answer questions and engage with the community. The last one I'll put here is the virtual jug Slack which it's the discussion area and kind of how you engage for virtual jug but it's also a great resource to just engage with other folks in the Java community. 
I would highly recommend joining all three of these Slack channels. This is something where the more engaged you are and the more ways you find to interact with the Java user group community by kind of joining different efforts like this, the bigger the growth is and encourage both you and your colleagues to join, engage and you know if you have a technical question all three of these Slack are great places to ask it. There's experts who are happy to answer and it's a really open and friendly environment. Number eight is joining the JCP, the Java community process. I think this is something which a lot of us know about but we just overlook that it's entirely free to join as a member of a Java user group and as an individual. I think this is something where if you're a jug member you should definitely join the JCP, join the process, have your voice in elections, review the proposals and the specs which are upcoming and this is a great way for you to engage with what's upcoming in terms of new releases, new JSRs and new specifications which are going to define the next standard for Java and the future of how things go forward. So again this is something which I think everybody can and should get involved in and it just provides a lot of opportunities for everybody to have a voice in the community and affect the future of the Java platform. Something which I think everybody can and should do on a regular basis is just be active in sharing your knowledge, writing articles, writing blogs. You can start this really simply by micro blogging, sharing things on a personal blog site, sharing things on websites like dev.to which are open for anyone to post on and hopefully you'll get picked up by and some of your articles will appear in places like Dzone, InfoQ or Java World and one more place which I think is probably the best place to get started right now in the Java community. Shameless plug for our favorite new community project, Foojay. And Foojay today is a great way to put your blogs out there, very open to recent occasion if you just want to repost blogs which you posted on your personal site and I think this is a great way to be heard by a larger voice and to contribute to the growth of a new and budding Java community. So get out there, share your knowledge, write articles, write blogs and share them broadly. Don't just hide them on a personal website. Share them and syndicate them on some of these great resources. And the final one which I'm going to recommend everybody here do is participate in an conference. So I think this is perhaps my favorite way to engage with the Java community which is we're at a conference right now and I think FOSTA is kind of unique in my opinion in that when you come to FOSTA it's a very open environment with just the level of engagement and all the folks who are in the room are involved in open JDK are active in projects and are extremely, they love the Java platform. I think it produces an environment where everybody is a part of the conversation so even though there's one speaker often the conversations in the room especially towards the front of the room are actually more interesting than what's happening purely on stage. And I think unconferences bring this to the next level where everybody is a speaker. 
So at an unconference you have a collection of folks who come together, anybody can join so it's open, you just have to invite yourself in, sign up early, show that you want to be active and engage with the community and you'll be sitting with the likes of Heinz Kibbutz who is the founder of J.Crete and runs one of the biggest unconferences on the island of Greece each year with Jay Alba in Scotland which is another eight great unconference which kind of was an offshoot from J.Crete. Jay Spirit in Germany which was started by Sebastian Daschner, a Java champion and this is run in a refinery near the border of Austria and Germany and again a really great unconference with of course great spirits and great spirit from the attendees as well. And Sebastian and I also started an unconference in Japan called Jay Olsen which again has a great following and a lot of active engagement and Jay Alba also made the switch to virtual and is currently accepting registrations for the virtual unconference that they're going to conduct in May. So if you can't wait for travel restrictions to be lifted you can attend a virtual unconference and hopefully soon we'll all be traveling and participating in physical unconferences and I hope to see you at an unconference somewhere in the world. So I hope through these tips you've learned different ways which you can engage with the Java community which you can show that you really care and support the Java community. One thing I would actually highly caution against is you should definitely not write a book. So I'm half joking there are some really great benefits to writing a book and I think the satisfaction of sharing your knowledge should be worth it but if you think that it will make you money then you'd probably be better off flipping burgers. So I think there's its own satisfaction writing books and I was joined by Johann Voss, James Weaver and a whole host of great Java effects community members including Gail and Paul Anderson, Bruno Borges, Tony Apple, Weichi Gao, Jonathan Giles, Jose Pereta, Sven Reimers, Eugene Reizikov and William Antonio Sierra. And we put together what we think is the definitive guide to Java effects. It's a great community resource. It's written by the Java community and hopefully you find this book valuable. You found the talk valuable and as I mentioned earlier you can find this presentation at the link that you see here. You can also win some awesome JFrog t-shirts and with the limited minutes I have left I'm going to open it up for Q&A in the room so we'll switch over to Q&A for maybe the two minutes or so. I managed to shave off my talk and I'll join you guys in the hallway afterwards for anyone who wants to join me and chat more about the best ways to engage the Java community. So thank you guys very much for watching and enjoy the rest of the Friends of Open JDK video. Bye.
|
Foojay is all about the community helping to take Java forward, so as an attendee of the Friends of OpenJDK FOSDEM devroom you are already on your way towards making the Java community better! But what can we all encourage our friends and colleagues to do in order to make the Java community more vibrant, active, and welcoming? In this presentation, you will learn all of the insider secrets on how to support the worldwide community of 12 million Java developers.
|
10.5446/52787 (DOI)
|
5 tips to create secure Docker containers for Java developers. But first, we need to talk about containers. Because what is a container actually? Well, if you look at a container as a real world thing, a physical thing, it is a receptable for holding goods. Basically it's something that holds an object or a liquid or whatever. It is a portable compartment in which freight is placed. Basically that's how you describe a container. If you look at containers in real life, it can be anything like a soda can for holding your sprite or your coke. It can also be the airtight container that you use to put your leftover food in. However, it also, and that's what people think of containers, can be a shipping container. You can ship all sorts of goods in a shipping container and it's placed on a train or a boat ship, I have to say. Or for instance, a truck. All these things are there for the same purpose, to hold the stuff that they contain, hold that in the exact same or as good as possible way as they were when they were put in. Think about it. Your soda can or your beer can is there to preserve the flavors and the bubbles in it. Same holds for the airtight container to keep your food as good as possible over time. So you can store it, you can ship it. For the bigger container is the same thing. It makes sure that if you, for instance, have your household in it because you're moving from one country to another, you will receive it in a unharmed way. So basically it keeps the containment safe. It's a layer of protection. And if you look at it from that way, it is some sort of boundary from the internals that you want to preserve towards the outward. So the influence from the outworld, outworth world cannot get into the content. For all three the same things, it holds the same. However, you need to pick your container accordingly. For instance, putting your beer without a can into a shipping container doesn't make much sense. So a container is there for a specific reason. If you see this picture, and that is what a lot of people think, if you put a, if you think on it as a software concept, we can put software in a container. However, the difference is that although containers here are there for safety, a container, putting software in a container by default is not so much safe. It doesn't mean if you put your Java program in a Docker container, for instance, by default, it is not safe. However you may think of that. And let me get to that because this is that container with my Java program in it. There is only a difference because this was the airtight container. And normally what we want with that airtight container, we want to make sure that the oxygen comes, not comes in contact with the food that's in it. However, if we have a Java program in it, most of the time we need to interact with the outside world. We need to make sure that we can connect with a database, with a data source, with the user to get the user input. So there is two way traffic from the content to end the outside world. So what I want to say is you need to pick your container in a good way, but also putting software in a container doesn't make it safe by default. My name is Brian. I'm a developer advocate and longtime Java engineer for currently working for Sneak. I'm living in the Netherlands and I'm doing a bunch of stuff for the community as well. 
I'm currently one of the leaders for the virtual jug and for the local Utrecht jug, which is one of the cities here in the Netherlands, and I'm also co-leading the DevSecond community, which is a community that looks into application security specifically to make applications more secure from a developer perspective. Last but not least, I am an Oracle Groundbreaker Ambassador for a couple of years. But let's go into containers. And I specifically go into Docker containers because Docker is still growing. Docker grew tremendously and over the years, Docker is the go-to source to container as your application. It currently has about one billion weekly downloads of container images that you can use to build your own container on top. Although there are many other ways to build containers, I will keep it to date to Docker, but many of these rules can also be are also applicable to other ways of building a container. Again, I'm specifically looking at the combination for Docker containers and Java development. So this talk will give you five tips not to fill your security if you want to Dockerize your Java application. Let's go on with the first one. As you know, we built containers on top of other things, and we call that the base image. So we need to choose the right Docker base image for your Java application. And that sounds quite logical. However, well, let me just show you. In most cases, this is the first line in your Docker image or in your Docker file when you build a Docker image for your application. It's from something. In this case, I say from Ubuntu. And that means I'm picking the Ubuntu, in this case, without any tag. So the latest Ubuntu image from Docker Hub, and I build my stuff on top of that. And just like with a house or a building, the foundation of that building is important. So is it with building your own Docker images? Last year, we did some research and we pulled the last we built the latest or the latest version of the 10 most used images from Docker Hub. And we scanned them with the tooling from my company with Sneak to see if there are vulnerabilities in there. And as you can see, all of these 10 have vulnerabilities. And it goes from 31 from Ubuntu to 567 on the node image, node base image. It were the latest images. And that means there is no specific tag on it. What happens is, for instance, if we take the node image, the node image was now not so much vulnerable itself, but it's also based on another base image. In that case, it was a Debian image, a somewhat older Debian image. And most of these vulnerabilities could be traced back to the operating system layer. So if we look at the operating system layer, and we look at vulnerabilities in the operating system images, this again is already a year old, but it just paints the picture for me. You can see the source in the bottom, shifting Docker security left. But if we looked at the latest version of different images from different operating systems, you see there is a bunch of different things in it. So for instance, the Debian latest at that point had 55 vulnerabilities. However, if I took the specific stretch slim image from Debian, it already decreased the amount of vulnerabilities. This means looking at your operating system layer is the foundation, and it's just as important as looking at your application vulnerabilities if you want to ship that container into production. 
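To make that concrete, here is a minimal sketch of the kind of first line being discussed; the tags are illustrative rather than the talk's exact choices, but they show the difference between blindly grabbing a full, floating-tag distribution and deliberately picking a slimmer, explicitly tagged base:

    # naive: floating "latest" tag and a full distro userland you probably don't need
    FROM ubuntu

    # more deliberate: a slim, explicitly tagged base chosen after checking it for known issues
    FROM debian:stretch-slim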
If we mimic this to Java images, and we look at the images from a friend's from adopt open jkk, you see that if we look at the different flavors of that image, the amount of vulnerabilities that come with the operating system heavily differ. If I choose the latest version of open jkk 11, which is based on Ubuntu, we only have 25. But if I specifically check the Debian one, I will have 75 vulnerabilities. And these are all from binaries that ship automatically with that operating system layer. The question you have to ask yourself is, what do I need? Do I actually need a full blow and operating system to build my application on? Probably not. So picking your the right foundation is essential for creating a good and secure Docker image for your Java application. But let's get into that deeper because use only what you need. If we look at a Docker file, for instance, and I do it the naive way, and I will build it up to a better way, trust me. Typically it looks something like this. I'm building a Docker image for my spring boot application. And the first line says I'm taking a random Maven 3 open JDK 11, which I found on Docker Hub. As you can see, I can create or I create a project directory. I copy all my sources to that. I'm setting it as the work directory. I'm calling Maven package to make sure there is a package and I spin it up by doing a Maven spring boot run. Okay, fair enough. This works. This works perfectly fine. But if I look at the size of that image, that image is over 800 megabytes big. And of course, that spring boot application is already large. But I bring a ton of binaries in that I probably don't even need. Actually, I'm not sure what is actually in that open JDK 11 base image. And that makes you think. The build image doesn't need to be the same thing as the production image. I do not need Maven in my production image. Think about it a few years ago or maybe a decade ago, you would build the war or the air and you deploy only the air into your web server. Why are we building Docker images in a naive way? Hopefully you're already, you're not doing that. But you don't need Maven. You don't need the JDK. You just need a RIA Java runtime environment. So why not slim your production image down to what you actually need? This Maven full JDK and your source code are a liability if that is the stuff that is in production. If somebody gets in and can alter your source code and Maven is already available plus the complete JDK, we can, while that thing is running, we can rebuild the stuff probably or eventually and make it available to your customers. That's not what you want. All we do here is we create a multi-stage build. We use the Maven 3 Open JDK 11 base image and what we did before as our building block, as our building image. We create the artifact that we want and if you see in the second part, we take an Open JDK 11 JRE Java runtime environment image based on Alpine, which is the smallest one. And what we do is we create a directory and we basically copy only the JAR file that was in that first image to that second image, to my final image. From that point, I only need to set the work directory and go to and just call a Java minus JAR argument on my executable JAR. And this works perfectly. Now I know that I do not have Maven in it. I do not have the JDK in it. I do not have my source file in it. 
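A minimal sketch of the multi-stage Dockerfile being described follows; the image tags, directory names and jar path are illustrative assumptions, not the talk's exact file:

    # build stage: full JDK plus Maven, used only to produce the artifact
    FROM maven:3-openjdk-11 AS build
    WORKDIR /project
    COPY . /project
    RUN mvn package

    # production stage: JRE-only Alpine image that receives nothing but the jar
    FROM adoptopenjdk/openjdk11:alpine-jre
    WORKDIR /app
    COPY --from=build /project/target/app.jar app.jar
    CMD ["java", "-jar", "app.jar"]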
And I have an Alpine image, which is a very slim debian or a very slim Linux based image with the JRE so that all the binaries that are there by default, for instance, from a Lubuntu or a Debian image are not there in my production image. The result is if I check this one on my computer and I see how large it is, at this point is just slightly less than 200 megabytes. And the last one was over 800 megabytes. That also means that faster startup times and it just does not contain stuff we do not need. Think about it. If you get a package from Amazon and your package is all filled up or the box is all filled up with bubble plastic and all that sort of stuff, it's a waste. But in this case, it's not bubble plastic. It are binaries that can be used. So don't having a binary in it, having a binary removed that you're not using can also not harm you. If you think about it, that of everything that is in the top part in the building image will stay there. So for instance, if you need to do a Maven build and you need to do a username and a password for a internal repository, you do not want to have that somewhere in a cache in your production image. This way you are sure things in your build image stay in your build image and you only copy the things you need into your production image, which is a very, very safe way to make a minimal based image. Talking about minimal or least, let's go to the next one, which is the least privileged user or apply the least privileged principle in this. What does that mean? Well, again, we're going back to that Docker image I showed you before. All right, I already have the multi-stage build and that is perfectly fine. However, in my production image, so let me just highlight my production part, so gray out the build part, I am not doing anything with users. By default, this means that my Docker image runs as the root user and that is probably not what you want. Probably there is a property files in it with the credentials towards your database because that application needs to make a connection or you do not want to have your user to have more privileges than needed because that can turn out quite weirdly. You can do that by just creating another user and I do that over here. Okay, let me highlight the lines that are important here. I get it. So the yellow lines are important here. Note that this is an alpine image, so it might be a little different if you want to use an Ubuntu-based image because, well, simply the commands are different. But what I do here, I add a group called Brian Vermeer and I add a system user and that system user doesn't have terminal access and it's also called Brian Vermeer and connected to the group Brian Vermeer. All fine. I set the work directory, I copy the jar over and then I make sure that my newly created user is the one that owns that project file. So what does it mean that only my user can do something in that project file? You can do it differently if you want to have more or less access in that case. Then there's something important. The last yellow line which has user Brian Vermeer, in this case I make sure that I call that user first and that all the points after that are called by that user. I specifically select that user. If you're not doing that, you're still running as root. So make sure you select that user before you end or you run certain things that might not be, that must not be run by any other user. 
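A minimal sketch of the production stage with such a user is shown below; the user and group names are illustrative (the talk uses the speaker's own name), and the Alpine addgroup/adduser flags are one common way to create a system user without a login shell:

    FROM adoptopenjdk/openjdk11:alpine-jre
    # create a system group and a system user without terminal access
    RUN addgroup -S javauser && adduser -S -s /bin/false -G javauser javauser
    WORKDIR /app
    COPY --from=build /project/target/app.jar app.jar
    # give the new user ownership of the application directory only
    RUN chown -R javauser:javauser /app
    # switch to the restricted user before the entry point runs
    USER javauser
    CMD ["java", "-jar", "app.jar"]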
So create a new user with smaller privileges, make sure that user can access and actually do something with your new program and make sure that you call that user. So the entry point in this case will be called by my newly created user that only has privileges on that project setting. Well, we're talking about containers and a lot by containers, but we also need to talk about our application and we need to scan our, both our Docker images and our Java application during development if there are any problems with it. And these problems can be okay now, but it can be wrong later if a vulnerability, vulnerability will be found over time. If we look at for instance, things like build packs, well, let's skip this one. Let's, well, as you can see that different build packs, which are also base images, they differ from how many vulnerabilities they have in the beginning, but I already showed you that. But we also asked in that open source security report in 2019 and in 2020, we asked them, we asked people like, when do you scan your Docker images for operating system vulnerabilities? Unfortunately, 50% of our respondents say we do not. And that is, that is an easy, that is a thing that can be easily solved with for instance the tooling that SNCC provides you. Of course there are other tooling, but everything I will show you today is free of use. So you can try it today, tomorrow or whatever you want. But I'm just getting into that you should know what is in your image. So for instance, if I do, if I pull an image like like the open latest open JDK image, I can scan it like this. Let me just show it to you. I think that is even easier. I'm just scanning the adopt open JDK, open JDK 11 latest image. And by doing this, I already pulled the image. So that's already on my local machine, but you don't even have to do that. As we can see this base image that we're using has 27 found issues. And we already see that there are high severity, medium severity, and even low severity if you go further back to the top. So by investigating on forehand, we can see that this might not be the one we want to use. If you are a Docker pro user, you already get this info from Docker desktop as SNCC provides that info to you. However, if we look at that, we can also check our own containers. So if for instance, I created the container already that I showed you before with that multi-stage build, and I do a SNCC container test on example version two, I can also make sure that the file is there. So my file, that's the Docker file that is that is connected to this container. So we can give you, if needed, give you remediation advice. And what we do over here is it will scan the Docker image to see if there are key binaries and base images that have problems and if it's possible to remediate them. As you can see, it tested my container and it didn't find any vulnerabilities because, well, it was based on the Alpine, Alpinejerry image. So what we can do as well is we instead of only testing our stuff, we can also monitor our examples or our images. By doing this, it will analyze the dependencies we have in our Docker container and it will share it with, it will connect it to your SNCC account so you can see it over time. If there is a new vulnerability found, you will see it in your console. So what you see now is that we have your container now monitored and if there are new problems, we can actively pin you. But that's not all because next to the Docker container or the container, whatever you use is your application. 
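Before moving on to the application side, here is roughly what the scanning workflow described above looks like on the command line; the image names are illustrative, and the commands assume the Snyk CLI (transcribed above as "SNCC"/"sneak") is installed and authenticated:

    # scan a candidate base image before building on it
    snyk container test adoptopenjdk/openjdk11:latest

    # scan your own image; passing the Dockerfile enables base-image remediation advice
    snyk container test myapp:2 --file=Dockerfile

    # keep monitoring the image so you are alerted when new vulnerabilities are disclosed
    snyk container monitor myapp:2

    # and, as shown later in the talk, scan the application's own dependencies from the project directory
    snyk test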
Your application is also an attack factor because if this is your application, how much of that application is actually the code you wrote? It's probably somewhere like this. Right? This is the code you wrote. You reviewed it, you had pair programming on it, whatever you had, a lot of eyes working on the code you actually created. But the rest of the code, of the binary of that jar, is probably heavily depending on frameworks like MySpringBoot framework, brings in a lot of dependencies and have no clue if these dependencies do have problems or not. On top of that, we have another circle that is your container but we also, we already talked enough about that container. Because what we need to know is that new vulnerabilities are there and there are more each and each years as you can see and it's growing. And if we look more closely to that, you see that in the middle one is Maven Central, which can also look at NPM because these are two biggest ecosystems. Most of the time, the problem is not in the direct dependencies but in the indirect dependencies. Though your dependency is bringing in other dependencies, bringing in other dependencies and deep down, there is an issue. And you might think, yeah, okay, but I am secure, what can possibly go wrong in my application? Let me show you that. So here I'm having a Spring Boot application and that Spring Boot application is not very interesting. If you look at the Spring Boot application, what it does, it is a grocery list, a grocery list that has milk for 50 cents or a bean for 50 cents and milk for $1.09. Interesting, right? Not at all. Also the item is not very interesting because the item just is a pojo with an ID, a name, a cost, as some getters and setters. However, if we look at the item repository, then it starts to get interesting because I am using Spring Data and Spring Data REST. With Spring Data, I can use the, well, I can extend the crud repository, which gives me a bunch of power so that I do not have to write all the crud logic myself. Simply by inserting a find by name like this in this interface, it will help me with the parameter name. So I can just give it a name and it will find my grocery by name. That's cool. But with this annotation and with Spring Data REST, it will transform my crud repository into a REST repository. And that is nifty because now I can use it to, well, as a prototype to go on. But as we know, prototypes will not always stay prototypes. And I will show you what can go wrong over here. I will get redirected to my Hell Browser, which just shows me what a certain endpoint can do. For instance, if I do the endpoint items slash one, and let me enlarge this a little so you can see it, you will see that it will bring my first item in my grocery list. It works with two. It will give me my second item. Also I can do searching like over here. I can search, use define by name from my crud repository and give it a name, beer. By doing that, it will return me the result like beer for 599. Interesting. However, there is an issue with this specific version. If I'm showing you this curl request, I'm doing a curl patch. With a content type, which is perfectly fine, it's JSON patch. And the buddy is this part. Until where is the end of my over here. And as you can see, this buddy is just based on JSON. Only within this JSON, I am utilizing the Spring Expression Language, SPL. And with the Spring Expression Language, I am able to interfere with objects. 
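The request being described is roughly of the following shape; the endpoint, item id and payload are an illustrative reconstruction rather than the exact demo command, but they show how the "path" field of a JSON Patch operation is where the Spring Expression Language ends up being evaluated by the vulnerable library version:

    curl -X PATCH http://localhost:8080/items/1 \
      -H "Content-Type: application/json-patch+json" \
      -d "[{\"op\":\"replace\",\"value\":\"x\",
            \"path\":\"T(java.lang.Runtime).getRuntime().exec('cat /etc/passwd')/name\"}]"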
But I'm also able to create new objects like the runtime, like the runtime, get runtime. And what I do, I execute something. So I execute the value, etc, slash patch wd. Basically what I do, I do a cat on that A, E, T, C, pass wd. And I redirect the input stream to an output stream so I can do that. So I can show it to you. But that curl request is fired up on this endpoint. And this endpoint was the endpoint to show me the first item in my grocery list. Interesting, right? So if I copy paste this, and I execute it, you will see that this is the internals of my pass wd file. So even though you might think you're not vulnerable, we now see that by using a certain library from a version that was vulnerable and already fixed like long, long time ago. Don't get me wrong. But because you're using it, you didn't write it yourself, you are vulnerable. And the attack factor goes on and on. Because if I do this in my container, and I did not have, or I run it as root, I will have the same result. But if I have a more privileged user, or a user with a smaller scope, then this would not be possible. This means that you do not want to only look into your application or only into your container. You need to look at both. Because if a vulnerability exists, or if a hack exists, it's not just one thing that went wrong. It's a domino way of things going wrong. And people will find out over time that there is maybe a new exploit. So you should patch in all sorts of levels. It's scary, isn't it? By just having in the wrong version of that library. You can get around that by doing a sneak test on your application. As you can see, there are a bunch of issues in this application. And some issues have direct fixes and some don't. This thing, you need to build your application to be rebuilt. So build to rebuild. And what does that mean? Well, say we have your Java application in your Docker container. It's the entire container. It's a Docker container just for the sake of argument. Well say we do not have one, we have three of these applications. We have three pods or three instances running, and that's all cool. Now we found for some reason we find out that the third instance, there's somebody in it. It's hacked. Or we see that things go wrong. It's weird. The first thing you want to do is to make sure that single instance is demolished. It's gone because, hey, the immediate threat is gone. So we just blow it up. But we must be able to blow it up. And yet again, automatically, we want to spin, maybe we want to spin up a new version. Or create, fix the stuff and deploy a newer version like this over here. This means that individually, your pods or your instances need to be developed in such a way that you can rebuild. You can easily tear them down instantly and you can rebuild them if you need it. So at that point, the easy threat or the immediate threat is gone. So if you have a Java application that contains data and stores it in a database, make sure that it's not part of your container. Also things like files or log files shouldn't be part of your container. And again, if you use caching, it is cool caching, but be sure, be aware that the cache will be gone if you terminate that container. So it should be, all of these should be outside of your container. Or for instance, if we talk about cache, it should be automatically inserted again. For instance, if you use a in-memory data grid like, for instance, Hazelcast, I have no stock options for them. 
But the point is, make sure that your container is autonomous and it can work: you can demolish it and you can spin it up every single time, and it doesn't affect the data you need to serve to your clients or that you need to get from your clients. So make sure that you can rebuild this. 20% of the Docker image vulnerabilities can be fixed just by rebuilding the image. And that is because we are using things like latest. If you look at this, we're using latest, and the latest now is not the latest tomorrow. So you can do two things: you can pin a specific version, so you put the hash on the end that connects to a specific version, so that every time you build it you rebuild that image from scratch with the same base image, or you rebuild it over time. The same goes if you're using apt-get or something like that. So if you're doing this, even if your application did not change, you need to rebuild your container often. If you do it like this, you need to rebuild it because the latest version of Ubuntu, or whatever image you're using, has changed and probably has fixes in it. And if you use a static version, you need to update that version as well to see if there are newer versions of it. And well, with Snyk monitor, you can get alerted for that. But that's another thing. If you rebuild your application, make sure you skip the cache, so docker build --no-cache. Make sure that even if you have a cache over there, you skip that cache, so you always have the latest version from Docker Hub. All right, a small recap. The five things that we discussed today were: first of all, we need to lay the foundation correctly, so choose the right base image; don't choose it blindly, but do some investigation, and I showed you how you can do that with the Snyk tooling to check whether a specific base image contains vulnerabilities or not. Make sure you only take from that base image, or the build image, what you actually need: use multi-stage builds and use the Java Runtime Environment, and not the complete JDK, Maven and your source files, in your production image; that is not needed and it is an attack vector. Of course, don't run your application as root, but make sure you have a least-privileged user that can only do the things that user needs to do. I showed you how you can scan your applications and, on top of that, your images, and make sure you scan and monitor them during development but also when they are in production, so you can actively be pinged if there is a new problem or a new solution available. And last but not least, if you design an application to be cloud native or to run in a Docker instance, make sure you can easily rebuild that application: no storage of data in that container, because that will be lost if I terminate it; make sure that's outside of the container. So build to rebuild, and rebuild often. All right, this was my talk. Thank you. Everything I used in the tooling over here, you can use for free. Go to snyk.io, and if you have any questions, feel free to ask them or to ping me on Twitter. Thank you.
|
Docker is the most widely used way to containerize your application. With Docker Hub, it is easy to create and pull pre-created images. This is very convenient as you can use these images from Docker Hub to quickly build an image for your Java application. However, the naive way of creating custom Docker images for your Java applications comes with many security concerns. So, how do we make security an essential part of Docker images for Java?
|
10.5446/52989 (DOI)
|
And now, we will have a final talk. Aurora is going to give a presentation on Galina Balisova, who was the main designer and main architect of the Soviet space program. Aurora, the stage is yours. Thank you all for coming here to this talk that was just scheduled yesterday, so I'm really surprised that so many people showed up. Just in case you're surprised, I gave exactly the same talk last year, and I'm just doing it again because people were asking me to do it again because it was on a stage at Calisone and it was not recorded, so a lot of people were sad that they missed out and they wanted to see it again, so I'm just doing the same thing as last year, same procedure as every year. Yes. So, the talk is about Galina Balisova, who is an architect and who worked for the Soviet space program for close to 40 years and designed all the major space capsules and spacecrafts of the Soviet space program. And this is a quote by her. She says that space stations are architecture built amidst zero gravity, so I'm going to tell you a bit about her life and then there's going to be a lot of nice concept art of the spacecraft and some pictures of the finished products. So Galina Balisova has a very ordinary Russian life. She was born in the early 30s. Did I write it down? Yeah. She was born to an aristocrat family that lost everything in the revolution, so she grew up in working class conditions, but because her father had a background in more classical literature and art, he sent her to painting lessons from a pretty young age and while she learned to paint from a formerly famous painter in pre-revolution Russia, she discovered that she really liked designing things. So after she graduated from high school, she decided to become an architect and she was one of the few women who got into the Moscow architectural institute, so the main architectural university in the Soviet Union. And she was mainly accepted because the professors thought that her art was really good and they really liked the concept. And so the thing is how did she end up designing the space program? She married an engineer who was working for the space program and at some point she and her husband moved to Korolev or back then Kaliningrad, which is one of the secret cities of the space program. It's commonly known as the Star City. It's near Moscow. It's where all the cosmonauts and nowadays also astronauts go for training. And she was looking for a job basically because she was staying with her husband and they were in the secret city and she needed a job and she got a job as the only architect at OKB1, which was the experimental design bureau of Sergey Korolev, the main figure behind the Soviet space program at the time. And in the beginning she was employed as an architect, so she did the usual things. She was basically a whole city and a city needs planning, so she designed things like the cultural hall, normal apartment buildings for the workers. And yeah. Then we got this. This is an American spacecraft. I think this is the Gimini. And after the first space flights by Yuri Gagarin, the space programs of the Soviet Union and the United States were competing to bring people into space for longer time periods. So this Gimini spacecraft was in space for 14 days. And you can kind of see the problem. You have two people in there for 14 days. And this small thing, they can't move. There's no space. It's really not a nice experience to be stuck into this thing for two weeks. 
So Sergey Korolev, the chief designer of the Soviet space program, decided that the cosmonauts needed a more normal life, so yeah, that they could be more comfortable in space while doing their work. And they basically designed this, which we still see in use today, the Soyuz. The Soyuz is compromised to three capsules. You have the technical module in the back. You have in the middle, it's the landing capsule. It's the thing you usually see when the cosmonauts or astronauts return to Earth. That's the thing that returns. And the important thing for this talk is the thing on the left, that round thing is the orbital module. So the cosmonauts start in the landing capsule. And once they are in space, they can open a hatch between those two. And then they access the orbital module where they carry out their work, and which is also designed to include some leisure space so you can relax while you're in space. And that was Korolev's idea. So he sent his engineers to work, because they were only engineers working at this design studio. Also I think almost all of them were male engineers. And they came up with something really terrible, which you probably know from open source software, because engineers are not really good at designing things sometimes. Even though they think, well, this makes perfectly sense. This makes perfect sense. It's all there, but it's terrible ergonomics and all that. So basically, sadly, there's no picture of what they came up with, but we are told it was very symmetrical, and they painted everything bright red. That's not really a relaxing atmosphere to spend several weeks in space. So yeah, Sergey Korolev got mad. He screamed at some people, and he sent them to get him somebody who could do this job to design this thing properly. And that's when one of the people working for him in the bureau remembered that they already had an architect on their payroll in OKB1, so there's no need to hire somebody else. So they got Galina Balashova and asked her to design the interior of the orbital module. And she says it was very hard for her, because nobody really told her anything, because everything was top secret, and she had to have her meetings in the staircase of the building, because she was not allowed to go into the bureau itself, because she did not have the security clearance. So she spent one weekend, and she came up with this, which is very different from what the engineers had in mind. She describes the thought process as, well, you obviously need a divan, she calls it, to sit down, and you obviously need this cupboard thing on the other side to store your things and to have an economical surface. And yeah, not as much ridiculous, but Sergey Korolev, he really liked it. The only thing he didn't like, redesign, and so she made a redesign. This is version two. This was almost approved, except for some small changes. We can see here, this is the final design, which is just some fabric choices and some rearrangements of things. And this was actually built in 1964, and this thing flew to space, because it's a really good design, if you think about it. And the interesting thing is that nobody had done that before, because nobody had been to space before, except in those really tight and cramped space capsules. So there wasn't really any reference point to say, how do we actually design an interior for zero gravity? 
And her idea was this basic spherical design, that we have this cupboard, where there's the instruments, where there's the equipment, and there's the food storage. And on the right side, we have this divan, where the cosmonauts can lie down, which also has storage space, and there's equipment hidden under it. And another important thing is, you need a very balanced weight distribution, because you can put all the weight on one side. So you have to have it kind of symmetrical, but not 100% symmetrical like the engineers did. And she was also thinking about the psychological effects. So you need to choose more friendly colors, and you need to stick to a color scheme. So, for example, the floor is a different color than the ceiling, which you might think might not matter if you're in outer space, and there's no gravity anyway. But your brain needs a reference point, and she figured it out pretty early on. NASA hasn't figured it out until today, I think. So as an anecdotal thing, there's a lot of cases of space sickness with astronauts. Space sickness is basically like sea sickness, just the other way around. So your eyes detect movement, but your body says you're not really moving, something's wrong, so you get sick, and you puke, and that's not really good, because puke and zero gravity is not a nice thing to have. So she came up with this, and this really helps, because you can orient yourself and the effects are less severe. And here's some more sketches. You also need to design for space requirements, if you have one cosmonaut, or if you have two cosmonauts or three cosmonauts, and those are her proposals for these things. So these are all original drawings that were reviewed by the design bureau and approved, so all of these went into production. Problem is we don't really have a lot of things, a lot of pictures of the final result. I'm going to show you some later of the finished solar sceptiles, but most of these were prototypes, and they did not really survive. Yes. So she did everything in watercolor, and this is a very basic design of the cardboard part, where you can see the instruments and the navigation equipment in the upper right corner. There's a drinking fountain to the left, there's storage space for the camera, and there's also storage space for the flight manuals. This is the other thing with the radio and some controls and a lot of buttons, and I don't really know what they all do. But yes, this thing went through several design iterations. So this is basically the design of the first solar spacecraft. It flew to space in that configuration, in that design. And yes, Korolev really liked it, and this was her first job designing a spacecraft, and they were not really sure if they needed her anymore, but they decided to rehire her for the second project, which is what came after the Soyuz. It's the moon orbital spacecraft, so the spacecraft that was designed to fly to the moon from 1964 to 1968, the whole project was abandoned in 1968 because the United States won the space race, they landed on the moon, Korolev died, the whole project didn't really work out, but the spacecraft itself already had a finalized design, they were finalized prototypes. And this time she was not just a consultant, but she was really hired and got a fixed position at OKB1 as a senior engineer in the design team. 
Problem was that she was not really an engineer, but they did not have any design or architect positions in OKB1, and it's pretty hard to just come up with a position like that if you have to stick to five year plans and all that, so she was hired as an engineer, and to make it seem less suspicious she also had to do engineering, so she had to work and do the load calculations and things like that and design position of instruments. So at that time she was a full-time senior engineer working for the Soviet space program, and she stayed in that position until I think 1991, so yeah, 40 years. These are some more pictures of the moon orbital spacecraft, where you can see that she stuck to the basic design that she came up with for the Soyuz, but the whole thing was a bit bigger and you needed to transport more equipment, and yeah, I'm just going to show you some pictures. This is another design proposal for the moon orbital spacecraft, which is a bit different. The interesting thing is that these things look very 60s because obviously interior design on Earth reflected interior design in space, and yeah, she worked alone on this because she was the only architect in the whole thing, and she had a lot of freedom in what she did. In her words, a nice thing about the job was that none of the engineers didn't really understand what the hell she was doing there, so they didn't really interfere because to them it was irrelevant and they didn't really think what she did was important at all. In their mind, she only had to select colors, so she says the engineers were always really mystified when she came to them and asked how are the dimensions of the whole thing, what do you have in mind, what equipment do we need to put in there, because they figured oh, she's just doing the carpet and the thing and nothing else matters, while in fact she was designing the position of everything, the economics of everything. So she had a lot of freedom. Engineers did her last word when it came to technical aspects because that's their job, but apart from that she was pretty free to do how she pleased. And yeah, again this is another sketch for the Moon Orbital spacecraft, so because we have no gravity, she designed this whole thing without clear floor and ceiling so you can access most things from both sides, no matter if you're like from this viewpoint, upside down, but still there was this distinct color scheme so you always knew what way you were looking. And yeah, this was, oh no, that's the wrong one. Sorry. Yeah, this was the Moon Orbital spacecraft. The prototype was built, the final thing was never built obviously because the Soviets never made it to the Moon. And yeah, after that she was rehired, she stayed in the position and got a new project. A new project was the Soyuz T, which is the second generation of the Soyuz, which you can see here. Again, she stuck to the basic design, again with the cardboard thing, she called it a cardboard, it's just her words, and the divan on the other side. And again with a very clear color scheme. And yeah, she was also in charge of choosing the materials, so the whole thing is padded in fabric so that if you hit your head, yeah, it's soft and not just hard metal. And so part of her job also included calling all the fabric factories around the Soviet Union to find a matching fabric that had to be non-flammable, non-toxic, so B1s for the people who did build up here. 
And yeah, also she decided to use enamel for the metal surfaces because it cleans up easier and it's a more pleasant design. And yeah, in her words, the engineers never considered that cosmonauts really need to sleep in space and might need the divan or that they need a cardboard to store things. Damn, it was just this technical thing of, okay, we build a spacecraft that can fly to space, that's all we need. We don't really need ergonomics, we don't really need storage space. But she was working closely with the, did I skip one? Yeah. She was working closely with the cosmonauts, so every time the cosmonauts returned from space she would talk to them and ask them whether there are any problems with the things. Is there anything we could do better? One example is this divan thing, so it's built for the cosmonauts to sleep on because even if you're in space, just floating around while sleeping is not very good because you will eventually hit a wall or something and it's not very comfortable. So the first iteration of the divan was very rough fabric that Velcro would stick on, so the cosmonauts had soon Velcro pads into their space suits and they could just lie down and they would stick to the divan. Yeah, the cosmonauts that came back from that first space flight told her that, yeah, it was nice but we had some problems because we lost our pants because the pants stuck really good to the Velcro and so the second design iteration was just normal belts, so you would lie down and put a belt over yourself so you wouldn't float around. Yeah, and also she designed the whole interior thing, so for example also the ways to, how can we affix the cable covers in there and how do we affix the fabric. The engineers at first just glued the fabric down all around and she came up with a way to make that whole thing easier and lighter by just using tape. So she made the Sawyer space module 9 kilograms lighter just by coming up with a better way to affix the fabric which would have given her a lot of money because all the engineers would have promised a lot of money if they could make this thing lighter and she made it 9 kilograms lighter but she never got that money because the engineers were like, well we did that. She's just an architect, she's not the engineer, it was our idea. So yeah, it's just a basic sexism thing that we also had in the Soviet space industry in the 60s. Yeah, this is the final design for the Sawyer's tee and again this thing went to space, looks exactly like that. Afterwards she was also hired to do the landing module, so not just the orbital module she did before. Same thing, she came up with a basic design with these improved design for the bucket seats so they already had the bucket seats but the color scheme and the ergonomics and arrangement of the instruments are all her work and it's still pretty much used in the same way today. Yeah, just gonna go on here. This is something special, this is the Sawyer's Apollo orbital module. There were several missions where Soviet space crews and American space crews met in space and docked their vehicles together and it was a pretty big thing for cold war relationships when cosmonauts and astronauts would shake hands in outer space. So this needed some design changes because the whole thing was going to be broadcast live from TV. 
So a lot of things would change to have better lighting and to have better cameras in there and also the colors were changed because the first design was with red colors but that did not really look good on camera so they changed the fabric to green. And as she recalls she was really proud when Alexei Leonov is a famous Soviet cosmonaut, he was the first person to take a spacewalk and he was also the commander of the first Sawyer's Apollo mission and when he came back he gave an interview in Moscow and he told the newspapers that well, now that I've seen the Apollo spacecraft and I really like it but I just think our Sawyer spacecraft is really way better, it's so much more structured, it's so wonderfully thought out and she was really proud of that because she was this famous cosmonaut praising her work and saying it's so much better than the Apollo. So now we're getting to the pictures of the real thing, this is Galina Valashova in the finished Sawyer's Apollo orbital module and she did a lot of work in those modules, she had to test everything out before it went to space because it's really not good if you find bugs in your design while you're in outer space and yeah, like I told you she always listened to the feedback from the cosmonauts, they were complaining about other things besides the thing with losing their pants, there were problems with the toilet, it was really uncomfortable because you basically have a small cardboard compartment down there where there's this suction device because you don't want pee floating around in space, that's really not nice and the cosmonaut said it was really uncomfortable so she designed that also, redesigned it and she added mosquito nets because cosmonauts wished for mosquito nets because small things would float around in space and hit you in the head which is not nice and also they wished for a better place to store documents like flight manuals and technical documents because the original place was okay but not really good to reach if you need some book really fast because you're having a situation, yeah, this is her testing out a new belt system for keeping your pants on in space, yeah, also you have to keep a lot of things in mind so it might seem like a simple job but it's not, you also have to choose your materials in a way that no condensate can form because the water from your breath will eventually condensate somewhere and it's really not good if it happens in some electrical modules where you're in outer space, yeah, the other thing is that she also tried to choose colors that would make a friendly impression on the cosmonaut so this is a nice place, I want to be in this place, not everything clinical white or bright red, yes, now we're getting to something that's still in space, that's still designed by here, this is one of the first sketches for the MIR, core module for the MIR space station, from 1967 to 1987 she was on a project to design the original Salyut space stations and this thing was mostly designed by engineers and she just got to this, no, sorry, yeah, this thing was designed by her, she also did the work for the Brurane which is the Soviet equivalent to the space shuttle program and for that one she only did the fabric and color choice but for the MIR she did a lot of things and again to compare to NASA's work at the same time the first Salyut space stations went into service, NASA had Skylab and Skylab didn't have any architects, they thought about hiring some but there was a critique from the astronauts because the 
astronauts were like we don't need architects in our space program, we don't want them, what the hell is wrong with you, so they did not hire any architects and yeah, just going to, the quality is really bad I think but this is the interior of the MIR core module, she decided to create two very distinct working spaces in this module so this is the salon as she called it which is more like the area to hang out in, to eat, to do your normal life stuff while you're in space and so she chose a blue and green color scheme to have a pretty warm color scheme for recreation areas and this is the other part of the station, this is the work area, this is more of a blue color scheme because yeah, so you can focus while you're working and her first idea was that she wanted way more windows in this thing but the engineers did not really like that because there are two ways you can design this core module and one would have been with the way it was designed with this longitudinal axis, the other one would have been to make basically several floors of it but it would have been really impractical to build but if that way would have been chosen you would have been able to make windows on the outside but yeah, so the final design just got some very small windows and yeah, she also had this engineering a little bit job where she also had to do engineering to, yeah, so she could stay in her position as a senior engineer so for example she designed the position of the stabilizers which is the picture you see on the left so that the space station would be stabilized by an outer space and another thing she did because she was the only artist and architect working for the space program she also did the typography design for the whole thing so whenever you've seen a Soviet flag on something the placement was carefully chosen by her on the Soviet capsules on the MIR on the Buran and this for example is a typography of the Buran program and another thing that she did was she did these watercolor paintings because she used to do watercolor paintings in her free time and she decided that the cosmonauts would need something to remind them of their home so each Soyuz spacecraft and also the Soyuz stations had a small watercolor painting by her in a small frame on the wall somewhere so you could look at it and remember how nice it was back home in the Soviet Union and yeah there were landscapes, there were snowflakes, the Black Sea and she was the first artist who got her pictures to outer space which is quite an achievement. Another part of her work included the flight penance for commemoration of the space missions because again she was the only architect that the only artist that worked for the space program that already had all the security clearance that knew what was going on and also they didn't really want to hire anybody else because we already have one so she can do everything so she also did these for 40 years and she also did logos for the missions this is the Soyuz Apollo logo which got really famous and was used a lot for a lot of merchandise. She's still a bit bitter about that because the administration forced her to sign over the rights to the administration because it's socialism after all and we shouldn't have like one artist get all the praise for this this is obviously a community thing which in the end led to some American guy claiming the copyright and getting a lot of money for it and her getting nothing. 
Here we have some more flight pennants for different Interkosmos missions, for cooperation between Japan and the Soviet Union, and one for the Mir space station, one for the Soviets. I'm going to end this thing with some pictures from the ISS, because you can see a clear difference here. This is the interior of the Zvezda module on the ISS. Even though she was forced into retirement after the collapse of the Soviet Union, so in I think 1991, because Roscosmos didn't want to keep her on the payroll, her design still made it to the ISS, because the first module of the ISS was a repurposed backup module for the Mir, so she still designed that one, which you can also clearly see. Even though it's pretty cramped after all these years, you can still see there's a distinct color scheme, with the floor, or yeah, it's not really a floor because you're in outer space, but the one part being a different color than the ceiling, and the walls having a distinct color thing. Here's another picture of the other side of the module. Yeah, again you can see there's a clear color scheme going on. And now for comparison, this is the ISS Destiny module designed by NASA, where people often get space sickness, because, yeah, it's way harder to orient yourself in this. So yeah, she worked for the program for 40 years and her designs are still in use, and because she was the first to ever do something like that, she came up with a lot of really good stuff that we still use, for the Soyuz, for the Mir, and basically everything we design today for outer space is still inspired by her. And I'm just gonna end this with a quote by her. So she's saying she's living in a prefab building, or a Plattenbau as you call it in Germany, and she says it's basically the same thing as living in the Mir or the ISS, because, yeah, it's designed following technical requirements set out by engineers, but what you do with it and how you decorate your interior and how you make the most of it, this is done by artists. So yeah, the Mir space station and her living room basically have something in common. Yeah, that's my talk, thank you for listening. So thank you, thank you for your talk. Are there any questions? Ah, over there. I can't see anything, that's a bright light. Is there maybe an ability to actually witness what one of those divans might have looked like in real life, in this very area here? Yeah, we built one. So if you're interested, we built a replica based on original designs, it's over there with ChaosZone. It's not really the same thing, but we tried to stick to the original designs somewhat, so that this achievement can be more visible even at Congress. Any more questions? I have a really bright light in my face, I can't see anything. Hi, I was wondering, you said because of physiological things it was still quite earth-centric, so you had a floor, you had a ceiling, and everything was built like it would also be used in gravity. You had a divan which was laid out flat. And it seemed to me, in space where you don't have gravity, where you could use all surfaces more, it seemed quite like a big loss of space to not use them. So is this space sickness thing such a big thing that you can't say, okay, we can also put some panels on the ceiling or stuff like this? Yes, so it was done more for psychological purposes, so you don't get disoriented. So yeah, I mean some of the things are designed to be used in any direction basically, but the basic design was purposely made to be kind of gravity-centric, like I said.
It was not used in gravity because the orbital modules would only be accessed once the spacecraft was in outer space. So yeah mostly psychological reasons I guess. One other question is there some kind of style guide or design guide where her essence is brought into for current development of spacecraft or something like that. Some heritage of her. But really the problem is that all this work was like state secrets till the collapse of the Soviet Union so it's really hard to find anything on it. There was an exhibition I think six or seven years ago and you can get a catalog from the exhibition which has lots of original drawings in better quality than this presentation and also a very long interview with her and her recollections as her time working there so it's the closest thing you can get I think. Any more questions? Hello. How wonderful. Is there a reason why the NASA didn't adopt the Soviet design language for the newer spacecraft? That's kind of complicated. One of the main reasons is that the Soviet space program in the US space program had one fundamental difference. Most of the astronauts of the US space program came from military backgrounds and were test pilots before or something like that and they always wanted to have absolute control. So like you maybe saw on the picture of the Gemini spacecraft I had there's really a lot of controls and levers and everything while the Soyuz has way less. And the difference is that NASA spacecrafts are really designed that you have to do everything by hand while the spacecraft were designed that most of the things are really automated so the cosmonauts can't really do that much. Of course there are emergency things they can do if something goes wrong but the basic principle is that almost everything from start to reentry is automated and you don't really do a lot. And the astronauts never really liked that. NASA tried at some point to do it more automated because of course you can also make a lot of mistakes if you have to do everything manually and some mistakes happened where they were like almost disasters because somebody forgot to push this switch or close this wall which would happen automatically in the Soyuz modules which is also one of the reasons why the Soyuz has a better safety record. But the astronauts of the US program are really skeptical about the soil we need to automate it so they are also very skeptical about letting artists or architects work on this because to them everything has to be very engineered and mechanical and like no art, no covers for your cables, no homely feeling in space but just this more military thing. So if it looks like a fighter jet they feel fine. If it looks like a nice place to stay for two weeks they are like uncomfortable because I don't know where that cable is going or things like that. So I think it's partly because of the psychological reason. So NASA tried to hire architects at some point in the early 90s again I think but again there was really no support in the structures because the astronauts think no we don't need that. We are fine, we are doing fine. Thank you very much. Hello. The space shuttle was very similar in the Soviet Union and America at least from the outside I think the Soviets had something very similar where the interiors very different because the Soviets have a different approach. 
Probably the problem is that we don't really have any interiors to look at because the final version of the Puran was never built so there were some flight test versions and one version that made one automated space flight but they never really finished the interior for the thing so we are not really sure what it is supposed to look like. I couldn't find any pictures of it so Galina Balaschowa said that she did not do that much for it. It was mostly designed by engineers and she just chose the color scheme for the whole thing. I would be curious to know how the, like is this whole topic like the interior design and being a bit more conscious and aware. Is this something that is nowadays caught up by the private firms that are building spacecraft and are designing spacecraft like SpaceX and the others also on the American side of things let's say. Are they considering this? Are they referencing this? You know, is there a bit of a shift? I don't really know about that. I haven't really checked. I really hope they do but my guess would be they probably don't because they are engineers and if it works, it works. No need for it to be comfortable. But I guess they are going to do something if they want to do it like for tourists or for the public because then you can't really have this fighter jet interior they had in another program. But I don't really know. I'm sorry. So I don't see any more questions. So thank you very much Aurora. That was a really nice talk and give her a round of applause.
|
Galina Balashova was the main architect and designer for the soviet space programme. She designed the interieurs and visual identity of spacecrafts such as the Soyuz, Buran, and Mir. »Space stations are not only technical structures but architecture built amidst zero gravity.« Galina Balashova (b. 1931) is a self-described "architect-engineer" who spent almost 30 years working in the Soviet Space Programme. She designed the ergonomics, the colours & style and the typography of most soviet spacecrafts, including the Soyuz, the Salyut space stations, the Buran shuttle and the Mir space station. Her pioneering work in the field of zero-gravity architecture is still referenced today in the ISS and other spacecrafts.
|
10.5446/53005 (DOI)
|
Next Cloud 18, it will become much easier to build your own components and automate your task. How you can do this will be presented by Bliss in his talk, Building Next Cloud Flow. Have fun. Yeah, thank you very much and good morning to day two. So yeah, about Building Next Cloud Flow, first maybe one sentence about Next Cloud in case you don't know it. It's an open source platform, meanwhile that was initially built around syncing and sharing files and now it's sort of like an open source 365, well, yeah, alternative. And Next Cloud Flow, this is, or the ambition is to have a flexible and user defined event based task automation. So what we try to do with this is have maybe something like, this is a stream. Yeah, of course, sorry. Where is it? Yeah, yeah, yeah, yeah. Okay. Okay, that's better, I guess. So if you maybe you know stuff like if this and that where you have components that triggers some event, something happens and you have some criteria and then something else should happen. This is what we try to do here. And previous to Next Cloud 18, which will be released in January, we already had something in place but it was far more limited. It was limited to files and only administrators could set those flows up and we were having basically four use cases or four actions that we could do with that. It was blocking files, it was tagging files, converting them and running a script. So and yeah, also the UI was a little bit scattered. The mechanism that was providing interface for different apps that they could implement it and if you would have then such an app enabled in Next Cloud, you would have one, two, three, four different or more entries in the settings. So this was not so very nice. So this limitations we wanted to break and the goals or the objects and where to have a nice user interface that's kind of not too complicated where you can rather quickly click those flows together that you want to have. And not only for administrators but also your end users should be able to configure such flows. Maybe not everything, not just running arbitrary scripts but on the other hand maybe they should be able to write something in their conversations in Next Cloud Talk. Still the rules that were already set up that should continue to work and should be extendable because we want to kind of provide more and to give you the opportunity to have different triggers that can be acted upon. And so this is the state here from one to weeks ago I think. How the interface looks like. You are locked in as a regular user and the flow settings and here this is the event. This is, yeah, should kind of cause this flow to work. And here we have now check. So the constraint and what will happen eventually is kind of to write to a conversation in Next Cloud flow. So in this demo, file creation for instance was configured and then it should be written into. And this will be now shown here. So the user now creates this event themselves but this is of course limited to owners of such files. But if it's also shared or publicly shared, this would also create this event obviously. So here this editing happens and in a few seconds we will go to Next Cloud Talk. That's yeah, this chat or yeah, conversation solution and here the bot was writing the lines into it. So this is how one flow basically works. And this is, yeah, already in. And further I want to go down into details how this works internally. So that's why we have the demo up front and we go then deeper and deeper. 
And we'll also see how it's possible to create your own components for Nextcloud Flow. And, right, so everything is based, of course, on the Nextcloud server, and we have this workflow engine, as it's called, that kind of provides and knits together all these different parts. And these are separated into these three things. The entities represent basically the trigger, yeah, the thing that's causing the flow to happen. The checks, they are there to, well, check the constraints, yeah, the criteria that you configured. And the operations are eventually the action. And on top of it, of course, there's a web interface, and, kudos to my colleague, Vue.js is used to do everything that's actually user-visible on the front end. That's the magic. Okay. So, the flow cycle, kind of to have it also visualized in this way. So we have an event that happens that's represented by this entity. Yeah, this is fired, and the workflow engine becomes aware of it, right? There was an event listener activated, and then it acts: it instantiates these different components, and, yeah, the operation candidates, so the ones that are configured for that event that was fired. And so, yeah, it happens in stages. The entity sets a context, so it gives more information to this service that's called the rule matcher. In the next step, the event handlers are called and they also get access to this rule matcher. From there, they request the rules, so the rules that are the whole rule sets that are configured in the UI. And the rule matcher, when it is requested, does the execution checks. So that could be: is it the correct context, is the request time correct, or does the user actually have access to that object that fired the event. So this happens, and it filters the possible operations and it returns them. And whatever is left in there can actually be executed, like writing into the conversation room. Okay, so the engine is the core part. It's a central point that kind of takes the registrations of the components. We have a couple that are, yeah, well, built in, but everything can be extended by your own apps. Like the example of writing to this Talk conversation, this is also in the Talk app, so that's not built in itself. Yes. The listeners are set up automatically, so it always kind of listens to the post events. So if something has already happened, then this listener will become active. There are some special cases where an operation kind of needs to become active earlier. This is the case for tagging or for blocking file access, because, yeah, you should block a file before it's being accessed or downloaded. And we come to this a little bit later again, because it offers a flexibility to go beyond this. Yeah, the engine provides the information and takes care of the heavy lifting, the complicated stuff, or also the boring stuff, yeah, the housekeeping. So yeah, let's start then here on the right side, what you have seen before in the screenshot. So, yeah, the action that is then being done, that's defined in an interface, the IOperation. And in an implementation of it, we provide, yeah, the part that is visible for the users, and the scope. So the scope means here, at the moment: is it a rule that's defined by the admins, or is it a user-based rule? And, yeah, the validation takes place here, and in the event handler, onEvent, when it kind of should do its stuff, then this is being called.
And yeah, there are two refinements of it. The ISpecificOperation: this one is, yeah, limited to a very specific type of event. For instance, the PDF converter can only act on files, so it's limited to files. And the IComplexOperation, this is a little bit more advanced. It, yeah, does more stuff on its own, like its own listening logic, which is set up earlier; this is necessary for blocking file access, like a firewall thing that prevents you from having the files outside of Nextcloud. And yeah, this does the stuff on its own because it needs to interact at a deeper level in Nextcloud already. So here we have an example, a code example of the PDF converter, and this is the first part of the implementation of this interface. getDisplayName: well, it's clear. There's a description, and it also has an icon that's being presented. It's quite straightforward. Then the next part is the available scope, and it is being passed in here. There are constants, the admin or the user one, and it simply returns what it is available for, in this case true. So it can be set up by both administrators and regular users. As contrasting examples, the external script runner is only available for administrators, and posting to conversations is only available for regular users. And here's the validator. So it gets some parameters. You see here the possible modes; that's what comes into the back end from the user configuration, and it just checks whether that's properly configured. If not, an exception is being thrown, and if everything's fine, then, yeah, it just passes. We have onEvent as the listener itself, and this is the real task, the real logic, when the operation is actually being run. The PDF converter in this part first gets the node from somewhere. That's a bit trivial or boring. So the node, that's a representation of a file. And then it just makes some checks on what should be done, and in the end it adds a background job that will then later be called, or executed, by cron, because, yeah, PDF conversion can take a little while. So you don't want to have it in a regular user action like an upload or, yeah, editing a file in the web. That's why this is being delegated to run a little bit later as a background job, outside of the user request. Yeah, it's an ISpecificOperation, so this is only available for files. And a different example is the access control, right? So that's the thing that can block files from being accessed outside of Nextcloud. And here it gets a trigger hint, which is then shown instead of the event; it says 'when the file is accessed'. And onEvent, this can just be a no-operation. And in this case, it is a no-operation, because it does things in a different, deeper layer of Nextcloud. So that's why, first, it's not being called here anyway by the workflow engine, and that's why we don't need an implementation here. But the magic, the logic, then happens somewhere else; it's then very custom to the app. Yes. And of course, it needs to be, oh yeah, that's the registration. Here it connects a hook on the file system; this is where it happens and where a storage wrapper is being registered. So that's what the file access control is doing. So yeah, it goes a different way. And this is one of the flexibility options and mechanics that the flow engine is offering to you. But all operations have to be registered, so that we become aware of what is actually there and so that it can also be shown in the right place in the user interface.
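To make the shape of such an operation a bit more tangible, here is a condensed sketch of a file-bound operation in the spirit of the PDF converter described above. This is not the real implementation: the method names follow what the talk walks through, but the exact signatures, constants and the entity class name are assumptions that should be checked against the actual OCP\WorkflowEngine interfaces.

```php
<?php
// Sketch only: names marked as assumptions below may differ in the real API.

use OCP\EventDispatcher\Event;
use OCP\WorkflowEngine\IManager;
use OCP\WorkflowEngine\IRuleMatcher;
use OCP\WorkflowEngine\ISpecificOperation;

class ConvertToPdfOperation implements ISpecificOperation {
	public function getDisplayName(): string {
		return 'Convert to PDF';
	}

	public function getDescription(): string {
		return 'Converts written files to PDF in a background job';
	}

	public function getIcon(): string {
		return '/apps/myflowapp/img/app.svg'; // hypothetical app path
	}

	public function isAvailableForScope(int $scope): bool {
		// offered to administrators and to regular users
		return $scope === IManager::SCOPE_ADMIN || $scope === IManager::SCOPE_USER;
	}

	public function validateOperation($name, array $checks, $operation): void {
		// only sanity-check the configuration coming from the UI
		if (!in_array($operation, ['keep-original', 'replace-original'], true)) {
			throw new \UnexpectedValueException('Invalid target mode');
		}
	}

	public function getEntityId(): string {
		// bind this operation to the file entity only (class name is an assumption)
		return \OCA\WorkflowEngine\Entity\File::class;
	}

	public function onEvent(string $eventName, Event $event, IRuleMatcher $ruleMatcher): void {
		// The real work is deferred to a background job executed by cron, so a
		// user request (upload, web edit, ...) is not blocked by the conversion.
		// e.g. $this->jobList->add(ConvertToPdfJob::class, [...]); (hypothetical job class)
	}
}
```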
And yeah, this snippet is what does this. And yeah, it adds a listener for this event; that's a constant as well. And yeah, when this event is fired, because it needs to be presented on the web interface, then it just gets the other class, and in the last part, the JavaScript part is also being loaded, so it can show its option later in the user interface. So yeah, and this is the front-end part. Also this is, yeah, very straightforward, because there are different defaults that you can use and you don't need to do much more. So that's kind of meant to be a little bit comfortable for the developer. But again, it can also be overridden by some, yeah, own necessities if you have them. Yeah, they also do their validation. They can have different parts, and everything is a Vue component here. We also have snippets for this here. That's the first part. It's the automatic tagging operation. It says: I have an icon of my own and I have a color, that's the green one here. So this is what's then being shown in the selector. And further, it has the operations, there are none, because it's just a picker. It could be, I think, yeah, let's skip it. We have the options here. That's an own component for the tags, and it kind of loads the available tags that are there in the system and offers you the choice to pick them. So this is what it looks like afterwards. Right? Yes. And the 'when should it be done' part, so the action or rather the trigger, that can be selected. So, the IEntity: the file, again, is the most common example. In this implementation you also need to provide some user-facing information, display name, et cetera. You can also give the events that you're compatible with and provide some context. For the file entity we have an example here. It's a name and it's an icon. And here are the descriptive events, and they just get, again, one label that is being presented and the actual event that is being listened to. So all these file objects, they throw different events when a file was created, written, et cetera, and we divide them by a namespace and the actual name.
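For the front-end part described just above, the automatic tagging operation with its own icon, colour and options component, the registration on the JavaScript side might look roughly like this. The global registration call and its fields follow the pattern the talk describes; treat the exact names as assumptions and verify them against the Nextcloud developer documentation. The imported component is hypothetical.

```js
// Sketch of registering an operation plugin in the Flow UI.
import TagPicker from './components/TagPicker.vue' // hypothetical Vue options component

window.OCA.WorkflowEngine.registerOperator({
	// must match the server-side operation class (hypothetical name here)
	id: 'OCA\\MyFlowApp\\Operation',
	// the coloured badge shown in the operation selector
	color: 'var(--color-success)',
	// default value for the operation configuration
	operation: '',
	// Vue component that loads the available tags and lets the user pick them
	options: TagPicker,
})
```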
There's a legitimation check and it will figure out whether the user that's passed by UID here is allowed to have access to this entity and that case of the file, first we check whether it's an owner and if not, whether it's a shared recipient. In that case, we say, yeah, it's fine and all. And it can also provide additional information. So that one was especially needed for the speech, for the talk operation because it would be very cumbersome for every, yeah, end operation to kind of figure out what text should I write here and what are the specifics. So this logic, we also placed into the entity. So it can get the display name or it can get some display text. So this is implemented for file just a little bit longer so I took out the get URL function and it will provide the file URL, the internal file URL. That's basically it. And then, yeah, the operation that is working against this can take advantage of this information. The last component, the checkers that are those that set up the constraints, they are defined via the eye check interface and, yeah, they can or they should offer the option to test against certain characteristics of the entity. And so first method is to be implemented which identities are supported. File is one case but you don't need to add anything. You can just have, yeah, a generic checker. For instance, everything that goes against the request because it's available via next cloud right away and it can be against any event you can check, yeah, anything about the web request that is being done. And also here, for which scope is this actually available, admins or users or both as we had them before. Eventually, there's the validator that gets to parameters. One is operator as you see and the other one is the valid is being passed. And, yeah, here we have one example of group one, the lower one where that's in a certain group. This is being checked here and the first one is just, yeah, very basic is or is not. Yeah, that's a validation and then executor itself, yeah, this is being the logic. So let me make it a bit more clear. So the validation that only checks whether the configuration itself that user enters in the interface, whether this, yeah, matches or this is fine with what we know and this one is actually then the test that is being run when the event happens. So first, before we had checked whether the group is already existing so whether we don't have an outdated value here and when a flow is being evaluated, we will check whether the user in question is actually in the group. Yeah, and this part is also exposed in the web interface so also this one needs to be registered there and also here again it works as a Vue.js component and also here we see how it's being registered. This one is a file size check so that's all really code that we have in next load and it has its own operators. So that's not an is or it's not a match or match not which works against strings for instance, but this one checks against the file size and yeah, it has a name that is visible and one that we use in the back end code to check against. There's the placeholder, so a hint, yeah, what you could offer and the validator itself for the front end so that before you try to save it, you already know in advance whether it's going to work or not. Then it looks like this. 
The file system check that has a custom component works as an is and is not operators otherwise it looks very similar but the difference here, it doesn't have an only component but the file system check it does and it itself just implements in this multi-select dropdown and takes care of loading the values itself so that's what's more special than the simple file size thing. Yes, so yeah, actually that's already from my part. I was quite faster than I thought I would be but if you have any questions about this then I'm very happy to answer them. So everyone who wants to ask a question, raise your hands. I will come to you. Thanks for the great talk. What a small question for my understanding. The workflow will be in a way that the person can click it together or does one have to program it like you just showed the single events? So for the, well, there are two sides of course for the end user it should be clickable. Like I was showing in the video here, so this is already how it's the presentation for the end user. Now it's for a regular one, for admins it looks similar so just the choices are then different depending on what's in the back but it's done in a way that you can extend by yourself as a developer. So we already have also interest from others who want to take advantage of it. And what I'm working on currently but Dania is also one thing that could set up end points so that you can have requests or web hooks from outside coming in and work on that. So it's, yeah, the user part is, yeah, it should be easy to click together and for the developer part it should be also comfortable to get their own components, one of those done. Yeah. Any other questions? Yes. Just probably a very simple question. Can directories be observed? You can observe directories ideally by tagging them. So you create a system tag and assign it to the directory. And then whatever happens inside it can be watched, yeah. Okay because the background is when I hear workflow solutions I think of things from print and publishing and there are a lot is done with observing directories and converting from one directory to the other and that would be something that would be expected from when they hear workflow solution. Yeah, in the backend it's like this or in general it's like this that if a file is somewhere in a directory structure changes, that also changes the modification times of the directories. So it also then issues the file has changed event because directories also files, right? Everything is a file. And yeah, so it's easy to select them by having a tag but then yeah, the file changed event that will be also then executed for directories of course. Thank you. You're welcome. Can you maybe name some scenarios that you have in mind and can you like list some actions that can be taken place? Can mail be sent or just to get an idea of what actions can be chosen? Yeah, so right now we didn't have much time possibilities kind of to extend it over different entities or actions but as I said it's being worked on and we have already said that part that kind of writes into a talk conversation. And one idea that we also were having but it's not done yet is that if you, yeah, what would one goal would be if you mentioned someone in a conversation and would check against the calendar if the person is actually present or maybe on vacation or in a meeting and then would just give you some informative line in the conversation. This user is currently not available. So there could be one thing about collaboration. 
With the web request staff it could be that you configure on GitHub or any other service or web hook that triggers if something happened wherever that next code will be notified. And then it can, yeah, right now it makes only sense also to write into a conversation but you could also have a log file for instance that is being then written into it. If you imagine this or, yeah, or you create a file or something like this. So that's not implemented but this can all be done. So the idea was, behind it was to have it so flexible that you could basically do anything with this, right? Yeah. What you can see on other solutions there, there's more common that you have a very fine gradient things that you can pick like an issue was created in GitHub and then write an email here to us. And we don't do it so fine-grained because, yeah, we just decided kind of we had more rough so in this case it would be also an external request and then just need to configure this somehow to do something. But basically anything would be possible with this approach. And if you want you can talk to us later. We have also a boat here at this Open Infrastructure Orgret and he's very good in kind of telling you what really is possible and he's also having a talk later. Hi. Thanks a lot. I just wondered once you've clicked together one of your workflows, how does it look like? How is it stored? Can the user actually take it with him? So if you set up another next cloud, do I have to click them all together again, my precious workflows, because these things can become quite mission and critical. So I would expect some kind of script. I can read and copy and maybe even code. So I don't have to click it if I know what I'm doing. Yeah. That's an interesting idea. It's all saved all in the database that's next cloud operates against and there are I think all together there are three tables where the rules are defined, where the constraints are defined and where the scopes are also defined. But this web and this configuration interface that all works with an API calls. So you could indeed create a different client that would read out or write workflows that's possible and that also would make it possible for you kind of to take out your flows. So yeah. So there's no nice way right now to do it, but there's a programmatically doable way to do it. Okay, then there are not other questions. Thank you. Blissful the talk. So give him a big applause again. Thank you very much.
|
Nextcloud Flow is the overhauled workflow engine in the upcoming Nextcloud 18. This talk describes how it evolved, how it works internally, and especially how your own components can be built, so you can set up automated tasks in your Nextcloud.
|
10.5446/53010 (DOI)
|
So the graphic is there to illustrate that it's supposed to connect people with solar power in remote places. We've already done that. Around the refugee crisis we started to think about, hey, how great would it be to have a solar relay station that is mesh-capable? And so I started to build these and we got some funding from the Ministry of Education and Research. So we, together with Knud, we made a system so you can monitor these devices and you have a web app that shows you, hey, everything is fine, or probably you don't have enough battery capacity, or your battery starts to wear out. In general, to build such a system, I guess most of you already know, you need a solar module. The picture shows a 10-watt module which is not sufficient in our areas to power such a Wi-Fi device like a TP-Link, but it would be sufficient to run this device in client mode. In order to run it in access point mode at full power for the entire year, you need a 20-watt solar panel. The picture shows the previous design that I made. It's still around and it's going to be around, but it has only an AVR 8-bit microcontroller inside, so it needs a serial interface to the router in order to deliver the data to the network. A screenshot with some of the options of the web app: on the left-hand side, that's what you see when you connect to the web app. It gives you status information and the problematic nodes are on top. In order to have this feature, you need some little device, like a Raspberry Pi would do. You cannot run it on the device itself, also not on OpenWrt. Since the device is interconnected with all these measurement data, you can make these fancy graphs where you can see at a glance, if you're interested, how the system life or system health is. That's a photo of the new hardware. The features: yeah, it's an ESP32, so it has all the features of an ESP32. By default, it runs at 160 megahertz. I've tested it, the throughput between two clients at full load is 1.3 megabytes per second, but with debugging turned on, so without debugging it's probably faster, and at 240 megahertz it's probably faster too. The device has 520 kilobytes of RAM. You see, the sky is the limit. Well, it's not that amazing if you consider that OpenWrt today wants 64 megabytes of RAM minimum, so 128 times more RAM. Yes, I mentioned it before, it's a maximum power point tracker, so it gets more power out of the solar panel. It runs as an access point, as a client, or the combination of both. It has Bluetooth support; so far, I'm not using it. In theory, there's a proprietary mesh protocol by Espressif, the makers of that chip, but it's only a spanning tree protocol, not a real mesh, so maybe we can come up with something better even though there are these limited resources, but it's not implemented yet. The device has one RS232 port at 3.3 volts for programming and debugging. You can also flash a different firmware if you wish, and you have a second port, for example, to connect another router and read that serial data and communicate with that as well, or an Arduino. These ports can also be used for other purposes, like I2C, as far as I know. You have three extra ports to connect sensors, and also a temperature sensor. At the moment, I'm using a PTC, a positive temperature coefficient device, but a DHT22, which is popular in the maker scene, can also be attached. It's a little bit of modification of the firmware. Yes, I also mentioned it. 
It also operates as a fixed maximum power point tracker, like those cheap maximum power point tracking controllers that are already out there in order to recharge the battery, even though there is no power left. And the system is designed for lead acid batteries of the V voltage regulated type, so AGM type batteries I'm using lead, since lithium ion or lithium iron phosphate cannot be charged at freezing temperatures. You will destroy those batteries. So even though there is these fancy types of batteries for this climate, they're just not useful. You will ruin them in the long end. Yes, the applications, you can use it just as I intended to use as a low power Wi-Fi relay in a network of poor people, or if you just want to bring connectivity to remote places, or you can use it to power a more powerful Wi-Fi gear and remotely control it, because there is a power port that can be switched on and off from remote. Yes, you can also use it as a general purpose solar charge controller with maximum power point tracking feature to charge a battery in a caravan or on a boat or whatever where you see it fit. Or you use it as a sensor node. Well, I've only have this free GPIO pins exposed for sensors. I think you can already do a great deal of things with it. If not, I can consider to add more GPIO or expose more GPIO ports in the next design. There are obviously some limitations. It's a microcontroller after all, so we only have limited resources, and the access point can only handle up to four clients, unfortunately. But for a relay between villages, two clients or three clients are just enough. As far as I know, Wi-Fi timing cannot be adjusted for Wi-Fi long shots. So when you have distances where the Wi-Fi protocol and the slot timings are critical, then you will see some duplicate acknowledgments, for example, that a typical problem. And so at the moment, you cannot adjust it. Maybe somebody has a hint how to do it, but at the moment, it's not available. One problem I run into is that the analog digital converters that I use for measurements are very low precision, actually. I calibrated one of them, and I calibrated the reference voltage, but still the, yeah, it's not very accurate. So probably all ports need to be calibrated. All the time is running, so I'll just show this to you. That's the positions of the components on the board and the blocks. Below we have the power horse, the 95% efficiency DC-DC step-down converter. We have an extra P-channel MOSFET to control the power output. It can switch many, many amps, if you wish. We have the ESP, the programming interface. There is a DC-DC step-down converter that you can modify, so you can actually undervolt the device, if you wish. I experimented with 2.8 volts. It works perfectly. And since the ESP has linear regulators to produce its core currents, you can undervolt it, and therefore the power consumption goes down even more. So if you want to save some power, there is an option to play. I'm just showing you the schematic because, yeah, well, I'm not going to explain it. There's no time. And yeah, the result of it all is like, at full load, I mentioned it before, it's 540 milliwatts, or 0.54 watt. As a client, it's a quarter of a watt. And you can still undervolt it to bring the power demands down. I also mentioned already, NodeMCU is the firmware. You can change it to whatever you wish, but then again, the stuff that I'm providing, you have to re-implement in your favorite tool. Yeah, I guess I mentioned already all of them. 
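Since the ADCs are imprecise and, as said above, probably every port needs to be calibrated, a simple two-point linear calibration is usually enough for battery and panel voltages. A small sketch; the raw counts and reference voltages in the example are invented.

```python
"""Two-point linear calibration for a noisy ADC channel.

Measure two known voltages with a multimeter, note the raw readings, and map
every later raw reading onto volts. The counts and voltages below are invented.
"""


def make_calibration(raw_lo, volts_lo, raw_hi, volts_hi):
    """Return a raw -> volts function from two reference measurements."""
    gain = (volts_hi - volts_lo) / (raw_hi - raw_lo)
    offset = volts_lo - gain * raw_lo
    return lambda raw: gain * raw + offset


# Example: 11.70 V measured as 2595 counts, 14.40 V measured as 3214 counts.
adc_to_volts = make_calibration(2595, 11.70, 3214, 14.40)

for raw in (2595, 2900, 3214):
    print(f"raw={raw:4d} -> {adc_to_volts(raw):5.2f} V")
```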
One limitation: Espressif, for some reason, have patched IP forwarding out of the TCP/IP stack. I'm going to add it again, and maybe add a minimalistic mesh protocol so these devices can actually interconnect on their own. There is already a proprietary protocol, as I mentioned, but it's only spanning tree. Here is the status page. That's a page that the device generates. So even without the fancy management system, you can go to the nodes individually and check how they are. They will show you, hey, I'm healthy. And it provides a status code. Well, it's probably not that interesting for a human user, but that's what the management system looks for. And it also generates a list of CSV entries. For example, if you generate one every five minutes, you have the status of the last 25 minutes, because in order to save some RAM, I only have five entries in that CSV list. You can just pull them with curl or wget, and then do whatever you like. Yeah, I mentioned it. You can program it with a simple USB to serial converter at 3.3-volt logic level. You can use nodemcu-tool or nodemcu-uploader to upload or erase Lua files. And with jumper cables, you can update the entire firmware with, yeah, make flash if you use the SDK of the development environment that you're using. So that's it. How's the time? No time for questions, I suppose. No, we're actually very good in time. You still have at least eight minutes. Okay, eight minutes for questions. Can you go around with the mic? You mentioned 60 euro. Is that including...? That's a target price for the complete system. Including the solar panel. Well, yeah. Okay. The battery, this type of battery for an access point costs 15 euro, 15 to 16. Solar module about 25, and then about 15 euros for the device in mass production. The hand-soldered prototypes are of course costlier if I consider what I work for, you know, they're hand-soldered. I have six PCBs. Next question. How did you implement these other features, that when there's not enough power these PCBs are still powered? Is there like another microprocessor on it, or...? Actually, the maximum power point tracking works with an operational amplifier and that circuit is analog, and I've designed it in such a way that if there is no control from the ESP, it operates at 16 volts. So usually the ESP would control a reference voltage that is compared to the voltage coming in from the solar panel, and when that is not influenced by the ESP, so if that DAC is off, it operates at 16 volts. So as an emergency mode. We still have time for a few more questions. Come on. You said one can attach other solar modules and use it as an MPP tracker only. What's the... Sorry, can I attach what? You said we can use this device also as an MPP tracker only, right? And power something, not use the Wi-Fi functionality of the ESP. Well, then there's other devices, but probably not in that class of power, because it's already attractive I guess for the amount of capacity that it has, like 70 watts. But of course if you have Wi-Fi on board, for example, if you operate it in client mode and there is an access point close, it can sometimes send you an update about the status of the system if it's like a boat or a caravan. So you get 12 volt regulated from the system or you just use battery voltage or... It's battery voltage, and the battery voltage floats between typically 14.8 maximum down to 11.7. That's when the low voltage disconnect by default would kick in. 
But consumers that are designed for 12 volt systems, they're all fine with that. They usually can handle up to 16 volts and they start to fail below 10 or 11 volts. And you can also, if you want to, you have USB power then you just attach one of these cheap USB to 5 volt power converters. So no, it's not regulated power. The only regulated power you have is 3.05 volts or whatever to power other devices and it's efficient. So I took the step down converter and I took some effort to make it efficient because that's one place to save, to conserve energy and this is all about conserving energy in order to make the system cheap. More questions? Okay, last question. Did you do range tests in production? No, I trust that data is accurate that I saw from other people with these devices. Well actually, experience from the 80266 was range up to 6 kilometers. I did my own performance tests but with debugging compiled into the firmware. So I was looking for how much throughput I can actually get from such a low cost system and it's 1.3 megabytes per second between two clients. Overclocking it, turning debugging off will make the yield greater. Yeah, but I didn't do a range test. The range is probably pretty good because I'm not using an internal antenna so you can attach an external antenna. So if you have an antenna with high gain, well you will get some decent range. Okay, now for a well done presentation despite the technical difficulties, please close. And also the OIO stage would like to represent you one of these, either a mate or a sweet. What would you rather have? I think you can have both actually. Now that we've got the technical difficulties and everything, let's do that. Thank you very much.
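To make the charging behaviour from the Q&A easier to follow, here is the same supervision logic written out as code: hold the panel at a reference voltage (16 V if the ESP does nothing), ease off around the 14.8 V upper battery voltage, and drop the load at the 11.7 V low-voltage disconnect. On the real board this is done by an analog op-amp circuit, and the reconnect threshold below is an assumption, so read it purely as an illustration.

```python
"""Supervisory logic of the charge controller, written out for clarity.

On the real board the regulation is analog (an op-amp holds the panel at the
reference voltage, 16 V when the ESP is silent); this only illustrates the
decisions described in the talk. The reconnect threshold is an assumption.
"""

PANEL_SETPOINT_V = 16.0    # default operating point when the DAC is off
CHARGE_LIMIT_V = 14.8      # upper battery voltage from the talk
LVD_CUTOFF_V = 11.7        # low-voltage disconnect from the talk
LVD_RECONNECT_V = 12.8     # assumption: hysteresis before the load comes back


def step(battery_v, load_on):
    """One control step: returns (panel reference voltage, load output state)."""
    # Ease the panel away from its set point once the battery is full (assumption).
    reference = PANEL_SETPOINT_V if battery_v < CHARGE_LIMIT_V else PANEL_SETPOINT_V + 2.0
    if load_on and battery_v < LVD_CUTOFF_V:
        load_on = False
    elif not load_on and battery_v > LVD_RECONNECT_V:
        load_on = True
    return reference, load_on


if __name__ == "__main__":
    on = True
    for v in (13.2, 12.1, 11.6, 11.9, 12.9, 14.9):
        ref, on = step(v, on)
        print(f"battery {v:4.1f} V -> panel ref {ref:4.1f} V, load {'on' if on else 'off'}")
```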
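And since each node publishes its last five status entries as CSV that you can "just pull with curl or wget", the same collection can be scripted. A sketch; the URL, the column layout and the five-minute interval are assumptions about one concrete setup.

```python
#!/usr/bin/env python3
"""Collect the node's rolling five-entry CSV status into a local history file.

URL, column layout and polling interval are assumptions; the node itself only
keeps the last five entries to save RAM.
"""
import csv
import io
import time

import requests

STATUS_URL = "http://192.168.1.20/status.csv"   # placeholder for your node
LOGFILE = "node-history.csv"


def poll_once(seen):
    text = requests.get(STATUS_URL, timeout=5).text
    with open(LOGFILE, "a", newline="") as out:
        writer = csv.writer(out)
        for row in csv.reader(io.StringIO(text)):
            key = tuple(row)
            if row and key not in seen:          # only append entries we haven't stored yet
                seen.add(key)
                writer.writerow(row)


if __name__ == "__main__":
    seen_rows = set()
    while True:
        try:
            poll_once(seen_rows)
        except requests.RequestException as exc:
            print("node unreachable:", exc)
        time.sleep(5 * 60)                        # the node adds roughly one entry every 5 minutes
```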
|
The Independent Solar Mesh System (ISEMS) has new hardware that will bring down the power consumption, space and cost requirements of low-cost WiFi relay stations: the FF-ESP32-OpenMPPT. In order to build the cheapest energy-autonomous WiFi relay possible, we have designed an update to the previous Freifunk-Open-MPPT-Solarcontroller. It now has integrated WiFi and Bluetooth, so it doesn't need an external WiFi router anymore, but you can still connect one. Technical data is preliminary at the moment, but power consumption will be between ~0.3 Watt at low load and ~0.6 Watt at maximum load. So a 20 Watt solar panel and a 12V 7Ah battery will be more than enough to keep the system running and relaying traffic all year.
|
10.5446/53011 (DOI)
|
Hey, everybody. Welcome to Open Infrastructure Orbit, a whole assembly of assemblies dedicated to, you guessed it, open infrastructure. So I'm very happy. We're speaking now about maybe a sister community or a very dear community also for everybody who's engaged in open infrastructure and wireless networks in the Hamnet amateur radio network. And we're learning more about everything that has to do with the infrastructure that's powering this. So I'm very happy to give you the floor. And if you have any questions, of course, you can post them on Twitter using the OIO stage hashtag. And in case you're not here, but hello in the stream, you also can use the IRC. And you also can, yeah, if you're just wanting to post some comments on it and chat with each other, that's of course always possible. And in any case, if you have any questions afterwards, we have some time here for Q&A. And now I'm very happy to introduce you to Lars Rokita, who will speak about Hamnet and who has all your attention now. Thank you. Thank you for coming today. Lots more people when I expected. I'm not the most expert in the case of Hamnet, but I think it's far too large, far too much important Congress here to not speak about it. So I made a little talk on English. And later tomorrow I have a more detailed talk about specific kinds of Hamnet on specific bands, more in depth. So this is more like an overview. And we can put a little bit depth in the questions. So first the question is, what is Hamnet? And Hamnet stands for high-speed amateur radio multimedia network, a little bit complicated. Sometimes people call it mesh network. And it's mainly used on free bands. The most common you lose band is the 6 centimeter band, or also known as the 5 gigahertz band. And as we can see down there, we have the two mostly known 5 gigahertz and 2.4 gigahertz bands. And the 5 gigahertz bands, it's really nice overlapping with the commercial available bands for VLAN and Wi-Fi and ISM. And on the 13 centimeter bands, we have more like our own frequencies. And whereas also 9 centimeter, it's a little bit more special the band. The hardware is a little bit more expensive. But it also has a benefit that's not that much used. So it's a lot less noisy and has its own beneficial. And we also can see we are a little bit limited by law. So we have limitation on the bandwidth. And now it comes to the point independent from the internet. It's not really independent because at this state, we are in at the Hemmed now, it's lots of islands and the islands needs to be connected. And what do we use to connect them? It's VPN tunnels. So we are still using kind of the internet. But the traffic isn't routed to the internet because there's a general law that amateur radio can't transmit data for third parties. So only amateur radio can use this net because if it wasn't the kind, all the ISPs and other companies would get mad that we can use this frequency for our stuff and use them on their own terms and we pay a lot of money for the frequencies. And we don't want to play this game, play the game of the ISPs. So we have our own net to play around with. And here's a little bit of history. All of the amateur radio network started in the 80s with a so-called packet radio network. And it was real slow, around 1.2K boat. It's not that fast. After some time, it was 9.6, but it's also not that fast. And in 2005, an Austria was made first test on 2.4 GHz. And a little bit time, talk its place when the name Hamnet was established. 
And around 2010, it's got a little bit more international out of Austria. Germany was in 2009, so it's a little bit spreading, spreading, and it's going on. And now a little bit about routing. The routing is mostly BGP, but now it's the interesting part. We are having our own IP range. It's shrunk a little bit. But in Germany, for example, we have the 44.148 slash 15. So it's around 131K of IP addresses. If we compare to, we are around 60,000 amateur radio guys in Germany, so everybody gets two IP addresses. It's kind of comfortable to have a lot of IP for four addresses. So IP addresses aren't the problem, and IP for six isn't that kind of theme in the community yet. The more the problem is with the AS routing, the 16-bit AS are kind of limited. And so we get every region, get one of the AS addresses, it's listed down below. So we have a large region that gets a 16-bit AS address, and that's split it up to 100 local AS cells with 32-bit addresses because the 16-bit private AS addresses are quite limited. And so it's a little game of split up, split up, split up until we reach the user access point from the backbone area and where we use subnets of around 20 IPs directly. Here's a little bit of the map. This was, this screenshot was taken, I think, five days ago, and where you can see a big, large red area in the Netherlands. So we had a little bit of problem. So the Netherlands were disconnected. Now they are back up again, but we can see the net is quite spreading, and we see also one problem. Here in Leipzig, we have a little bit of a hole, and we need to work on it to get this hole closed so we can connect Berlin to the rest of the net so Berlin doesn't have to use a VPN anymore. It's quite large, but it's still growing, and I'm holding this talk in English because if you look at the map, mostly it's German-speaking countries, so Germans mostly know of it. Now it's about the IP distribution. We often, I talked about it, what the IP range shrunk. The amateur radio community had the whole.44 block, slash eight, but with the recent shortages of IP44 addresses, the owner of these IP addresses took up the course and thought, if we sell some of them, we can get quite a lot of money. So we sold the slash 10 area, one fourth of all the addresses, and got a lot of money from it. Now it's up to them how we use that money. I think we sold it to Amazon, so as the German amateur radio community uses IP addresses now in this area, we have to relocate them to a new area, but it's not that kind of time critical because these IP addresses aren't routed to the normal internet, so it's kind of independent. It's not that nice to use these IP addresses twice in the internet, one in the private network that isn't really routed to the internet and one from Amazon. And from normal IP distribution of overview, we have an area like a state in the country or something like that, or maybe a big city, and these areas mostly get 1,000 addresses for the users and 500 addresses for the backbone. And every node, if it's a normal node, we have kind of really big nodes where we use lots of more capacity, normally gets around 16 user IP addresses, or for the backbone, so we can have four links and some IP for local services like Raspi, some data aggregation and something like that. So that's the normal thing to address the IP distribution. The IP distribution is done in its own way, so we have a German amateur radio IP organization who does all that kind of stuff. 
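The arithmetic behind that allocation scheme is easy to play with using Python's ipaddress module. Purely illustrative: 44.148.0.0/15 is the block mentioned above, but slicing it into per-node /28s here is invented and not the coordinators' actual policy.

```python
"""Back-of-the-envelope view of the German Hamnet address plan.

44.148.0.0/15 is the block mentioned in the talk; slicing it into /28s is
only an illustration, not the real coordination policy.
"""
import ipaddress

germany = ipaddress.ip_network("44.148.0.0/15")
print(germany.num_addresses, "addresses in total")        # 131072, the ~131K from the talk

# A node typically gets around 16 user IPs, i.e. a /28.
per_node = list(germany.subnets(new_prefix=28))
print(len(per_node), "possible /28 user subnets")         # 8192

for net in per_node[:3]:
    print(net, "->", net.num_addresses, "addresses each")
```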
And now it's to the link planning, which is the longest link in the Hamnet, it's 216 kilometers at 5.7 gigahertz. It's quite nice, it's working, but it has a little problem. And the problem is if we look here, that's the data of the SI, and if we look down here, we can see the uptime. Now the uptime isn't showing that kind of large, but it's only going up to two days, because ever so often it starts raining, and at 200 kilometers at 5 gigahertz, rain is enough to keep it down, but this link is kind of special, normal link is around 50 kilometers long. It's much more easier to find link path is red short, because if you go longer and longer, we know Earth isn't kind of a flat, so it matters to find high elevation points to connect. Now for a quick look into the hardware, most of the times we use commercial hardware with all firmware before the US laws change, or with alternative firmware, because the US laws nowadays dictates that you can't change frequencies out of band, and what's what we kind of do with it. And for the links, we use high gain directional antennas like dishes, or here we have multiple antennas in that kind of plane. And for user access, we use more like sector antennas or omnidirectional antennas. Down there we can see a user access coverage map of Vienna, and where you can see how quickly mountains become a quite important thing to consider. In the city even more high building, if you're living in a building and are not on the top, it's quite of difficult, but that's the point I will address later. Here we can see a standard build up. We have a router, mostly PoE, so we can get quite easy access on the top of the roof with short cable lengthers, because at 5.7 gigahertz, every meter of cable counts, every meter loses power. So it's important to keep short cable lengths, and prices, these prices are a little bit older, have dropped a little bit, but hardware cost is not the problem. We have some other annoying stuff. In Germany, as amateur radio, we have to abide the amateur radio law, and every link station, it's operated automatically, so it needs to have its own license. If I have a station on my home, that's not matters much, but if I have a link station sitting up on top of the tower, connecting to other links, it counts as an automated station, and it needs a special permit, and as the government is, if we want something from the government, the government wants something from us, and that's cash. The other thing is we have a 10 megahertz maximum bump with per link. If we use the amateur radio frequencies, that's a way around it, just use two links or three links, but it needs its own hardware, so it's a little bit annoying. And the last thing is, as amateur radio is open speech, so it's no general encryption allowed. You can encrypt for setting up links for configuring files and something, but we have no transport encryption, so everybody can hear what you're doing over it, but it's also counting into the direction it's experimenting and not an appraisal meant for internet. You don't want to do bank work over it. You shouldn't even be able to do it because the bank and amateur radio guys, so you shouldn't be able to talk to them. And the last big problem is mountains are hard to move. If you want to make a link and mountain or building is in the way, it's hard to get around it, and even harder is to get access to a high tower to get the permit to put some stuff up there. Here are some example links, how a link in the Hamnet looks like. 
Sometimes simple IP addresses is used, and other times you see the AMP R, it's Stanford amateur packet radios, kind of old kind of link. And as a third example, we can see a routing aid in it, and the amateur radio network has all of its own stuff, own DNS, own web search, and there's lots of possibilities to use this network. And I thought it was be kind of quick put into this theme so we can do a little bit more of questions and answers to get more to the points of interest you have. So, microphone turnaround or what's the deal? Yeah, I have a microphone here, so if you have a question, you can come to me or I give it to you. So, if you don't have encryption, can you have authentication? Yeah, what's one thing to work about, how to make and secure authentication, what's the person who talks to you is really the person you're speaking with. It's an area we are working on to get this kind of right way. I mean, it's really easy to get an authentication kind of the wrong way, just use a tag or something, but it's kind of easy to replicate it from another person. So, that's one area we are quite working on to get it done right so nobody can easily replicate the authentication. So, if I may make a follow-up, what's the standard protocol that you use for shell access? Do you just use Telnet then? It's normal Telnet, it's just, yeah, it's own address space, so it's normal Telnet, it's normal, it's like the normal Internet, normal DNS and all that stuff. I mean, I'm not that kind of a routing guy, I'm more like the hardware and link setup, but we have sitting down there in the car's wave, so if you have more questions, I can do a little bit of dick up and hope to help you out. So, yeah, also about encryption, I feel like it's a bit subjective, I mean, you said no general encryption, but so it means, for example, I cannot expose the website using HTTPS, I guess? Yeah, it's not many sites allowed to use HTTPS because that's kind of, it's point of encryption, so only the config sites of your link hardware is allowed to use HTTPS and all the other sites aren't just using normal HTTP. But then, right, it's a content network, so I can put content there, I can encrypt it in a way that nobody knows about it, how can they know it's encrypted? Yeah, you can always, I think, misuse the system, but as the general, I mean, I can also take my normal amateur radio microphone and put some encrypted data link over it, you can do it, but you aren't supposed to do it, that's the spirit of amateur radio. Okay, thank you. One more question here? Hi, I was just wondering if you have much or any communication with the Ampronet people, you know, they basically, as far as I hear it, just sold off your addresses without telling you anything? Has there been any better relationship there? Are you going to get any money from them? That's quite an interesting question, yeah, where I talked to us before, but we will be sold the addresses, we are getting new addresses, so it's one to one replacement, more like, so our slash 15 in the area of sold IP addresses now move to a new area, and on the other hand, the money distribution thing, I think it's still in the works, so nobody knows what we're going to do with it. 
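Coming back to the link planning from a little earlier (the 216 km record hop that rain knocks out, and the typical 50 km link), the usual back-of-the-envelope numbers are the 4/3-earth radio horizon and the first Fresnel zone. These are standard textbook formulas; the site heights in the example are invented.

```python
"""Quick link-planning geometry for a Hamnet hop (textbook formulas)."""
from math import sqrt


def radio_horizon_km(h1_m, h2_m):
    """Line-of-sight limit over smooth earth with standard 4/3 refraction."""
    return 4.12 * (sqrt(h1_m) + sqrt(h2_m))


def fresnel_radius_m(d_km, f_ghz):
    """First Fresnel zone radius at the middle of the path."""
    return 8.66 * sqrt(d_km / f_ghz)


if __name__ == "__main__":
    d, f = 50.0, 5.7                       # a typical 50 km link at 5.7 GHz
    h1, h2 = 30.0, 40.0                    # invented site heights in metres
    print(f"radio horizon for {h1:.0f} m / {h2:.0f} m sites: {radio_horizon_km(h1, h2):.0f} km")
    print(f"first Fresnel zone radius at mid-path: {fresnel_radius_m(d, f):.1f} m")
    print(f"aim for at least 60 percent of that clear: {0.6 * fresnel_radius_m(d, f):.1f} m")
```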
I hoping we do the right things to fund the network, like also the AID network in the US, maybe link them up together to get an even bigger network, or do something really good with it, but I'm not the one to tell them how to do it, and only hope we do the right thing, and it's easy to screw up with lots of monies, I think it's around 60 to 100 million in the talks, but I'm not sure how much it really was. So we're hackers, you're like abusing things, so what's the enforcement on the you shouldn't encrypt, because the obvious answer is either the microphone hack that you just proposed, or the steganography are hiding content encrypted in images, like there is many ways to do it, who's going to enforce it, and what's the possible legal challenges here, do you know? The more easy way is from the backbone side of things, so if after a long time you see we're doing some strange stuff, you can kind of fence them out, so try to not let them into the network. I mean, the areas where we're getting in, it's limited points of user access, and most likely you know from every direction the people come and can see, oh, this new address, normally you know if you have a user access point, it's around 10 guys per access point at the moment, so you really know the guys who are doing on it, and Kainer can ban them, but I mean, you can get a new MAC address and try again, try again. I mean, as always, the guys who want to mess around with some stuff, sometimes it's hard, sometimes even it's brought to court, and we are sued to misuse the frequencies. We hope we don't have to do that stuff because it's just annoying, so I mean, just don't stretch the band too far. I mean, if you play around with it for some time, little time, it's okay, but don't stretch it too far so you damage all the system, trying to gem all the people or something like that. It's quite easy trackable, and you can see from where they're coming and hunt them down, it's a lot of work, but it's doable, and when it gets to the court, to the money, it's not that good thing. We have a question here in the back. Hi. You have said that, of course, access to high buildings is quite essential, so do you have any tips convincing those people who can give you those access that they want to give you the access? The way what gets the most responses is if you can talk to them, it's like an emergency network. All this stuff is quite low power. You can put up a 50-watt solar power, a solar cell, and around one kilowatt hour battery, and the system will run kind of forever without no external power, and so you can talk it to the government or the other people, but it's an emergency network. It will work when everything else fails. I mean, in Germany, I think a normal cell tower fails after 10 to 30 hours of no power, and what's one pitch to put up the Hamnet and the other pitches at university or other buildings? What's kind of science experimenting with it and doing that kind of stuff? Great. Thanks a lot. I'm now coming up with something sweet, because we have a little presence for all the speakers on stage. Thank you again. Do you want some chocolate or some mate? I think I will need the mate more. We will be at the cars wave, and I have to work on the next talk. It's not finished yet, so it's about Hamnet on 70 centimeters. That's the easy access, because 70 centimeters gets through the buildings, and it's not blocked on site, and what's the next talk, and we have hardware we have to play around with. Great. So everybody meets you at Kaußwelle. 
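To put a rough number on the "runs kind of forever" claim for the 50 watt panel and roughly one kilowatt hour battery mentioned in that answer, a quick winter autonomy estimate helps. Only those two figures come from the talk; load, sun hours, losses and usable battery fraction below are assumptions.

```python
"""Rough winter autonomy estimate for a solar-powered relay node.

Only the 50 W panel and the ~1 kWh battery come from the talk; load, sun
hours, losses and usable battery fraction are assumptions.
"""

PANEL_W = 50
BATTERY_WH = 1000
USABLE_FRACTION = 0.5        # assumption: don't run lead-acid much below 50 percent
LOAD_W = 6                   # assumption: average draw of router plus radio
SUN_HOURS_PER_DAY = 1.5      # assumption: dark winter week
CHARGE_EFFICIENCY = 0.8      # assumption: panel-to-battery losses

daily_need_wh = LOAD_W * 24
daily_harvest_wh = PANEL_W * SUN_HOURS_PER_DAY * CHARGE_EFFICIENCY
deficit_wh = max(0.0, daily_need_wh - daily_harvest_wh)

print(f"daily consumption : {daily_need_wh:.0f} Wh")
print(f"daily harvest     : {daily_harvest_wh:.0f} Wh")
if deficit_wh == 0:
    print("the panel covers the load even on these assumptions")
else:
    days = BATTERY_WH * USABLE_FRACTION / deficit_wh
    print(f"battery bridges {deficit_wh:.0f} Wh/day for about {days:.1f} days")
```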
And thank you all for coming for this talk. It's really interesting. Thanks again. No applause.
|
We take a quick dive into the Highspeed Amateurradio Multimedia NETwork (Hamnet), the wireless backbone of the European amateur radio community. It uses mostly commercial hardware on its own frequencies beneath the 2.4 and 5 GHz WiFi bands. The net is routed as its own private IPv4 network consisting of multiple 44.xxx.000.000/16 blocks. A short overview of what the Hamnet is and how it came to be, not forgetting all the challenges of a technical and legal kind that come with running and building the net.
|
10.5446/53016 (DOI)
|
He already gave a talk last year, which many of you probably remember. Now with a dozen more things you didn't know about NextCloud. Yours from NextCloud, obviously. This works? Okay, cool. So first of all, I feel rather lonely here on the stage because it's really full. That's a bit odd. I mean, I gave a talk last year and I'm not sure how much of it I should repeat. So my plan was to talk this time about hopefully things, well, that existing users don't know yet. But at the same time, I'm fairly certain I should also cover a bit of the basics. So I tried to at least add a new angle to that basics element. The first part of my presentation will still be about what's wrong with the cloud. But I'm going to try and put it in a little bit of a bigger perspective than I did last time. And after that, of course, I'll talk a bit about NextCloud. And in the second half, I get to the stuff that hopefully existing NextCloud users will find interesting. So again, my goal was to find a couple of things that you didn't know. And well, I'd love to hear afterward if I succeeded at that. I've been asked to try and keep questions until the end, which means remember them. I know personally I never can, so I would write them down. But some people actually have working memories. So in that case, good luck to you. So let's start with the first part. I don't know how many of you saw this article from our friends at the New York Times. They found a cache of data online out of which they could extract the movements of President Donald J. Trump, which I'm sure are kind of supposed to be secret. But the cell tower data is kind of out there. And they could track minute by minute pretty much where he was going and where he got his favorite happy meal. And this one is also another little nose item from a while ago. Google made a security camera. And when people pointed out there was a camera, that there was a microphone in it, they said they didn't know that. I still find that fascinating. Maybe they did know, maybe they didn't. It's, but the most important question is of course, what was it recording? Another interesting news item from some time ago. This is a heat map from a fitness tracker, which kind of accidentally exposed a secret US army base in Afghanistan, because that's what you're looking at. Yeah. And you know, this was not a bug. This really was a feature. This data was supposed to be online. And you know, you should just turn off your fitness tracker if you're running in a secret army base, which of course didn't happen. This is a bit less funny, I think. So Facebook obviously advertises itself to people who need to advertise to other people, right, to businesses. And in Australia, they advertise their ability to target teenagers when they're at their most vulnerable. The example they gave was, let's say you're a 14-year-old teenager and you go out on your first date coming Saturday. Facebook knows that, of course, because it knows almost everything. And it will know when you're at your most anxious and worried about your date, which is the perfect time to come with an ad about, I don't know, a nice leather jacket. And you know, of course, tell you that it'll arrive on Friday and there you go. We can sell stuff. I think that's pretty nasty. How many of you know Target? It's an American chain, so not the Target thing. Yeah. So they sell all kinds of stuff. Shampooes and well, stuff for pregnant women. 
And they had automated targeting algorithms that tried to figure out, you know, if one person buys X, then later maybe they will buy Y. And this algorithm on its own kind of figured out that when women buy certain shampoos, they are likely pregnant because pregnant women have stronger sense of smell, so they will buy less strong smelling shampoos. And if the algorithms would see a change in the behavior of somebody in the shampoos they buy, they will start sending them advertisement for pregnancies. You can imagine that first of all, sometimes those people didn't even know this themselves. But also, there are family members who figure out that suddenly they're pregnant without telling anyone. It can lead to some embarrassing conversations. There's a whole lot more, of course, that I can talk about. And I mean, a lot of you follow the news, I'm sure you're quite well aware of what all this data tracking is doing in society. And last time I talked a lot about why privacy is important and why it matters for us as human beings, people who are being surveilled, change their behavior if they know it, obviously. I think privacy is a human need. It's a very basic human need and it's wrong to interfere with that. And I mean, the examples go from just a bit scary to just downright criminal. What happened in Myanmar, if you don't know what happened in Myanmar with regards to Facebook, you should Google it or use DuckDuckGo, probably a better search engine. It's truly scary and problematic in so many different ways. Now I want to, as I said, bring this to a different level as it did last time, because last time I talked about privacy and about personal freedom and how important it is to protect that. But this time I want to look a little bit more at society as a whole. But we as a country, and a bit of companies, but also as a country, what the impact is of, well, essentially these guys and of course many other companies. So I'm going to talk a little bit, well, economical here or business-y, which I know this is exactly the perfect audience perhaps for this. But in the end, we still live in a capitalist society. I know many of you might not be big fans of that, but it is what feeds us. And there are certain patterns that you see in such systems and a capitalist system. And one of these I want to talk about here. Now these companies, they do kind of an interesting thing. If you're a hotel, you have to pay about 25, 30% of the money that you get for a hotel room to one of these if that's the way you sold that room. That's a lot. I mean, that's really a lot, a third almost. I mean, just imagine that you would be making a product and you have to give away a third of what you earn. Well, that's actually what happens on the Apple App Store and the Google App Store too. Now these percentages are quite okay, I guess, or at least we're used to them in the digital world. But if you know what percentage a supermarket earns on what they sell to you, that's low single digits, way less, a 30% margin on running a website with, well, a fairly small investment relative to the number of visitors. I mean, you can imagine why these companies are making massive amounts of money. But they have another effect too. If hotels have less money, they can't invest that money in other things. So what you often see if a company or a society in general has a surplus is extra money, it can do other things. 
If you go really back in history of mankind, well, when we were building cities, if you do agriculture, you can produce more than you need per person. So you can have, well, people who, I don't know, look at the stars in Vennstaff or our priests, whether that's useful, you can debate, but at least there's a surplus in which you can do something different. And for companies, this often means that they can try new things, innovate, build new products, make something nice and new. And well, if these companies are roaming off all the, or taking away all the extra profits from hotels, there will not be a lot of innovation. Now, okay, hotels, what do they innovate? They need to put up new curtains occasionally, and that's probably it, fair enough. I mean, a nice example of this model is, of course, a franchise. And this is kind of what these hotels are becoming, because in the franchise model, all the innovation happens at the headquarters. The people who do the work, the franchise taker, the actual restaurants, they just earn some money, but most of the money is made at headquarters. And that's then used for advertising and for, well, improving the product and making new things. And it's an okay model. I mean, if you decide to be a franchise taker, that's fine. You clearly just want to do something, run a restaurant and not spend too much time trying to think about the menu, that's fine. But a hotel, obviously, didn't really sign up for that. Now think beyond this. I mean, okay, hotels, they're important, but our economy here in Europe, especially, doesn't run on hotels. I think you can make quite a strong case, though, that these companies are pretty important for our economy here in Europe. I mean, I know we're all not huge fans of them, mostly because they really are failing to move to electric. But aside from that little snag, they are providing a huge number of jobs, not just at the companies themselves, but other companies that deliver tools to them. And a lot of these companies, they are doing pretty innovative stuff. I mean, a modern diesel engine, whether you like it or not, and yes, it sucks for the environment, but it's a pretty marvelous piece of technology. And a lot of that technology isn't just owned and developed by BMW and Audi and the other companies, but is developed by all the other companies that live apart for it. I mean, even the pins and the cables and the types of metal and the buzjis and God, I know shit about cars, I'm sorry about that, but those pieces are pretty impressive. And these companies, they innovate because there is enough money going around in that part of the economy. But we're all probably very well aware that cars are becoming more digital. So a lot of the interesting stuff is starting to happen in the digital sphere. I think of self-driving and all the data that's needed for that. Well, where is that data going? Well, it's going where pretty much all our data is going. It's not staying here. I heard today a percentage from someone, which actually then this graph, which I put together myself so it's completely made up. It's worse than this. That's actually the thing. So apparently only 4% of our data that we generate and have here in Europe is stored in Europe. And that's more than 4% there in the middle. So it should actually be maybe only two balls or three. When I say I'm made up, I took some percentages and then made it, well, I bolted the graph. So in that regard, it's made up. So the proportions are actually worse than what this kind of shows. 
So the data that we need for the future of our economy here in Europe is there on the left and to some degree in the right. It's not where we need it to be. So if we don't want to all become a franchise here in Europe, we need to take back control over that data. That was it. I hope that brings the subject to a different level than usual. Thank you. So now let's get to the next slide part. I don't know if the screenshot is very readable, but NextLoud purports to be a safe home for all your data. It's a self-hosted open source content collaboration platform. Content, your stuff, files, documents, calendars, notes, contact data, metadata, comments, et cetera, collaboration. You can share documents. You can work together on documents. You can have a video call with somebody else. You can have a chat with somebody else all on your own server. It's a platform which means there are a lot of apps, a lot of extra functionality that you can add to NextLoud. Now the last year, I gave a talk here of the subject, I think 200 things you can do with NextLoud. Well, first of all, there's obviously a lot more you can do at the same time. Well, there are 200 different apps that I went through at that time. I'm not really going to try to complete that, but the idea there was, of course, there are more than 200 apps developed for NextLoud that add all kinds of things. That was the point I was trying to get across. My idea this time was that I'll essentially pick up where I left off and go over a bunch of other things that you can do with it. Let me first continue and get a bit of an overview of this NextLoud thing. This is a normal web interface that you're confronted with when you use it. You have your files in the middle, on top you see, yeah, we try to be smart. I don't want to call it machine learning because it's bloody statistics, but it shows files that have been recently edited, recently shared with you, or otherwise somebody commented on a file. They get put on top. Hopefully, they're the files that you're interested in at this point in time. On the right, the sidebar, you can share files and see who you shared the file with. You also have activity here, which essentially shows a history of things that happened to the file. You can comment on the file. There's some more stuff that I'll get to later. On the left, you have a navigation bar, essentially, different ways of getting at your files. You also have a picture viewer gallery application that does the same thing as the file view, except it focuses more on pictures. You have a lot of security stuff because when you put your whole life somewhere, you should feel secure and it should be secure and protect your data. There's a whole host of two-factor authentication things. Other security stuff, I'll get to a few security things, but I don't think I would repeat too much of that. You have notifications that will come in on your desktop but also on your mobile phone, because, of course, there are mobile apps for NextLoud as well. This is kind of a timeline of everything that happened to your files. I'll throw a few slides with words at you. Sorry for those more visually-oriented people. I put icons on the bottom for you. One of the main, most important things I think that's at NextLoud apart from many other attempts at trying to build something similar is that it's easy to use. At least, we try to keep it easy to use. 
I mean, a lot of people here are techies, but even most technical people appreciate when a user interface is designed to try and get out of your way. Now, of course, it's really hard to do that perfectly, but we do our best. I think we've done a pretty decent job trying to keep Nextcloud as easy to use for an end user, but also for the system administrator. It's pretty important for us to keep Nextcloud really easy to deploy. We use the LAMP stack with all its pros and cons. It's a system that's relatively easy to deploy. We have PHP. We're on Linux with Apache, MySQL. We even rewrote a couple of applications from other languages into PHP, which, again, perhaps if you're a language purist, you say, why would you rewrite from Go to PHP? But the thing is, now you can install it with one click. It scales the same way the rest of Nextcloud scales. Nextcloud scales impressively well. Our largest deployment has about 20 million users. That's a single deployment. There's one page you go to. You log in. Obviously, your data is not on one server because that does not scale. It's a cluster. It's more like a cluster of clusters, actually. It's called a global scale infrastructure. But, again, in terms of scalability, yes, it works on a Raspberry Pi. Yes, it works on, well, we call it global scale for a reason. We think it, I don't know. Maybe it's a bit arrogant to say it has scaled to 7 billion users, but I'd like to think it can. All platform access, there come the icons, of course: Linux, Windows, Mac, and Android, iOS, but there are also some people who develop and maintain apps. Well, there was somebody maintaining an app for Windows Mobile. I don't know if that's still very relevant. There is one of our developers, at least now, working on the Talk app for Jolla. So Jolla fans, there you go. There is some work happening there, too. More bullet points, no pictures. I'm sorry. Yeah, another important point: Nextcloud has to fit in your infrastructure. So especially as a private user, of course, you run it on one computer, most likely. A lot of people run it on an old laptop or Raspberry Pi, as I said. But if you're a company, you want to connect it to your user directory in LDAP, you want to connect it to your NFS file storage, or even your SharePoint, if you really want to do that to yourself. And that's all possible. Yeah, Nextcloud can get the files wherever they are, and you're not limited to one of those. And for example, Renater, that's what it's called, a French university organization, not one university. Like, the collective of all the universities in France, they're kind of represented and supported by one organization, and that organization is called Renater. And they are setting up a Nextcloud instance for everyone, all the universities, at least they can use it, they don't have to, of course. And what they wanted to do is that every university can manage their own users. Now, ideally, of course, not via a separate user database, but they should all just be able to connect their own LDAP to Nextcloud. But that would, of course, be thousands, tens of thousands of LDAP connections to a single Nextcloud instance. Well, that's the plan, and that's what we're doing at the moment. So it can scale pretty well also in that regard if you want to. Fancy features, well, encryption, there's a whole bunch of those. I can go into details at some point, but that might be better at the booth. 
And versioning, sharing with public links, passwords, the whole thing that you expect if you want to work with other people together on the Internet. Nice features, server to server sharing. So if you're on one NextLoud, other users on the other NextLoud, they can share a file from one user on one NextLoud to the other user on another NextLoud. And the NextLouds will then exchange the file the moment it is requested, but the file stays on the original NextLoud because we want that original user whose data it is to stay in control. And also the data is then not cast by the other one, it's just delivered whenever the user requests it. A lot of NextLoud apps, I mentioned it already. Security, can talk a lot about this, let's just skip to the last point. If you find a security issue in NextLoud, 10K is yours. Go nuts. That's completely open source. Add the whole thing, top to bottom. A lot of users, tens of millions, I mean, we know of one really big installation of 20 million, there are about 300,000 different NextLoud servers on the Internet. That's an estimate because we don't track them, not unsurprisingly. But it's a pretty decent estimate, I think. It's somewhere between 250K. I mean, you can see how many downloads there are of some apps. And I know, for example, the Calendar app has about, what was it, 180,000 downloads for one version. So unless people repeatedly downloaded the same version, which, well, you can't really do, well, you could, but why would you? I think that gives a pretty good idea of just one app. And of course, that's not used by everyone, that's just an app you could install if you want to. A lot of contributors we have, there's also a company behind NextLoud. I work at the company, there are about 40 people at the moment. And our business is completely focused on helping organizations like Renateur or the German federal government or the French Ministry of Interior or Siemens or other small and big companies run NextLoud. So essentially, it's a redhead model, they pay for services. We use part of that money, obviously, to provide them support and part of it to make NextLoud better. And that's really for everyone. People, we do a conference in Berlin every year. We haven't announced a date yet, but it's going to be most likely in September this year again. Some of our customers favor it or not. And an old picture of the team. I skip through this and I'll get to the fun part. Of course, NextLoud is not the only project that does cool stuff. Lots of others do too, I always like to show some friends. NextLoud.com slash, you know, you think of it. So that's NextLoud. Okay. If there are questions, write them down. Let's get to the fun part. So my colleague Arthur has already been talking here today about NextLoud Flow, which is a new feature that's coming in NextLoud 18 in, yeah, three weeks approximately. It's pretty cool. I found it fun that at some point people asked him too many questions, I guess, about it. And he said, you know, you better ask yours about it because he can talk better about NextLoud Flow. And just to be clear, because I was then asked, you know more about it? No, obviously not. I just talk. He knows more about it. I just talk more about it. That was the right characterization. So the idea of Flow is there are certain triggers in NextLoud which can be provided by apps, any app. And then there are certain actions that can also be provided by apps. And the user can connect these. So I say user, the admin can do it, but also the end users can. 
And then action can be post something in a chat message, do a notification, run this through a script, send an email, create a task in the calendar, things like that. Or even hit an endpoint on another server. Send some XML somewhere. It can be anything. And the trigger can be, well, an obvious thing is files. So file is put in a certain folder with a certain tag at a certain time by a certain person in a certain group. And it can also be an alarm for your calendar or somebody said something in a chat or, again, an endpoint on NextLoud got triggered by an external something. So in theory, once this is done and has more of the knobs and buttons to play with, yes, you could use NextLoud to run your home automation. No more Google stuff for that. However, again, that's not done yet. But this is done and this works. So I already have a nice video maybe Arthur already showed it for those of you who were here where you put a file in the folder and then you get a text in a notification in a chat and one of the chat rooms that says, hey, you know, this file was put into that folder. I personally think that's already pretty nice. And the PDF converter, I mean, the most obvious thing with the PDF converters, of course, you know, if you assign attack convert to PDF, then it becomes a PDF. But you can, of course, do much fancy stuff there saying like, hey, you know, files put in this folder by somebody from that group, turn them into a PDF, and then maybe create another flow that then emails that PDF to somebody, et cetera. Now these are the kind of things we're looking for. That's my dog. And I think the point here was to show there's now a site bar in the PictureViewer app and of course an opportunity to show my dog. She's very cute. So this is also a nice work in progress. In the site bar, you have a chat if you have NextvileTalk installed, that is. So you can ping people and, you know, have a conversation with them right there on the file on the spot. And this kind of integration is, of course, very nice, especially when you're editing a document with somebody else, then it's really nice to have a video call or a chat. Calls are coming as it says this is a test version, so the calls don't work yet. But if you're editing a document with someone else, it's nice to be able to talk, of course, to the other person you're working with. And all of this works on mobile, as it works on your phone, although they have a kind of tiny screen, of course, and certainly on like a tablet, an iPad or something. Well, there's the phone screenshot with a not working shadow. That's great. Another nice feature is that you can actually share a file directly to a chat session. So I'm typing here the name of a chat. You see that the auto-complete which goes to the top because of the screen resolution auto-completes the name of the chat session. So the file then comes in there. And that works on your phone too, because here on the phone I am actually selecting the files to share in a chat. This is not our file app, but our talk app. Other things on the sidebar is a projects thing. So projects are a feature that we introduced earlier this year. And what it allows you to do is to connect objects, things. So it's hard to define what you're talking about when you can have chats and deck cards and calendar items and mails and files, things. You can connect them with each other across apps. So for example, you can connect this file to another file, or you can connect it to a chat or indeed to a deck task, or to a chat here. 
And this is an example. Project Starfish on the right has connected to it a file, another file, a folder, another file, another file. Not very creative. And the chat. Let me see if I have something better. So there's a card in deck. Deck is a Kanban app. So you have stacks of cards to do in progress done. And you put the cards between them. And one of these cards here, or I think actually the whole deck, is connected to this project as well. So there are two other of those connected to it as well as the chat and the files. Now what happens if you share a file that's part of a project? Well, the file gets shared. But a project too, but only the things that are shared. So if you share half of these files with someone else, they have access to those files and can see that they're part of this project Starfish. But they can't see the other files. So they are still your private files connected to Project Starfish. And that way, if you click them, you go directly to the file. So you keep this connection between different places, different apps, and the thing that you're working on. I'm going back to talk again. There was something I wanted to show. It's kind of nice. And I really hope more people are going to build these things. God, I completely forgot the name of them. Commands, very creative. So yeah, you can run commands, which is essentially, you know, you do slash and you call one of the different commands and you give it parameters. I don't know, the hacker news one and the Wikipedia one are pretty obvious, I guess. But we're hoping people will build other ones. This is, of course, a bit similar to what Slack has, which is something we try to compete with because, you know, on premise. Another thing that's relatively new, ACLs. So again, normal home users usually don't need this stuff. But companies are still very much stuck in the 90s, I'd like to say, or maybe it's a bit nicer to say they work differently. A lot of companies, they have like a couple of folders they need to share with everyone in the company. And then they like to control the access to parts of that folder between different groups. So for example, you have one folder structure in the company that is shared with everyone, but there's a sub folder in there for the finance department where only the finance department can edit files and only management and finance can see them and nobody else can get into that folder. Now for this, you have ACLs. And for NextCloud, we didn't want to add ACLs to the normal files because we have a different way of sharing. Someone needs access to it, you share it with them. And not one big directory structure and all that stuff. We work with a flat sharing model. So we have group folders, which kind of brings this hierarchical sharing to NextCloud. And with a group folder, you can create one or multiple of these company-wide organization-wide shares, share them to groups or the whole company or subgroups. And then in there, you have ACLs. So per sub folder then or per file, you can give access rights and take them away. And you can also let sub administrators manage this. So you have, in NextCloud, a concept of group administrator, somebody who can administrator a group of users. And this group administrator can then also administer the access rights via the ACLs in the group folder. Another nice feature. I mean, it's old, but I think, I don't know, a lot of people might not know it. You can create guest users in NextCloud. 
You just type a name and if it's not a known user and you have the guest account app installed, it will pop up this option when you click create guest account. You give a name and an email address. You click on save and share. And then the account is created. The person whose email address you entered gets an email that says, hey, you know, you've got a new account on NextCloud. Please log in, set up your password and maybe your two-factor authentication. And the file is shared with them and they have access to it. So guest users have a number of limitations. Essentially, they can't really upload their own files. They have zero storage space. But of course, if you share a folder with them where they have write rights, then they can upload files in that folder. That is obvious. And as admin, you can then whitelist applications that they have access to. So you can give them access to talk, to the calendar, to contacts, to mail, to pretty much any of the apps, if you like, including, of course, collaborative document editing, et cetera. Which is nice. Another new thing, you know, we have a lobby in talk. So if you share, you know, a talk room with someone or a number of people with a public link, you can enable the lobby feature. And then the people who join the link will stay in this not very pretty screen until the time runs out and the room starts. Or one of the admins in the room says, let's start. And then they join. I was looking for another opportunity to put in the dog. And I found one. There's also a nice feature. So you see I've set a password here. And there's a button below it, which is called password protect by talk. The fancy marketing term that we use for that is video verification, because it alliterates. And what this does is it allows you to verify the identity of the recipient of the share before you give them access. So think you're a doctor, you're sharing the results of a test with your patient. You want to make sure, of course, only the patient gets it. Now you can do the traditional stuff. You send them a public link. You send them, if you're smart, over another channel, the password. But you know, a kid or a spouse might have access to that person's phone. So while you have quite some guarantees, it's not really waterproof. And if it's really sensitive information, you might want to be 100% sure and not 95%. Now that's what this does. Because a recipient, as usual, gets a screen that says enter the password, but you don't send the password. You keep the password, and they have to click request. And at that point, a video call starts, and your phone will ring, or your NextCloud interface will ping you. And you have to have a video call, and then you give them permission to access the files. So you know who it is, 100% sure, no chance it's someone else, if you know them. Nice. Next time you need to set up a NextCloud account on the mobile app, check this out. You go to your security settings as a user. You click on password at the bottom, and you show the QR code. You just scan the code from the login screen on the app. There's a button right there with a QR code icon. You click it. You immediately get the camera, use it, and you're completely set up, and it's syncing right away. It saves a lot of typing, long passwords, and second factor authentication, and all that stuff. Quite nice. We managed to hit a buzzword bingo, because we actually use a little bit of real machine learning in NextCloud, if you have this app enabled.
So what it does is it trains a neural network on your logins, and then it warns you when it sees an unfamiliar, weird login location. If you always log in at nine o'clock at the office every day, five days a week, and suddenly in the early afternoon there's a login from Shanghai, assuming that your office isn't in Shanghai, you will get a warning. And then, of course, you can kill the session or change your password, et cetera. Quite nice. It takes, of course, a little while to train the data. Sixty days, I think. After that, it will start warning you. And it's pretty accurate, at least for me. I rarely get warnings, and when I do, it's really because I'm traveling or something. So it can also mean that nobody tries to hack me, which is good. So this is the list of your sessions, by the way, and if you then see that there's a suspicious login, you can, well, revoke the keys of that session and essentially shut it down. If it's a device, you can also wipe it. So we now have the ability to wipe like your mobile phone or your desktop client remotely from NextCloud. The administrator can do this, too, but the administrator can only wipe all devices from a specific user. So not per device, but as user, you can say, only wipe my phone, only wipe my desktop client, et cetera. I'm into security stuff now anyway. So we have watermarks, which is nice because, of course, you can configure it so that the guest users always see watermarks or something like that. Keep your stuff under control. Yeah, that's setting up TOTP. I guess I ran out of ideas at this point. So we can do two things. I have a few more screenshots, but we could also get to questions. I think that it's a good idea to get the questions by now before everybody gets bored. Yeah, questions. Please take a hand in the air, arm in the air, and then you will... So question round, there's me, the Herald, and the signal angel, so two people with microphones. Just raise your hand when you have a question, and we will come to you. Hi. Hi. Thank you. I want to ask, I've been hosting a NextCloud instance for two years. And I want to connect it with my local NAS. Is this possible? So I'm hosting it on my own server in the web, and I want to connect it with my local storage. Is it possible? Yeah, that should be possible. So that's what external storage is for. At least if you want to use your NAS just for the storage, and you have your NextCloud, well, also at home, that's probably the most performant solution. But even if you have your NextCloud in a data center or something, it is possible to add your local NAS as an external storage. Depending on what connection you can make, like it supports Samba, it supports NFS, those are probably the best options. WebDAV is also possible. Yeah, go to the apps, look for the external storage app, enable it, go to the settings, add external storage. It's reasonably, well, I don't want to say obvious, but you should figure it out together with the docs. Then we have a question over there, please. Thanks for the talk. It was really great. I've been using NextCloud myself on my own server for a long time. I use it for everything. But one thing bugs me, because if you want to simply upload a lot of files to NextCloud with the default files app, and nothing like WebDAV or the sync client, and one file upload times out or something, you cannot repeat it because you don't have a list of the dedicated files that are getting uploaded right now.
So are there any plans to have a list, like in the Android app, of the files currently being uploaded? Because if you have a lot of files and you don't use an app like Flow Upload, it can really become a problem. Yeah, I know what you mean. It's a bit of a workaround, but what you could try, I actually use it myself sometimes. You have the ability to create a public upload link with the file drop feature. So you probably know it, but for the benefit of other people, I will explain. When you create a folder and you make a share link out of it, and you allow writing to the folder, that means people can upload files to it. They have an extra feature that essentially lets you hide the current content of that folder. So the recipient of the link just gets a screen with your avatar in the middle, and please drop files here. And this shows you the files that are being uploaded and have been uploaded, I believe. That might do what you want. It's worth a try. But otherwise, honestly, I think this is exactly what an app like Flow Upload is supposed to do. I think it's fine to just use it for that. I mean, making things more complicated for a case that's pretty rare is, I think, well, that's better to be done in an app. So I would stick with that. Next. If you have more questions, still raise your hand, we will get the microphone to you. Can you tell us more about the advantages or disadvantages of setting up a home server with NextCloud or renting some web space instead, in terms of performance and security? I run NextCloud on an old desktop at home. I like it that way because I literally know where my server is and where my data is. Plus it's a good use of my old hardware. I mean, yeah, if there's a performance question: if you rent a server at something like Hetzner, it's probably faster. I mean, obviously, their upload pipe to the internet is slightly bigger than what I get from Kabel Deutschland. So on the other hand, for me, it's good enough. I mean, honestly, the upload is usually the limitation at home. I mean, the speed of the server is not a problem. I mean, for me, it's just an old desktop, but it's more than fast enough. It's really the upload speed. And I have, what is it, six or eight megabit up? So when you're remote and you're, I don't know, browsing images, it's fine. If you're remote and you want to download files right here from a private server, it's also not that bad. One megabyte a second, you know? That's okay. And when I'm traveling and at night, I put my phone on the charger so that it uploads all my pictures to home. Well, that goes really fast because then the 100 megabit download at home is actually perfect. So for me, it's not a bottleneck. Of course, if you're on a tiny upload, then yeah, then it's not nice, yeah? Because that's going to be the bottleneck, I think, mostly. Yeah. Hi. Yeah. Hello. Yeah. So NextCloud is really great. But I was wondering, have you ever thought about integrating it with GitLab? To your lab? I think with GitLab. Sorry. I don't see. Wave at me. A little more than that. Ah, jeez. Sorry. Further to the left. Okay. Integrating with GitLab. So do you mean mostly for the Git part of it? Yes. So that Git is used for... You would use GitLab storage or something. Yes. But NextCloud for file storage, for example. NextCloud for file storage. And GitLab then fork is a bit of a... For version control. Yeah. Okay. Yeah. Well, the thing is, I actually had a really long Twitter discussion with somebody about it recently. Yeah, I know. Twitter. Don't get me started.
The thing is that Git is really not nice for general files. It's for source codes. And next cloud isn't very good at source code. Right? You're doing two very different things. For one, Git will merge changes. In other words, it screws around with your files. Which as long as it's source code, you're happy with because you want to merge the changes. But when it's, I don't know, Photoshop files, aside from the fact that it will simply not do that anyway. In that use case, you don't want it to do that. In the use case that you have some large files and you have some code that works with these files, you want to have, for example, files on the next cloud but code on GitLab. Okay. So you mean to separate the two a little bit. Yeah. So make them work together, for example, with same users or something like that. So yeah, I guess that could make sense to some degree. But I think there's a lot of disconnects between the way you use the two that would create issues. I'd rather not go into it here. It's rather, you know, workflow specific. But that's fair because we also don't have a lot of time left. We have one last question over here. All right. Yeah. So there's the brute force protection built in. As a next cloud admin, I have now and then the problem that I have to go on the server to clean up the database for locked IPs. I really wish to have an admin setting to clean up blocked IPs. Okay. Yeah. Is there a feature request already? I mean, create one if not. That's fair enough. Okay. I think that's it. With this, I would like to have a one round of applause for yours. Thank you. Thank you. Later
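Looping back to the suspicious login feature mentioned earlier: the real app trains a neural network on your login history, but the underlying idea can be sketched with a much simpler frequency model. The threshold and the (country, hour) features below are arbitrary choices for the illustration, not the app's actual model.

```python
from collections import Counter
from datetime import datetime

def train(history):
    """Count how often each (country, hour-of-day) pair shows up in past logins."""
    return Counter((country, ts.hour) for country, ts in history)

def is_suspicious(model, country, ts, min_seen=3):
    """Flag a login whose (country, hour) combination was rarely or never seen before."""
    return model[(country, ts.hour)] < min_seen

# Roughly sixty days of office logins from Germany around 09:00.
history = [("DE", datetime(2019, 10, 1 + d % 28, 9)) for d in range(60)]
model = train(history)

print(is_suspicious(model, "DE", datetime(2019, 12, 27, 9)))   # False: familiar pattern
print(is_suspicious(model, "CN", datetime(2019, 12, 27, 14)))  # True: new place, new time
```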
|
With Nextcloud you can sync, share and collaborate on data, but you don't need to put your photos, calendars or chat logs on an American server. Nope, Nextcloud is self-hosted and 100% open source! Thanks to hundreds of apps, Nextcloud can do a lot and in this talk, I will highlight some cool things. Consider this a follow-up from my talk about 200 things Nextcloud can do last year! An update on what's new and some cool new stuff. What, what is `Nextcloud`? Let's see. A private cloud is one way to put it, though that's a contradiction of course. It is a way to share your data, sync your files, communicate and collaborate with others - without giving your data to GAFAM! Keep it on your own server, or something close (like a local hosting provider or data center). Nextcloud is a PHP app that does all that, and more! Easy to use, secure (really) and fully open source of course.
|
10.5446/53018 (DOI)
|
Now, I'm very, very happy to introduce Yuka Pekka Haikila and Will Scott, as you see here, who will be speaking about modern life in North Korea and how it is for youth, for example, to live in Pyongyang. And they will introduce themselves. So I will just hand over the stage to you and wish you a good talk. Thank you. Hi, everyone. Warmly, warmly welcome also on my behalf to the talk about observations on the societal and technological changes in the DPRK. My name is Yuka Pekka Haikila. I used to work in North Korea. Now I'm an Academy of Finland Fellow and Visiting Scholar at Stanford. And this is a co-hosted talk, with Will. Hi. Yeah. And a few words before we get going on what's going on in North Korea, both in terms of society and tech. So what is the experience that we are speaking from? I used to go to Pyongyang between 2012 and 2017, as much as up to six months per year. I've been teaching management, international business, international management, the first courses in the country, done a couple of startup events and lectured also outside the university. And Will. Excuse me. I was teaching computer science 2013, 2014, 15. Talked about that here at CCC. So thinking more about the technical side of things. Yeah. So there were a couple of talks actually on the previous. Yeah. And this content is brand new, obviously. And the way we are going to do this is that first setting the stage a bit, living in Pyongyang, how it is, what's different, what's similar, then going towards observations how North Koreans in the country perceive Western concepts of entrepreneurship and also the economy itself. And then it's up to Will. We switch the decks and see about a bit of tech. And then we have time surely for any type of Q&A. Sounds good? Okay. Let's get going. This is the most common question you usually get when you talk about North Korea. How did you end up there? How did you end up there? I cold emailed the school. I did the same. How did you cold email? How did it go? I sent an email to HR at pust.kp after seeing a YouTube video of someone who was looking for computer science professors at the school. And they managed to get back to me. One of the other professors was based in Portland, Oregon. And so I drove down and got coffee with him and convinced myself that it wasn't completely crazy and went from there. And in my case, I was doing on the side of my PhD in China, I was doing an MA on political science with a master thesis on North Korea. And I found this also the address, Gmail address, and emailed. And they kind of replied that your PhD expertise on international management is interesting and that we just want to make sure that it's voluntary work. And you're welcome to come. Six months went by. The school was just established. And then off I went first time 2012. And the way it's done, you arrive in Pyongyang, you are requested to give your passport to the authorities, and then the next day you teach. And that's how it started. And so basically our home was the Pyongyang University of Science and Technology campus. And you live there. And if you wish to go outside the campus, you always request permission to do something. What were the things we did? Go to restaurants, see museums, go hiking. And go to play pool. And so basically, you always inform them in advance. You respect the local environment and play according to those rules. The school itself has 600 students. Used to be all male, not anymore.
And there are three departments, which is international finance, international management and finance, then agriculture and life sciences, then electronic computer engineering. And the management department is the only western based department in the country. And what else on this school? Do you always speak any Korean? No, sorry, yeah, that's a good question. So all teaching is done in English. So students are graduates, often from the most prestigious universities or sometimes from the countryside, and then they apply to the university first they study English, then they specialize, and end of their studies, they attended courses like international management. So to answer your question, no. I took one semester crash course in Korean before I went in the summer. And from that when I'm able to pronounce the phonetic alphabet, words that are loan words from Chinese I can understand. I can say simple phrases like I want to go to a restaurant, but I'm not going to have a real conversation. And basically, the daily life went on that you were teaching either on the mornings and afternoons and then what's very unique in the university setting is that it's very rare place or maybe the only ones where you can really interact with the locals, especially the lunch and dinners. They're always very interesting venues for discussions. And there are a couple of short videos. This is before the lunch. This is with some attend, some don't. And you still sound? Yeah. So as you could see, what's different, what's similar? Different is that it is imagine students in Germany of marching in the campus. No. That's very different, that it's very disciplined. What's similar is, for example, the content that I teach and communicated and it was exactly similar than I would teach in all the university in Finland where I'm based. And of course, the pace would be much slower, especially on ideation. But then again, and the content would be a bit adjusted, but nothing was ever censored. So that was what was allowed to teach and bearing in mind that entrepreneurship itself is illegal in by law. And so yet the first course happened 2014. And this tells a lot about the atmosphere. So now when we are getting into the mindset and about the perceptions on the economy and the social change, the atmosphere in the classroom was very warm. And of course, in the beginning, it was very strict. But then we started to talk about dreams and ideation for solving the problems in the countryside. And it went towards a very open and addressing the problems in the country, which was one was recycling company. So we had a pitching competition after every course where the winner would get chocolate or a football or something like that. So in a way, it always was about ideation. Now the font is. And then 2015, it was a healthy fast food. What I mean by healthy was that students got often sick on the street food. So they wanted to industrialize the street food. And so basically again, addressing what's going on in the economy, in the society and how to improve that. And it's all the that people are more and more busy. So same issues apply in Pyongyang than here. Like people have same worries, love, time, money and very, very common problems. And what happened in this was there was very interesting marketing strategy. So we often engage in discussions that how do you do marketing in an officially socialist setting? Well, you need to work your ways out to what is allowed, what is not. 
It was that the group wanted to develop a rumor based system where they would acknowledge that the chefs in their venture would have blood type B. Why would they say anything like that? That we have blood type B chefs. Because the common folklore goes that blood type B women are good chefs. However, everyone knows that it's false. But yet you could spread that kind of rumor. So they really played with what is real, what is assumption. And so it was always kind of localized, addressing the local environment. And then 2016, there was a Pyongyang startup week, which you can see, this is a rather symbolic picture. And there we had prizes again, we had footballs, and I had a team of six professors with me. And now we are going to see a short video of what happened. Let's start from. North Korea is not known for a vibrant culture of entrepreneurship. But this week, narrow as it might be, this is startup week in town now. There are a lot of people who are interested in this. Western business school professors have flown in to teach for a week at Pyongyang University of Science and Technology, the country's only private foreign funded university. The participants are being mentored on how to write a business pitch and put together a PowerPoint presentation. Instead of real products, they are using modeling clay to make props. And innovative business remains a long way off. The country's Stalinist economic system means real startups with real investors will have to wait. But the excitement of possibility pervades the auditorium as make-believe businesses make real investor pitches. One startup sells chemical and nuclear and food protection gear made out of crab and shrimp shells. The William King was an insupposable sound in the cycling venture. Startup week is a dry run for the real thing. So going back to this, what was also mentioned in the video, modeling clay. First of all, taking modeling clay through the customs of China and the customs of the DPRK when it resembles explosives. I'm very happy that they made it to the country. And then what was really surprising was when you bring out a playful element, when it's a sensitive topic, humor and play quite often open a way into safety. And for example, this particular building is used to cure mental issues, mental illnesses; acknowledging that people might have those is already a great achievement. And then adding a bit of playful element into it. And it was good fun. And there is more, like if you want to read, we just published earlier this year an article about it and it's not behind a paywall, no. And then there is an article. Going towards the ending of my slot, how is the economic reality, based on 2012; I've been there end of 2017 the last time. And there are people who have been there in the audience who have been there more recently who can then engage with the discussion. So there are marketplaces that are not official, but becoming semi-official and being taxed. And which is a kind of hybrid model of socialism and capitalism. Of course, it's not talked about as the term of capitalism and it shouldn't. It's a hybrid. And there are changes of policies on how to, what's the alignment between being an entrepreneur and a state-owned enterprise. So things are slowly changing there, even though we hear very little about what's going on. And then as I said earlier, it's of course, keep in mind that yes, the things we hear about the country, yeah, true, surely, and at the same time, it is a place where there is a lot of fear.
Yet, you can see quite a lot of hope. You can see that people dream about the future and they dream about love. Love is always a beautiful topic to discuss. It's also, it brings the hope and then discussions often were about the money and the mindset of quite often a common question was, hey, Yucca, why does the rest of the world hate us? Why do the rest of the world make jokes on us? Then I said, well, I'm not a politician. I shouldn't be in talk with politicians, politics, but your country might be doing something that the other country is perceived as wrong and fair enough. But the mindset in the country is that it's victimized, that the country has failed. That's commonly acknowledged. There is no the utopian nation anymore that it's the greatest country on the planet, but it's the fault of imperialists, particularly the U.S. And it's the northern European, like why certain things were clearly opened, allowed to teach and allowed to observe us, that the northern European countries seem to be not part of the imperialist clique. And the mindset is also on very eager to watch learnings, to watch new knowledge. And then what's going on in the economy is 2012, 2013, they weren't that many taxis on the street. Now there are how many? Eight companies? Six. Six taxi companies. And so like you see these and traffic jams, you see these developing little by little and that it's now this is from spring. These companies planning to cross Sinu choose first for a known special economic zone, which basically means a massive step towards privatization. And also already the border areas, you can see a lot of change in there. And of course, Western countries can impose as much sanctions as they wish. But the studies, the research in Stanford, the research in Princeton, best universities have concluded that the sanctions don't work. Instead, they make the misery of the people at the countryside much worse. And it's when you look this, this is import and export from China and import and export from other countries. So you see like how it's going on. And these are not reliable statistics, might be much, much more. So keep that in mind when these type of policies are discussed. So what I wanted to bring forward before we land and the discussion is that the country is developing its own isolated infrastructure, whether we want it or not. It's happening. But with educational engagement, perhaps not for the persons we were teaching before, but for the future generation, we could bring a bit of bright the future. And now we are going to explore what's the role of tech in this all. And this work finished 2017. If you're interested, what happened next with actually with you, how it's, there will be a talk, funny enough. Next stage, I'm starting at six o'clock about Beirut. And now it's off to Will. Cool. All right. So I'm going to sort of be giving a somewhat different talk on the tech side of things for the other half hour that we've got or so. And looking specifically at how technology has sort of this arc of what technology as a landscape looks like in North Korea. So I'm going to start by talking a little bit about the history of like the last 20, 30 years in particular of what these companies and corporate entities look like. Corporate is maybe a little of a strong statement. And then what we know about the current state of internal technology and sort of the line of what international engagement between North Korea and the rest of the world technically looks like. 
So there were a set of government labs or efforts to begin this current line of computer technology that emerged around 1990. Both the PIC, Pyongyang Informatics Center and KCC, the Korea Computing Center got established under a three-year plan that happened between 88 and 91. The Pyongyang Informatics Center was sort of the first of these labs. There's a few others that have emerged subsequently. You saw that then coincide in the late 90s. There's a big famine and sort of this time that we maybe see as really hard for the country. And that meant that there was a really strong drive for entrepreneurial engagement where these corporate entities that had been formed five years earlier are now going out really aggressively to find ways to get foreign money. And sort of trying to get hard cash to support themselves because they don't have the same level of state support that they were maybe have had in the past. The planned economy is falling apart internally. For individuals' lives, if they can end up working in China or outside of the country, they live a much higher quality of life. And so there's this very sort of scrappy, externally facing view there. And then that international expansion continues through a lot of the 2000s as they realize they can actually make money on this in an ongoing way until sanctions hit in 2012, somewhere around there, as nukes sort of wind up and you get a disengagement policy forced on them and they start retreating internally. And we start to see a lot less of, especially these like big brands that are sort of known externally because they are all targets of sanctions now. So three of the entities that sort of still exist and have been the same entities through that whole transition, Kim Il Sung University has a technology program that was sort of where the academics are, but that ends up blurring quite a bit into KCC, the Korea Computer Center. And then there's the Pyongyang Informatics Center that's sort of the counterpart that's the other one that has its own set of software. With KCC and PIC, you can think of as conglomerate entities. They look a little like Hyundai or one of these big Korean conglomerates in the South or like a family thing that just does a bunch of things in this space. KCC in particular, up until 2005, was chaired. So the sort of patron who's managing this whole thing is Kim Jong-Tek, Kim Sung-Tek, the guy that got assassinated in Malaysia that was like the half-brother of Kim Jong-Un. So these things are fairly tightly tied in at sort of top political level of top-down management. So Korea Computer Center, we see in the 90s, there's a KCC Europe that comes out. There's one guy in Berlin who thought he could make money outsourcing to Korea. They managed to get something like a million dollars out of him. And with the restriction that he could only run the servers in the DPRK compound in Berlin. So the why is this type of activity happening is the DPRK embassy system is that all the embassies need to be self-sufficient, which means that they need to create their own revenue. So this was an example case on that. So a partnership between KCC and whoever was entrepreneurially minded in the DPRK's German embassy at the time realized they could make money on this. And they sold this KCC Europe entity the rights to the dot KP top-level domain. And it managed them from servers in the DPRK embassy compound until that sort of technically got mismanaged near the end of 1999 or something. 
And then there's a period where KP was offline and then DPRK eventually just sort of reclaimed it and has a different entity running it now. Another entity that's KCC affiliated, they sold a game of go in South Korea. They also have entities in China, Singapore, Vietnam, and sort of a bunch of offices that were doing contract programming, that were trying to sell software. Some of those still exist. I think one of the points there is that this is pretty deeply rooted in a top-down management, combined with a bottom-up set of sort of people just sort of going off and trying to make money. We see that continuing now, but with an inward-facing view under the current sanctions regime. So people are now focused on how they make money from the internal population because that's what they have access to. There's sort of, I guess that's the at least legitimate view, which is we see a bunch of KCC things of like importing phones for the local market because they can get hard money for taking a Chinese OEM, buying a load of phones from them, and selling them on the internal market. Also what's clearly visible is the development of these internal markets, the Pyongyang trade fair being one example, where there are thousands, thousands of people. In 2013, 2014, 15, still there were some foreign companies, but as the sanctions tightened, it went towards the domestic, and especially on the healthcare. Healthcare and health tech, a very peculiar phenomenon, North Korea health tech, is that there was one floor full of it, because the healthcare system has collapsed and the country is sanctioned, obviously something gets developed, and it is one big consumer party, the Pyongyang trade fair, like it's a lot of dollars going around and that's an example of that. The flip is the people who've been externally facing because of sanctions and the difficulty of doing a legitimate international business, you're seeing those groups that were maybe perhaps previously a KCC lab doing contract programming now turning to malware or hacking Bitcoin or trying to do these sort of crime-like things as ways to get money, where they just sort of ignore the sanctions because they don't need some foreign partner to actually legitimately buy things. These fall within little fiefdoms, so a lot of the ministries or top-level political things will sponsor internal groups to provide technical things. This is an example of one set of businesses that I think this one's owned by the Ministry of Light Industry that has an associated bank. The stores would like you to use only the card from that bank to buy things from them, so they just sort of have this whole little top-down world and as you go to different stores that are owned by different ministries, you have a different world. Each of those ministries would have some software development thing that's like they've got their brand of cell phone as well and their tablet that they're importing as their way to make money. It's sort of within their little subset of the family or the country. So current state of technology, there is hardware that is reasonably easy to acquire. Businesses are coming in from China. Phones, mostly what you can buy, at least legitimately, are OEMed from Chinese companies and then software specifically for North Korea as mandated by the government and then sold. There's been a country-wide emphasis on science and technology since at least Kim Jong-un, or before. So they get a fair amount of leeway and resources and hype. They build a bunch of showcase buildings.
They're very proud of their electronic libraries and some of these evidences of science and technology. Yeah, and what's behind this and also what I discussed is that there is a massive policy change. So it used to be military first, and all the resources, and partially that's why the famine happened, the military first policy. Now there is a big, big change towards developing a sustainable economy within the country and that's what has happened for the last couple of years, like a really big push on resources for that. The main area that the government is concerned about and that there's restrictions is around connectivity, so the ability to communicate with other people. And so here are some examples of modern, last couple of years cell phones and some of the brands. You had a period where Wi-Fi was just completely disabled. They spent a while where they would try and get the OEMs to not put Wi-Fi chips, to not populate that chip, and that turned out to be a thing that the OEMs got confused by because they're integrated circuits that have Bluetooth and Wi-Fi, so they would just disable it in hardware. Now they're sort of feeling comfortable enough that they're reintroducing Wi-Fi. They've shown a couple of things in the last couple of years that are sort of weird where they're like, we have this street is now Wi-Fi enabled, but you need to get a SIM card to use the Wi-Fi, which sounds like they're doing something pretty funky. And they have been... A comment on that funky news, in terms of networks: the network was provided, the first one, by Orascom, which is an Egyptian provider. And so currently there are other operators as well, which are local, but there are two different networks, which are the foreigner one and then the local one. And you could buy a SIM card for this. And the last time I bought it, it was $250, which got you 50 megabytes of data. So you do not use it too extensively. It's worth noting that the price scheme for locals is totally different. So they're not paying $250 for 50 megs, but they also generally do not have external internet on data. I think pretty much anytime... So I guess the other side of that is these phones, for a while they would sell them to you for money. And typically they would ask for a hundred to 200 US dollars for a phone. I never saw a local actually paying that. The locals all would have vouchers and it was going through the state distribution system for how a work unit would be allocated new cell phones. So that price was for some very small, rich subset of the population that had access to hard currency, they could go around and not use the state distribution system and just like pay money, or foreigners, they could try and milk some money out of. But that was not where most of these cell phones were getting distributed. That was happening through the SOPEC rationing system. And what is the estimate? It used to be 3 million out of a population of 23 million that have a subscription. Is it... Up to 7 or 8 million. Up to 7. Okay. So it's quite a big... They have computers. We talked about RedStar a bit. RedStar 4 exists now. I guess this is... I don't have a shot of it. RedStar 4 exists. It hasn't come out of the country. They sort of teased it on TV a couple times and showed logos of like, we've got this, but never took screenshots or like showed what the UI changed or what's new in it. And it's unclear if anyone has actually used it besides that they had an exhibition booth where they claim that they had a version 4 now. Sorry. This is a Koryolink office.
So this is where you go to register your cell phone, get it licensed, get SIM cards, buy phones. Koryolink is the Orascom subsidiary. It's actually divested from Orascom now. So Orascom, the original Egyptian owner, has sold it off. And I believe it's now... There's like a Hong Kong-based subsidiary that still has an interest in it. Yeah. No, that's... Sorry. I'm getting these mixed up. So the other side of this is the wired network, which is Starco. And that was a joint venture initially with Loxley Pacific, which is a Thai company. And that now, Loxley Pacific has divested and it's a Hong Kong guy that's the partner for managing their wired connectivity out to China. Software still gets installed physically in general. So they have app stores, which have lots of pictures of all the various video games. Or at... I'm sorry, that photo's blurry, but that's one of the exhibition halls. And you could go and people had a little stall and would go take your phone and load apps onto it. But that's how you get apps. And one word about the software or games. Then having a discussion, for example, when I showed that Angry Birds was really popular, I showed a version of Angry Birds on my phone. And the students were, yeah, of course we know the game. It's developed by our country and it's called the Slingshot Birds. So that's how games also found their way. They'll often repackage them. Just take the APK, change some of the sprites, rebuild an APK and put that on phones instead. We're in the midst of a transition in some ways. Most people have some watching of TV that may be at their workplace, but there is pretty high TV prevalence at this point. Even in the countryside, a lot of that comes with weird DVRs that play weird formats of things and are mostly meant for local TV broadcasts. But there are set-top boxes that are pretty widely distributed. Some of those are running an Android or Linux system as well. The companies and startups and technical efforts, a lot of what would have been direct ministry investments is turning into going through these corporate banking structures that have been set up, and that's more about someone else getting the cut, or they've got a different set-up for how they're going to take that tax on profits. In terms of how the country is interacting internationally, there are three ways or three different types of view that you can take. One is that they are a consumer of technology, primarily from China. They also are a producer of technology. They will still contract when they can. And then they are engaging and sort of taking whatever they can get for free in terms of humanitarian and educational support. As consumers, some of the Chinese brands, Gionee was a big MediaTek OEM Chinese cell phone distributor. They went bankrupt at the end of 2018. And then the CEO of that started a new company called Chenyi also in Shenzhen that I think also is now bankrupt. But there was a period like for a while all of the DPRK cell phones were basically rebranded Gionee products. And then in 2019, the new ones that came out were rebranded Chenyi products. So there's some relationship maybe between that CEO and DPRK. Some of the tablets have been traced back to be the same hardware that Huzhou, a Chinese company, is selling. So there's pretty reasonable evidence that they're going out to these companies, mostly in Shenzhen, and getting the hardware made there, which is what any other country is doing as well. So that's not surprising.
And there's some collaboration in customizing the operating system based on the requirements of Pyongyang. As producers, they've got some websites that are still up. Silverstar China got put on the sanction list a year or two ago as like being a DPRK consulting company. The CEO is North Korean. It's like not particularly hiding. It just was registered as a Chinese company. They still run their website. They claim that their app store feature is that they claim to have written a Fox News election 2016 app. So I think this gets more laughs in the US where we are worried about Russian interference. It's like, well, the North Koreans claim they made like our election apps. I'm not sure why that one hasn't gotten much press. And then they have other consulting things as well. I think we have people in the audience who are more familiar with that than I am. Sanctions. So there have been sanctions for a while. There was another wave that got put in in 2017 or end of 2016. This is when US citizens stopped being able to go. US passports now are not valid for travel, claims the US State Department. So it's the US State Department that gets me in trouble if I were to go. As you go and ask for a one-time passport that basically has this little stamp in it saying this is valid for one trip to DPRK that they have only given journalists. They started enforcements this year basically as things went back to being not great. There was a period of sort of bromance between Trump and Kim Jong-un and that's starting to fall apart. What that sort of has meant is earlier this year the US government started, it had claimed that people, foreign nationals who had traveled to North Korea wouldn't be valid for ESTA but hadn't been enforcing that until this year. So now if you've traveled there, like if you've traveled to Iran, you have to apply for a full visa and you aren't valid for an ESTA visa waiver. And then recently they arrested Virgil Griffith, a US citizen, for traveling to North Korea earlier this year. You run a blockchain conference? He wasn't running it. He wasn't running it. The North Koreans were running it. No, the Korean Friendship Association, a tour company, was running it. Blockchain conference. But it's really just another level of sort of enforcement and causing fear and sort of trying to break down or mess with that relationship. Plenty of bad decisions in that one. A big list. And so one of the other things is that there is a lot of this contracting going through China that's just going to appear either it's a subcontract where the external company doesn't actually even see that there is this subcontracting happening. Since most of the business is through China, like the opacity of that means that you may either be working with a North Korean company that's in Russia or China and not notice that or it'll be a subcontract that you aren't even told about as a way that the Chinese company is saving costs. And then finally education. So PUST is still going. They're using third country nationals so no US citizens anymore but they're still running a working university. There are other efforts. Choson Exchange is based in Singapore and takes North Koreans out to Singapore for weeklong trainings in entrepreneurship. They've been running those and they've also had people go into the country and run workshops there. It's seemingly a very successful program. And then there's I think more engagement on that educational side than you would expect.
There's a professor at UBC in Vancouver, Canada that's been having North Korean academics come out to Canada five or six a year every year. That program's still going. She's been going in three or four times a year. So there's, you know, there are people who are still able to walk this line of living and working between North Korea despite sanctions. So that line is growing thinner but it still is there. In terms of technical capacity, the educational system, I think I've mentioned this, is really a traditional Asian educational system that places a lot of value on rote memorization, much less on creative thinking. That doesn't translate super well into programming. So a lot of the computer science students that you're going to see who have gone through sort of a standard educational process, you know, can rewrite code samples very well but can't debug very well. One of the things that they do do is they will air gap access to the internet and access to the internal network. Since the PUST campus has internet access, that means the students don't have access to the internal network, which is what they would normally have access to. And that means that they are normally, they're mostly acting in a disconnected way while they're there. So they'll have access to a LAN but they won't have external connectivity in general, which is pretty limiting for them. Many of them express that they prefer being at Kim Il Sung or Kim Chaek University where they would have intranet access because they can share files much more easily than they can here. Yeah, so I think that's it. We have a few more minutes for questions, hopefully. We seem to have 15 minutes. I don't know about that, but I think we have an angel who will help us. Yeah. This was one hour ago. There was a sign that got popped up that said something like five or ten minutes left. Okay. I think we may be getting close. But are there questions? Any questions? What kind of jobs did the students get? What types of jobs did the students get? Actually, some of the most talented ones ended up as PhD students and then, it seems, in academia, in the local environment. So basically professors in their system, and then some of the students ended up in the banking sector, which was also in the talk. Then obviously, especially in the business, in the finance and management, is something that in theory is international trade. But of course, there is not much trade happening at the moment. But we get very little information on that unless we see, of course Pyongyang is in the end a small place with the market, so you might bump into students. On ECE, some of the early graduate students did end up making it into KCC. So yeah. Hi. I have a mic. I'm back here. I was just wondering, can you talk about how, I would be curious to hear from both of you, how consumers of technology in North Korea learn about technology, because you talked about how they can go to these shops to get apps installed. But is there advertising for things? And through what medium do people learn what they want to get or what they need to get on their phones, for example? Thank you. Excellent question. I'm slicing this in two, in terms of advertising and in terms of the new knowledge. Basically new knowledge is entering the country at a very increasing speed. Why is that? It's because of the USBs and because of the awareness in general. If you ask directly, have you watched foreign movies? It might be a delicate topic, as one-on-one discussions are not possible on the campus, for example.
But everyone by default has consumed foreign media and even it's being monetized. Like information is very valuable, obviously. If you have something that can be a trade-off, then so it comes both. It's not the information that is being parachuted. It's that's probably not the most valuable info, but that is coming through the markets and those is one. And then the notion on advertising was that we often encounter discussions on that we don't need advertising in this type of country where it's the planned economy. However, the discussion then changed when there were the first advertisements of local products on a stadium, for example. And you could see on the way to the new Pyongyang Airport that there were car advertisements. The most default one was rumors. So basically... So word of mouth. Word of mouth. Yeah, within. Word of mouth is an important thing, especially for things like consumer technology and phones that you'll see your friends or whatever someone gets a tablet and everyone wants one. The 2012 was the first consumer smartphone, the RE-RONG, and that allowed taking pictures and sending them to friends and that was a big deal, right? That that was both a show of wealth and also this like new capability that people wanted to have. So there is a latent desire for this sort of stuff. You've also got a core elite population that is able to travel to China and sees a lot of this technology just in common use in China and then wants that as they come back into North Korea as well. There is a fair amount... They're really into like infomercials. So you can watch the KCNA, the TV, and they upload a lot of it to YouTube, although YouTube keeps trying to take it down. But you can find sort of daily uploads from North Korea on YouTube of the current daily broadcast and there's just a lot of infomercials about internal products that they want you to buy. There's a lot of quack health science. Like they'll sell everything as a health supplement. So there's a lot of that, but also sometimes you'll get advertisements for new tech products. There's a question over here. Yeah, thanks a lot. My question is on you briefly touched upon the ministries and the role they play in the whole thing in the IT infrastructure. Could you talk a bit more about are they competing with one another basically or how does this look like? Yeah, I can tell on my experiences, it's the reason, indeed, a lot of competition between the ministries, those who obviously interact with foreign entities and then a big power place, those who are engaged with the special economic zones. And if you were in the education, like I was, those went to specific ministry as well. And you never knew what the map is like and the decision making is done like that you, it's very much in the darkness, but definitely a lot of competition on, I don't know about tech. Yeah, I mean, so you've got entities like KCC that are direct, sort of, there's under a technology council. A lot of this falls under the ministry of post and telecommunication as like the entity that's setting regulations and restrictions. But then you can have a company or some part of another ministry like the Ministry of Light Industry do an actual import of OEMs and work with KCC to do that. So there's some, both collaboration here of, if you need technical services, you go to one of these approved labs because that sort of de-risks you and you want to lower your risk and liability of getting called out for messing up. 
But you can still make money by doing the actual work of doing an import and selling around a run of devices. Thank you. Are we done? I don't see, I see two more hands over there. Okay. Please say that you have questions before so I can run to you. Let's finish up the two questions and yeah. Thank you. The Otto Warmbier incident, the Otto Warmbier incident. To what degree did he provoke what happened and did you feel threatened by it? And then there's another question. What role play Korean soaps would trickle in from China, South Korean soaps? Okay. The discussion around Otto Warmbier and then on the South Korean software, what role does it play? So a couple of words on that particular case. To set the ground is that obviously the hostages and persons in North Korean prisons, it was all always about nationalities of U.S. persons. And that was like you were in bigger danger than I would be as a Finn if I would fool around which leads to the issue that if you are in North Korea and you are U.S. citizens and you go to a floor in a hotel that is a surveillance department basically and you steal a propaganda banner from there, that is quite a big offense to provoke in there. So that being said, what then happened is not justified by any means, but if you are a visitor in a country and you break the law, obviously some type of punishment will come, but it's there a bit of provocation, but then how it was handled, of course, that's another, like it was a horrible accident. And nobody knows what in the reality, what happened. What's your take on? Do you have a take on that? So I guess the thing that we heard is that there is sort of a culture of fear and lack of responsibility. And one of the things that happened was during sort of the negotiations and release with the U.S., the diplomats from North Korea didn't actually know of Otto's condition that the hospital had sort of not told anyone about that. And so they thought they were releasing Otto in good condition, like up until the couple days before. And so that sort of prompted a different response from the U.S. than might have happened if they had actually realized what was happening and had dealt with things in a cleaner way. Like part of that was as a side effect of the internal culture, they messed up pretty badly politically in how they handled the situation. And then that led to a bunch of reverberations in terms of sanctions and outrage. For South Korean dramas, I mean, I never saw any. I hear that that is more of a thing in two circumstances. One, there are people near the Chinese border, that that's a place where there's a more sort of crossover that some people have TV sets that can pick up Chinese TV. And there's sort of just a bit of a black market that happens back and forth where that's a thing that people are walking back and forth or otherwise can get stuff physically. In Pyongyang, that is going to be replaced by privileged citizens who just sort of fly to Beijing and have a USB stick and are above the law. And so it's still, you know, off limits enough that you're not going to see it, especially not as a foreigner. But by all reports, it is happening. So for other movies that were like Disney or action movies or software that's not state-approved, that was a minimal enough infraction that you would see students with that stuff and they didn't care too much that you saw them with it. So that sort of is normalized to the point where it's not going to get you in trouble, that you are watching a Disney movie. 
But South Korean stuff, I think, was probably more sensitive, so that wasn't something that I was going to catch a student with. You've been waiting for a while. I'm so sorry. But I think we are at the end. If it's quick. What kind of languages do the students learn? Like English, of course, and what kind of programming languages do they learn or have learned? So most of them speak Chinese as a second language. Some speak Russian. Some speak English. C. Cool. I think we are at the end. Thank you. Thank you. Thank you. And thank you, our host. Thank you. Big thank you. Thank you.
|
The Democratic People's Republic of Korea (North Korea) is a hot topic in the media. The peninsula is changing rapidly, but how is that reflected in life on the ground? What is it like to live in Pyongyang? Are the externally reported societal changes and developments in technology also visible in everyday life? This talk will describe modern urban life in Pyongyang, and the recent forces driving change. The talk will particularly focus on observations around the state of youth mindset towards change and technology. For example, what are the future elites' attitudes towards entrepreneurship in an officially communist country? What small signals of changing attitudes can we observe that might influence the opening of the country? Presenting the realities of this environment leads us to a demo of consumer technology, and suggests that the opportunities for both societal change and technological development might be broader than we often see. We will present this deep dive into North Korea from the perspective of two foreigners who have been spending months at a time in Pyongyang and have been studying it since 2012.
|
10.5446/53022 (DOI)
|
our next talk is going to be by Sumi. It's called Internet Access and Voice over IP as Commons, open commons infrastructure. Sumi? Yours. Thanks. I'm talking to you about the cryptic subject of Internet access and Voice over IP as open commons infrastructure, which means my personal, absolutely incomplete overview of what is out there and my personal wish list of what I would like to be there. So first of all, I'm going to try to improvise some kind of definitions, because I haven't found usable ones anywhere. I'm going to give you some examples of what is out there, my view on what works, in which aspect, and what doesn't, and my personal wish list; here it's called perspectives. I haven't found any suitable definition of open infrastructure. In analogy to free software, I would say that you should be able to use the infrastructure for any purpose. You should be allowed to understand how it works, what it's comprised of, et cetera. You should have the freedom to extend it and the freedom to improve and modify it, which is kind of more difficult for infrastructure than for software, because you can easily destroy the workings for others while you improve it for yourself. So there would probably need to be some restriction on that. An example would be the Pico Peering Agreement, which is used in the Freifunk context and other mesh networks, where the focus is very much on the traffic and the routing information and doesn't really cover anything above that scope. For commons infrastructure, I don't know how many of you are familiar with the concept of commons economy. Let me see your hands. None. One. Okay. So then I'll take a bit of a bigger turn here. Commons originates from the commonly used land in a community that belonged to everybody, and everybody was free to use it. And the commons economy is trying to recreate new commons. So without barriers to access, and not in the private property of any entity. With different means, like companies owned by the employees or by the customers, like constructions where you have formally private entities owning things in order to fit into the capitalist system, and then you try to take the control out of these entities and into the hands of the end users again. So my short definition of commons infrastructure would be that resources should be accessible to all members and that the owners should be identical to the users and/or the employees and admins that use and run the infrastructure. The prime example in this context for open infrastructure would be Freifunk, or respectively its Austrian version, Funkfeuer. I guess most of you are familiar with it around here. Do I need to explain Freifunk to anybody? Fine. So what I found notable about it is that it has never grown much beyond Germany and Austria, and that though it does provide all kinds of services on the network, most users actually use it for internet access and not much more. Guifi.net, if I pronounce it correctly, is like my most beloved, most favorite example. Let me add at that point that beyond Freifunk I don't have very much experience with these communities at all. So if there's somebody here from one of these communities and I'm talking complete bullshit, please feel free to correct me. This is kind of a mixture between open infrastructure and commons infrastructure. So you can build your own stuff, add it to the network, offer whatever you want, but you can also pay a company in order to set that up for you. 
It's quite large, around 40K active nodes, around 50K kilometers, no, what would it be? Megameters? 50,000 kilometers of Wi-Fi links, plus some fiber links. Also here, mostly internet access, and largely limited to Spain, to Catalonia. What I've also included here, though I know basically nothing about it apart from some of the members, is FFDN; that would be an example of kind of commons infrastructure. It's a federation of non-profit ISPs. It has never grown much out of the borders of France, but it represents the interests of, and also helps with the infrastructure of, small community ISPs. What I like very much, even though I know even less about it, is Telecomunicaciones Indígenas Comunitarias, who offer GSM networks in rural Mexico to mostly indigenous communities. At the moment, as far as I found out, they probably have 16 communities with around 3,000 users. The nice thing about it is they are able to offer their service for less than two euros a month, which is really important in these communities because the income usually is even below 200 euros a month. What we are doing is telecommons, also a kind of commons infrastructure for VoIP service, mostly to eco-villages and housing associations and these kinds of communitarian living thingies, also some commons economy organizations. It is jointly owned by users and employees. A special feature is the solidarity-based economy, which means everybody pays what they deem adequate, what they feel it should be worth to them and what they can afford. Everybody gets the service they need independent of their payments. My view is that open infrastructure has been quite successful. Its limit, it seems to me, is that projects tend to not grow out of one region or one language, which also makes some sense intuitively. For pure open infrastructure, you mostly have kind of a nerdy prosumer base, people who also actively develop the network and administer it, and kind of their friends and family as a consumer base who get their support from these nerdy admins. And, judging mostly from Freifunk experience, and there's a workshop about that right now over there, at least from the development of the Leipzig community, but I think it works in general: the success of these networks is largely dependent on the alternatives for access. Like, if some ISP comes along and provides high bandwidth links for low cost, then half the user base will be gone. On the other hand, if you have a rural community where people definitely need more access than they can easily get, there will be a chance to grow a community. For commons infrastructure, it's a bit different, because you usually have smaller projects. They usually don't get as big as open infrastructure. But you have quite a stable development. I've never, well, not never, but I don't hear much of commons infrastructure projects that just vanish or shrink tenfold in size. However, they're also usually regionally and socially quite limited. And what I'm asking myself is if there wouldn't be a way to combine these two in a sensible manner. So from a user perspective, I would like to have the availability and reliability of a managed infrastructure. And also, I wouldn't want to have to be really tech savvy in order to be able to join a network and use the services. These two are easily fulfilled by any commercial provider. 
From a political perspective, I would like to have openness in the organization, meaning that anybody can easily join, and transparency on how it works, how it's financed, what is needed; and to have it be extensible, so that what works in one region can easily be deployed in another region, and new people can come in and take part. And the same from a technical perspective: that the infrastructure is transparent, that it's of course built from open source components, so that anybody can understand it, can add to it, et cetera. So my personal wish list would be to have alternative ISP and VoIP services available, let's say, nationwide for the moment. I would love worldwide. With professional management and support, and where users can choose their degree of involvement. This is, how much time do we have left? I have one idea in which direction it may go. I don't know if you're familiar with the Mietshäuser Syndikat, maybe some of you. Who's familiar with the Mietshäuser Syndikat, or with DFN, the network provider of the German universities? So my idea would be to have kind of small independent organizations who can be flexible, do what they want, be regionally present, and then to have on the top layer somebody who can provide the big infrastructure with higher costs, the legal background, et cetera. And also to provide some kind of a template on how to go forward if you want to found your own ISP for your five friends. And to have some kind of dependency between the two layers so that neither the central body nor some satellite can just wander off and do crazy stuff on surveillance, alt-right, I don't know what. This would be kind of my vague idea of where I think this might be fulfilled. And there I would invite you to join our workshop, which will be at the dome right now when this one finishes, or also just to meet up right after the talk, maybe here in the direction of the bar, so we can share first ideas. I don't know if you still have time for some open questions. Okay. Thank you, Sumi, for this talk. We have a couple of minutes left for questions and answers. Do we have something from the Internet? The Internet is pretty quiet. Anybody here in the audience interested in some more details or a question? There is one. I'm going to go there. How does one get started doing something like this in their own region? Can you repeat it? How does somebody do that on their own? How do you get started doing that? Is that right? You don't get started on your own; you need at least a few people. But there are, I didn't count them, dozens, maybe two hundred, really, community ISPs and lots more other small ones that I guess would definitely be interested in such a structure. So if you think of something useful with three people and put the concept out there, I think chances are not so bad that it will pick up speed. Okay. Okay. Thanks. Another question? Okay. I want to thank our speaker here, Sumi, with a small present from the OIO stage. There we go. Something to drink and something sweet. Thanks. There we go. Thanks again.
|
Some initiatives are trying to provide internet access and VoIP dial-out in a user-owned (Commons Economy) or completely open infrastructure. We will present the state of affairs and invite you to a discussion on the possible perspectives.
|
10.5446/53024 (DOI)
|
Hello, and welcome everybody at the Open Infrastructure Orbit. Here you find all the different organizations that are involved with open infrastructure. So we're very happy to also have Matthias and Katarina from the Afra in Berlin here, who are working on qaul.net, and they're talking about how you can do that in Rust. So thanks a lot that you're here, and that's your applause. Yeah, qaul.net is an internet-independent wireless mesh communication app. That means that we try to communicate between the end users' devices directly, in an interconnected way. So every device can communicate with all the other devices it is connected to, and even further if the other device is connected to others. This has several upsides. One of the upsides is we build our own infrastructure. We are not dependent on the internet service providers. We can also communicate if the internet services and mobile services are shut down, or if we don't want to go over them. The idea is to have a zero-config, easily usable app. So multi-language, with a nice user interface; this is version 1.0. It's cross-platform, easily installable on all the different platforms. We have written it for Linux, for Windows, for OS X, for Android, and for iPhone. And this all was written beautifully in C, and it was used, or is used, all around the world. It has a modular structure that goes even into the routers, and you can build really modular networks where you can add and add and add new devices that extend it. But there are some challenges. We were using in the first version the so-called ad hoc or IBSS mode, which is dying and not usable in the devices anymore. We had over 400,000 lines of C code, which was hard to maintain, and people didn't like too much how it was written. Even our user interface, which was an HTML5 user interface, got much too big and was really unstructured. So what is really important, if we want to go kind of off the grid, if we want to be able to communicate with our devices, is that our devices nowadays are the mobile devices, and mobile devices don't have administrative rights. So to be mobile first, we need to work without that: we cannot have administration rights, we need to get rid of the ad hoc mode because it's not supported anymore, and we need to do user space routing. All right. Can you hear me? Great. I can hear myself, so you can probably also hear me. So I've been working on qaul.net for a few years now. I started with a GSoC project in 2016 doing new security stuff in the old code base, and since then we've been thinking about how to restructure the code in a way that makes it more extensible and maintainable. The old code base, you saw this diagram with a bunch of boxes. In theory, those existed, but in practice, over time (the project has existed since 2012), modules started bleeding into each other, and it became really not fun to work with. And so for the rewrite, we started thinking about the layers that we might want to have. We have to replicate a lot of the work that was being done for us by dependencies such as OLSR in our own code, in user space, because we don't have the permissions to actually do these things with dependencies. And so coming from the bottom up, we had to think about network interfaces and the way that we interface our own code with whatever is there on a platform, which is the bottom two modules that you can see in the stack. Then we had to think about how to do actual routing. 
So if you have a network of multiple nodes, how do you get a packet from A to B that has to go via C or D in the middle? On top of that, there is a whole bunch of stuff that we'll get into. In this slide, it's only called the service API. And this has to do with the thought that we didn't want to just be one application anymore. qaul.net is very useful, and it can be used to do a bunch of things. You can send messages. You can share files. You can do voice calls over it. But fundamentally, it's still just a single application that people install and then do stuff with, and they're locked into whatever we come up with for use cases. And so both for a new architectural design and also to let other people extend this network, we thought about the concept of services. A service being an application that runs on a distributed mesh network without any servers, without there being someone who hosts something; that can self-replicate through the network and lets other people decide what they want to do on this infrastructure that everyone is building as a community. And so that's the service API, which is the, I would say, core component of the new stuff that we've been writing and something that we'll get into in a little bit. On top of that, we actually write some services ourselves. So we still do provide a messaging service, basically a decentralized Twitter, which so far has had a 140 character limit; I guess we have to up that now because Twitter is cool, I guess. We also have file sharing, so you can announce a file to the network, and it sort of works like torrenting, where you don't have to immediately send your file out. You can sort of announce it to the network and then people can get to you and get the file. And also, if other people along the way have parts of the file, they can get them from there instead. And voice calls, of course, which previously were called VoIP, but we'll get to that: we have a bit of a name conflict because we don't have any IP addresses anymore, so that is also something that we'll cover. And on top of that, we have a new web UI written in EmberJS, which makes it much more maintainable and just smaller to work with. But at the same time (the lights are being weird), because we have this API that people can build stuff on, it is possible for other people or even for us to add other UIs as well. So if you wanted to build something that is specifically Android, you can do that. If you wanted to build something that's a text-based terminal application, you can do that. You just layer stuff on top. So to go down into the, I would say, interesting bit of the routing a little bit, this is sort of a slice, again, just lower. You have the service layer at the top, which is messaging, file sharing, et cetera, and you have the service API. And then you have this routing core. It does a few things. It keeps a routing table, of course, which is something that we can't rely on the kernel to do for us, because we might need root for that, so we need to replicate that functionality in our own code. We also collect some link heuristics about packet drop rate and TTL and a bunch of other stuff that can help us factor in what is a good connection and what is a bad connection. You might not want to send all of your packets just to the first person who can receive them if that person has a packet drop rate that's really high, and instead maybe take a longer route which is much more sustainable. 
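To make the link-heuristics idea concrete, here is a minimal sketch in Rust of the kind of per-link bookkeeping and next-hop selection described above. It is purely illustrative: none of the type or function names come from the actual Ratman code base, and the weighting of drop rate against latency is an arbitrary choice for this sketch.

```rust
// Hypothetical illustration of per-link bookkeeping for user-space routing.
// These names are invented for this sketch and are not from the real Ratman code.

#[derive(Debug, Clone, Copy)]
struct LinkMetrics {
    drop_rate: f32,  // observed fraction of lost frames, 0.0..=1.0
    latency_ms: f32, // rolling latency estimate for this link
}

impl LinkMetrics {
    // Lower cost is better: a fast but very lossy link can end up
    // more expensive than a slower, reliable one.
    fn cost(&self) -> f32 {
        self.latency_ms * (1.0 + 10.0 * self.drop_rate)
    }
}

// Pick the neighbour (by id) with the cheapest link towards a destination.
fn best_next_hop(candidates: &[(u64, LinkMetrics)]) -> Option<u64> {
    candidates
        .iter()
        .min_by(|a, b| a.1.cost().total_cmp(&b.1.cost()))
        .map(|(id, _)| *id)
}

fn main() {
    let neighbours = [
        (1, LinkMetrics { drop_rate: 0.40, latency_ms: 20.0 }), // fast but lossy
        (2, LinkMetrics { drop_rate: 0.02, latency_ms: 60.0 }), // slower but reliable
    ];
    // The reliable neighbour wins despite the higher latency.
    assert_eq!(best_next_hop(&neighbours), Some(2));
}
```

The point of the sketch is only that route selection in user space can trade raw speed against reliability, exactly the kind of heuristic the routing core is said to collect.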
It also has a persistence module built in, which is something that's important to us for reasons that we'll get into. But it basically just means that every packet that gets routed can also be stored for some amount of time. It doesn't have to be routed immediately, and just because something couldn't be delivered to a neighbor doesn't mean that that packet is lost. Underneath that, you have the network interfaces. Only two of those exist at the moment, or actually only two interfaces exist, because we're still sort of early in the development of this rewrite, but fundamentally there's nothing stopping you from hooking this up to Tor or your local wireless mesh network like Freifunk or whatever. The two that we really care about on Android are Bluetooth and Wi-Fi Direct, which are direct replacements for the ad hoc mode that we lost, which has been deprecated. Maybe we can explain a bit further. The reason we have so many possibilities to interconnect is that in the first version we used what every wireless community network is kind of using nowadays, so ad hoc mode Wi-Fi connections. The ad hoc mode is a beautiful mode where every device is kind of equally connected, with the same role; you don't have a router and then followers. As this is not there anymore, we cannot replace it with other networking modes, but we can try to go with as many connectivity possibilities as we have and use, or try to use, all the different possibilities, which also kind of requires us to abstract this layer. What we have been working on in the last two years was also to get a concept that allows us to abstract all those layers. So, to go back to the top, to go again top down: the service API is essentially the core library of qaul, called libqaul. It's written in Rust, like almost everything in the project. It provides you a few endpoints to interact with the system entirely in user space. It makes a few abstractions over platforms, so that whether you're running on Android or on a Linux PC, there's no real difference in how you have to handle these things. It sort of gives you a framework to build an application on top of, without having to worry about the exact implementation of your platform. It handles user authentication. It handles messages, so things that you send into the network. It handles contact data, where people can make friends and assign trust levels and set nicknames and whatever. And also, it allows services, as I've mentioned, to interact with this library, register themselves and say: hi, I'm an application running here, please talk to me. The way that this looks in actual code: this is the initialization of an application running on qaul.net. The thing that's missing there, which is the todo, is that this is not initializing any network modules. So you're creating a router and then you're not attaching anything to the router. So the router is going to get every packet and go, cool, I don't know what to do with that, and save it, I guess. So you initialize everything from the bottom up: you initialize your network stack, then you initialize the router, then you initialize libqaul, which is the qaul struct that is created there. You can create users with this API, you can register a service, which in my case here is de.spacekookie.myapp. And then you can send a message, and sending messages has a few different modes that it can run in, depending on whether the recipient is a flood or a user or a group. 
So you can either address something to a single user or a group of users or, in this case, a flood, which basically means spread it into the network, and any node that exists on this network can get this message. The beauty of this is that if you have an application that you want to provide on this network and you don't know how many other people have it, because it's not standard, you can send out an announcement like this with a flood and say: hi, by the way, I am running here, please talk to me. And then you can find other instances of your application running on other people's devices that can then do whatever you want to do with it. On the receiving end, that's pretty simple as well. I have skipped all the initialization steps, but you just have a listener function. There's also a poll, and we're also working on making things asynchronous, but it's sort of a rolling working version where you can listen for this specific service ID, and then your code gets called for every message that this node encounters on the network that is addressed to the service. And then you can do stuff with it. So that's the sort of top level application building process. Then we get to the routing. Again, we have to do routing in user space, and we are orienting ourselves on Batman, which is a routing protocol which uses a pheromone-like, sort of distance vector approach. We are writing it in Rust, and I am bad with names, so I called it Ratman. It also does a few other things such as delay tolerance, which is the DTN part. What that means is that if you have two networks running in two physically distinct locations and you have people with bicycles, for example, crossing between location A and B, then if someone from A wants to talk to someone in B, the messages can be buffered on the person with a bike for the duration of that trip, which is why the routing core has a persistence module. They have to be stored somewhere. It has to survive device power-offs, and potentially it has to stay there for weeks until either the message is surely delivered or it can be deleted, because maybe a buffer has filled up or, you know, we are not going to fill up a device just completely. There is a maximum size of stuff that we are going to save, but we are going to hold onto frames and packets as long as we possibly can. Some of the stuff that we have also been working on is simulating this. This was actually a Google Summer of Code project by someone who is now part of the core development team. Her job was to figure out a way to get a bunch of inputs and then create transform functions, so that with relatively little work we can, you know, have a bunch of events and then replicate those events in slightly different circumstances throughout the network, so that the actual load on the machine simulating this can be low whereas we can simulate a lot of network traffic. This has been working pretty well. The networks that we have simulated so far were only like three or five devices connected with each other, because right now the problem is that creating these networks is a bit of a manual task where you have to hard code all the nodes and how they are connected, and once we build a thing that can auto-generate networks, this testing will be able to go much further. It doesn't replace actual physical testing, but this is a pretty good approach to test some assumptions in how we want to do routing and how certain mechanics are going to impact the link quality. Right. 
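The initialization and listener slides are not reproduced in this transcript, so the following is a hypothetical mock-up, in Rust, of the flow that was just described: build the router, hand it to the library, register a service, flood an announcement, and listen for messages addressed to that service. Every type, method and service name here is an invented stand-in and should not be read as the real libqaul API.

```rust
// Mock-up of the application flow described in the talk; not the real libqaul API.

struct Router;   // stand-in for the Ratman router (no network interfaces attached yet)
struct Qaul;     // stand-in for the core library handle
struct Message { sender: String, payload: Vec<u8> }

impl Qaul {
    fn new(_router: Router) -> Self { Qaul }
    fn create_user(&self, name: &str) -> String { format!("user:{name}") }
    fn register_service(&self, _service_id: &str) {}
    // "Flood" delivery: spread into the mesh so any node running the service can pick it up.
    fn send_flood(&self, _service_id: &str, _payload: &[u8]) {}
    // Call `handler` for every incoming message addressed to this service id.
    fn listen<F: Fn(Message)>(&self, _service_id: &str, handler: F) {
        handler(Message { sender: "some-node".into(), payload: b"hello back".to_vec() });
    }
}

fn main() {
    // Bottom-up initialisation: network drivers -> router -> library.
    let router = Router;
    let qaul = Qaul::new(router);
    let _me = qaul.create_user("alice");

    let service = "org.example.myapp"; // hypothetical service id
    qaul.register_service(service);
    qaul.send_flood(service, b"hi, an instance of myapp is running here");

    qaul.listen(service, |msg| {
        println!("{} sent {} bytes", msg.sender, msg.payload.len());
    });
}
```

In a real application the router would have one or more network interfaces (Bluetooth, Wi-Fi Direct, an overlay) attached before anything is sent, which is exactly the "todo" the speaker points out about the slide.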
To be able to navigate it easily and to support as many platforms as possible, we again decided to go with a web GUI, which is an HTML5 GUI written in Ember.js, to implement the current paradigms of usability and to have much more responsiveness than we had before: different layouts and possibilities for different devices, but all within one HTML5 layout, which is then connected over a JSON API to our application, and there it uses our API to communicate. It is beautifully documented, or should be at least; we invest quite a lot of time in that. And you can also communicate with us. We have a mailing list and we also have an IRC channel, which are our main community and developer communication interfaces, and we also try to have a weekly voice chat that you can approach if you are interested in it. So that's our development team. At the moment, if you are interested, we would be really happy to enlarge it and to have you join us. Thank you so much for the talk, Katarina and Mathias, and now we hand over to some questions. Hello, thank you for the talk. Can you say something about the scalability of your system? Not at the moment. We think that the scalability is going to be comparable to something that you can expect from a Batman network in Freifunk, for example. The routing algorithm that we take is very similar to it. People announce themselves on the network and then you have a routing table, just in user space instead of some kernel table. We don't know, we can't say for sure yet, because we haven't built networks that are big enough yet, but on the other side we're not making substantial changes to the algorithm that Batman uses that would indicate that we might have scalability problems down the line. Next question. Hi, can you speak a bit about the choice of Rust and how that's been? The previous application was entirely written in C, C99 specifically. We were looking at what we wanted out of a language, and one thing that we aim to do is run on certain routers that then don't provide a UI and that only become infrastructure nodes that know how to keep a routing table and how to process data. Because of this we were already looking at pretty low level languages. The question was: do we rewrite it in C with a better architecture? Do we rewrite it in C++? Or in Rust, which was kind of new at the time when we made that decision. It was a language that I had started having quite a lot of experience with because I used it in a lot of personal projects. At this point I'm also on a few Rust teams and I'm quite heavily involved in the community. The initial decision was made because it is a much more modern language than C++. It gives you a lot of the same benefits. It has other benefits. Of course it has some downsides, but we thought that because of the benefits that the language gave us, the downsides didn't seem so bad. Next question. Hi, thank you for the talk. It's very interesting. I've been looking for a few months now into decentralized things and mesh networks like this, and I was just wondering how would you compare this to other projects like libp2p or GNUnet or other projects? I'm just wondering: why again? In a way it's not "again". When we started the project in 2011 there was really nothing usable at all, so I never wanted to start yet another project of anything. Since then many projects have also tried to go down this path. It's a very hard path, as most projects that also try to be decentralized, or are decentralized, are just Internet overlays. 
Integrating everything into one application, from the infrastructure level, from the real network connectivity, over the services, up to the user interface and the whole configuration of the system: all this in one program is really painful, and I haven't seen an application really succeed in that. So one of the reasons we are rewriting it is that it was working really great back in 2012 until 2014, and it was still working okay on desktop systems, but not on mobile systems anymore. And to have an application that interconnects all those devices together is unique and is not there at the moment, and we hope we will succeed. We are also in constant contact with other projects and also try to check what they are doing, and most probably we can share some of the things, because the real things at the bottom of it, how we really interconnect, those problems are, I guess, the same for everyone. One thing to say there: I don't think we're really reinventing a lot of things. We're mostly looking at what other projects have learned over the last few years and then applying those findings in a different context and seeing what we can come up with. So definitely, I would say, without a lot of the other projects that have existed, for example Serval's approach to delay tolerance and how to store messages and communicate changes in journals, and advances in general routing like Batman and the derivatives that exist now, without those we would not be able to do what we do now. So it's very much that we're sort of trying to be the successor of some of these ideas and see where they lead. Wonderful. Thank you so much again. Thanks for your very interesting questions also. And now my question is of course: where can I find you during Congress if I want to contribute? I heard you have a workshop over there now, right? Right now we have a testing workshop where we are looking a bit more into the program, where we are, and can also find out a few things together. And then we are sitting a lot at the Afra, and there are some other workshops that I'm doing tomorrow and the day after tomorrow about decentralized networks and networking protocols. Okay, so I find that of course in the Fahrplan. So at 6:30 you find them in the workshop area, and if there are no more questions, I have a tiny thing for you. You can now choose whether you want some Mate or some chocolate as a tiny gift of the Open Infrastructure Orbit, for you as a thank you. I haven't slept yet, so I'll take the Mate. Okay, perfect. You want some chocolate? Yes, please. So thank you very much. Thank you for having the talk here.
|
Concepts, goals, implementations and the lessons learned from rewriting the qaul.net decentralized messenger in Rust. qaul.net is an Internet-independent wifi mesh communication app with fully decentralized messaging, file sharing and voice chat. At the moment we are rewriting the entire application in Rust, implementing our experience of 8 years of off-the-grid peer-to-peer mesh communication, with a mobile-first approach and a network-agnostic routing protocol which can do synchronous as well as delay-tolerant messaging. We are currently rewriting qaul.net 2.0 in Rust with a new network-agnostic routing protocol, identity-based routing and delay-tolerant messaging. The talk will show our learnings and the journey ahead of us at the alpha stage of the rewrite.
|
10.5446/53026 (DOI)
|
Good evening and welcome to the Chaos West stage. The next talk is about storing energy in the 21st century. Frank is a journalist mostly writing on manned spaceflight, but also working on energy storage. And he's going to tell us what his ideas are and what his knowledge about solutions for the future of storing energy is. Please welcome him. Hi. As was already said, I'm a journalist and I'm a podcaster. I write text. I speak into microphones. Doing presentations is not part of my job. My presentations are bad. I'm sorry about that. This talk is going to be about energy storage in the 21st century. If you expect this to be about the future, you're only half right. Because if you had a look at the calendar, it's 2020. We are in the 21st century. And one of the things we need to talk about is how we got here. And you might be a bit surprised by my first slide. 1859. Lead-acid batteries. Actually the very first kind of battery that was ever invented that was actually rechargeable. And it's still the most popular battery around. As you can see, over 370 gigawatt hours of lead-acid batteries are produced every year to date. And that's more than twice, or maybe this year it's just about twice, of what we have in production in lithium-ion batteries. So this is really a very important technology. And yeah, electric cars are not a very modern invention. In fact, around the start of the last century, they were quite popular. And lots of people will say, yeah, okay, this was the moment when everything went bad. But if you look at the actual performance of these cars, well, this was for early adopters. They had four horsepower, three kilowatts of power. They were driving at about the speed at which you would walk, maybe a little bit faster. And if you were trying to go up a hill: this thing still weighs a ton. I mean, it's almost as heavy as a modern car. They were not very good overall. When you have a real car, you need a bit more than that. These days, about 70 million passenger cars are produced every year. 25 million commercial vehicles like vans, trucks, tractors and so on are produced every year. And about 90 gigawatt hours of lithium-ion batteries are put into electric cars. So if we took all production of passenger cars and divided the batteries among them, it's 1.5 kilowatt hours per car. That's not quite enough. So the number of electric vehicles you see on the street is not because people are not willing to make more, but because they just don't have enough batteries for that. We need at least 3,500 gigawatt hours, and that's about 20 times the current production of lithium-ion batteries, just to supply passenger cars. All the other things like trucks, vans, other vehicles, maybe grid storage, all the other things that you need lithium-ion batteries for, are not included in that figure. But in fact, lithium-ion batteries are still a fairly modern technology, and there were others, like nickel-cadmium, also at the turn of the century. And yet others, like nickel-iron. And what happened with nickel-iron is that they replaced the cadmium with iron. In fact, it was still a bit more expensive to produce. But these batteries kind of worked. And that was a great advantage. On the other hand, nobody quite knew why they worked. In fact, when you see the formulas here, these formulas were only figured out around 1960 or so; before that it was very hard to figure these out. 
Because you couldn't actually look into the battery and do all the chemical analysis, and they needed near-infrared spectrometry to figure that out. Have a look at that formula, squint a bit and maybe subtract the OH here from the H2O: what you actually see is that here's the hydrogen, and it moves over to the nickel. And this was something that was observed, and somebody had the idea: you could just build a battery without any cadmium or any iron, you just have the hydrogen in there. And these batteries were actually quite successful, even though you might never have heard of them, because they were mostly used in space. The Hubble Space Telescope still has them. The International Space Station had them, or has them; they're currently being replaced by lithium batteries. But they were quite important, at least in that area. Especially because they could sustain a large number of cycles, like 20,000 or 50,000. And you have to remember that they're used in satellites, or like the space station, in very low orbits. So every 90 minutes they go once around the Earth, and they have a sunrise and a sunset every 90 minutes. So about 6,000 per year. And so you get 6,000 cycles every year. And your battery has to be able to sustain that. However, these batteries were absolutely not suitable for anything here on Earth. For one thing, you don't really want to have a pressure vessel filled with anything around you, because it's really high pressure and something might go wrong, especially when you build like millions of them. And also, inside there is gas. There's actually hydrogen gas being produced inside the battery. And as soon as it leaks, the hydrogen is out there; it's flammable, it's possibly explosive. You don't really want that. So people had a better idea. Actually, they had this idea before: why do we have to use the hydrogen as a gas? And they stored the hydrogen in the form of hydrides. And you all have these batteries. They're the typical AA rechargeable batteries with 1.2 volts. And they're chemically exactly the same thing, because they make hydrogen, and the hydrogen gets stored inside this black stuff here, and the black stuff is a mixture of metals that easily forms hydrides. So you have a chemical bond, but a very easy chemical bond, between the hydrogen and the material inside there, so the material can take up the hydrogen and give it off even at room temperature, at fairly good rates, at least good enough for the batteries. The problem is it's pretty darn heavy. When you look at the stuff that's in there, it's mostly nickel and cobalt and a bit of lanthanide. And that was the first mixture there. These days there are others. They're a little bit better, and completely different materials like titanium, but almost always some nickel. And yeah, there are others. Ford patented this one, actually, for cars. Yes, Ford did want to build electric cars, and the battery they had was almost as good as the lithium-ion batteries around 1990. They were quite good. They were three times as good as lead-acid batteries. The only problem is it's a high temperature battery: the sodium and the sulfur are both liquid. And if you've ever seen the experiment in chemistry where sodium was in water and it exploded, well, if the sodium is liquid, it gets worse, much worse. And in this battery, the sodium and the sulfur are divided by a solid electrolyte. 
And the solid electrolyte is actually a kind of ceramic, an aluminum oxide. It's fairly brittle, and it has to let something through: the sodium ions have to get through it so the sodium and sulfur can react with each other. And no, you cannot really use this stuff in a car, which is kind of obvious. You don't want to have liquid sodium in your car. And these batteries are actually still used these days in grid storage, especially in Japan. There's a company I had never heard of when I read about those in 2010. And I know it was 2010 because that company was TEPCO, if anybody remembers this. And that's why it's so important that I read about this in 2010 and remembered it sometime later, in 2011. But the problem with a lot of these batteries, and we'll talk about lithium-ion batteries in a moment: materials. This is one of my favorite charts in Wikipedia. It's the abundance of chemical elements in the Earth's continental crust. It's a bit of a lie. We have oceans, and the oceans contain a lot less hydrogen than would be in this table. This is not a complete table. I took some sections out just to show some important bits here. Like, you know, lead is down here. It's fairly rare. Lithium is also quite rare. The reason lithium is rare is because of what happens in the sun. Lithium is very susceptible to nuclear fusion. So in nuclear fusion, lithium almost immediately gets burned, and we don't have a lot of lithium anywhere in the universe. Also not on Earth, as you can see. Cobalt is almost the same. And actually, if you want to build batteries, and you want to build a lot of batteries, you want to have something rather up there, as far up as possible. Like, you know, sodium, for example; we had sodium-sulfur batteries. Sulfur would be somewhere in between here, but we have a lot of sulfur, because, in case you didn't know, we do produce a lot of oil, and there's a lot of sulfur in the oil. We have to take the sulfur out of the oil in order to produce our fuels, because otherwise we have lots of sulfur oxide emissions, and all that sulfur that gets taken out of the fuel ends up somewhere, and so we have a lot of sulfur around, so we can use that anytime. Other stuff: you may be surprised there's titanium on this list too, and we have a lot of it. It's like the ninth most common element. Nickel is quite rare on the one hand, but it's much more common, especially in production, than something like cobalt. The other thing you might want to know about elements: by the way, it's really hard to find a good periodic table that shows the right numbers and is free to use, and the only good one I found was a Japanese one. I'm sorry, but it shows the right numbers. That's important. One of the important things, if you want to have a battery that is light and has a lot of power, is the weight. These little numbers that nobody cares about down here, those are some of the most important numbers, because that's how much the atom weighs. Chemical reactions only work with the electrons in the outer shell. The energies of these electrons in the outer shell are kind of similar. It's always like a few electron volts. It doesn't differ too much. It can be one, it can be five, but it's not like one has five and the other has 500. The atomic weights, on the other hand, go from one for hydrogen to something like lead, which has 207. The differences are quite huge. 
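To put rough numbers on that claim, here is the standard textbook estimate (not taken from the talk's slides): the theoretical charge an electrode can shuttle per gram of the active element is q = nF/M, with n the electrons exchanged per atom, F the Faraday constant and M the molar mass in g/mol.

```latex
q = \frac{n\,F}{3600\,M}\ \left[\mathrm{Ah/g}\right]
\qquad
\text{Li: } \frac{1 \cdot 96485}{3600 \cdot 6.94} \approx 3.86\ \mathrm{Ah/g},
\qquad
\text{Pb: } \frac{2 \cdot 96485}{3600 \cdot 207.2} \approx 0.26\ \mathrm{Ah/g}.
```

So per gram of active metal, lithium carries roughly fifteen times as much charge as lead before any other cell components are counted, which is exactly why the search for high-performance batteries happens at the light end of the periodic table.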
When you have a lead battery, you already know it cannot have a heck of a lot of capacity, simply because the lead atoms in there are very heavy. Same goes for something like cadmium. Nickel, on the other hand, is a lot better already; I mean, you're at around 60. Nickel, cobalt, manganese, iron, that stuff is around 60. On the one hand, it's quite heavy. On the other hand, it's much lighter than the stuff we had before. When people were looking for high-performing batteries, they were looking in this corner up there. Obviously, hydrogen is the lightest atom of them all. If you just want to burn hydrogen, you get a lot of energy out of it. On the other hand, it's hydrogen. It's a gas. It's really hard to contain in anything. For a battery, it's not the first choice. So what's the next best option? You may have noticed I took the noble gases and anything below fluorine out of this one, just so it looks a bit nicer. But helium, no, you cannot make a battery out of helium. They were looking at lithium. Lithium is fairly light. You see it has a weight of 7. That's 7 grams per mole. If you want to make a battery that has the highest possible performance, it's obviously lithium. Sodium has very similar chemical characteristics. That's why we have a periodic table: everything that sits in one column has kind of similar chemical behavior. Sodium will play a role a bit later. But as you can see, sodium is about three times as heavy as lithium. So the first choice when you look at a new battery that you might want to invent will be lithium and not sodium. And that's what was done. Of course, this year we finally had the Nobel Prize for John B. Goodenough, at the age of 97 years. By the way, he's still a researcher. He still does research on batteries, lithium batteries and also sodium batteries. This was a revolution. It was really the big breakthrough. I mean, there are so many articles that I try not to write that say, yeah, the next super battery is just around the corner. On the one hand, developing this thing took from the 1970s until the 90s just to make it work properly and commercialize it. The first ones were offered around 1991. But it improved. It improved quite a lot since then. As you can see, it's almost two and a half times the capacity in the last 30 years. Unfortunately, there are a lot of people who write articles saying, yeah, okay, we can get five times or 10 times the capacity within the next five or 10 years. By the way, these kinds of articles and announcements have been made ever since batteries were around. You can actually find articles from around 1900 that said the same thing: yeah, okay, 10 times as much in the next 10 years. Well, let's put it that way: 100 years later, we had about 10 times as much. That's about how long it takes. It's actually not bad. We are really quite good, especially in the last 20, 30 years; improvements have been quite good. Main problem with this thing, okay, maybe I should say how it works first. There was actually another talk that explained how this thing works, so I will be quite brief. You have a material with lithium in here, let's say the cathode. And when you charge it, the negative charge goes in here and the positive ions go over here and are stored. In this case, and that was how the first kind of battery that was around worked, in carbon. Actually, they used coke, I think. These days we use graphite, and graphite has more or less been maxed out. If you remember that table, storing lithium in graphite has a problem. You see carbon up there. 
Carbon has a weight of 12. And when you store lithium in these layers, you need about six carbon atoms to store one lithium atom. And the carbon atoms together, six carbon atoms, have a weight of 72. The lithium atom has a weight of seven. So the carbon is about 10 times as heavy as the lithium. So obviously, this is not ideal. But it took a long time just to max this out. These days, I think they're at around 90% or so of the theoretical capacity. So if you want to do something with graphite, you cannot improve this much more. On the other hand, here, this one says metal cobalt. And this is because lithium cobalt oxide was the material of choice from the beginning in lithium batteries. It was pure cobalt oxide. This too has changed. Ah, damn it. Okay. This will change and we'll see it later. Lithium. Where's lithium coming from? Until this year, I was always sure that most of the lithium was coming from Chile. And it's not too bad, I wasn't wrong. But actually, this changed in the last two years. These days, most of the lithium is coming from Australia. And what changed is that lithium used to be mined simply by pumping water into the ground, into salt layers where you had a lot of lithium salts, pumping the water up, evaporating it, concentrating it and extracting the lithium from that. There are some spots where this is possible, and this is actually quite easy and also quite cheap. But that's limited, and we've pretty much run up to the limit. In Australia, what they're doing is mining ores. Ores like this one here, spodumene, which is a lithium-aluminium silicate. And in theory, the ore itself could contain about 6.8% of lithium by weight; no, that's lithium oxide, so it's about half that, about 3-4% of lithium. But typically, you will get maybe 1% of lithium by weight. And that's just for the ore. You have to get to the ore itself, and when you mine something, you not only get the ore itself but also other stuff. So actually, you need to mine maybe about 1,000 tons of stuff in order to get one ton of lithium. And that sort of thing is done on a large scale in Australia. And that's where, as you can see, that's where it's coming from. And most of that is, by the way, from Chinese firms, because most of the batteries are made in China. Cobalt. Cobalt is a huge problem. A problem for one thing because we don't have too much of it. As you have seen, it's about as rare as lithium. And you need a lot more cobalt in your battery than you need lithium, simply because, I mean, it's the same number of atoms: if you have lithium cobalt oxide, you need one lithium atom and you need one cobalt atom. But the problem is a cobalt atom is about 10 times as heavy as a lithium atom. So by weight you need a lot more cobalt than you need lithium. And so we run into the problem of running out of cobalt first. Cobalt is mostly mined in the Congo. You will have seen a lot of pictures like this one, and much worse ones also. About two-thirds of all the cobalt is mined in the Congo, and, depending on the current economic conditions, something between 10% and a quarter of the cobalt in the Congo is mined by small-scale mines like this. But most of it, also mostly by Chinese companies, or at least to a large part by Chinese companies, is mined in something like this. I didn't... If you go and look for pictures of cobalt mines that are free to use, you only find these. 
So I have a placeholder that is actually a copper mine, I'm sorry. But I really didn't find a free picture of a large-scale cobalt mine anywhere. But the main problem is really that the Democratic Republic of Congo is a war-torn, poor country, and it's really worth reading up on the history of this country, and that's the problem. The problem is not cobalt; cobalt is just an element somewhere in the table. Don't blame the cobalt, blame the social situation in this country. And aside from that, we still have limited reserves. So there are many good reasons not to use cobalt, or to at least reduce the amount of cobalt you have to use. And this is what has been done. Also because cobalt is getting expensive: the more demand there is for cobalt, the higher the prices get, and when demand outstrips the supply, prices just skyrocket. So people had to find alternatives, and they did. These days, modern lithium-ion batteries use something like nickel-manganese-cobalt mixtures instead of just pure cobalt. And you have numbers like 532 or 811. The 811 means eight parts nickel, one part manganese and one part cobalt. So today's batteries use between one tenth and one third of the amount of cobalt that they used to use. There are other possibilities. You can use something like lithium iron phosphate. It has no cobalt in it at all. But the problem is you only get about half the capacity out of the battery. And especially when you have something like a car where you want to have as much range as possible, that's not really an option, at least for this. It has other positive properties, like it's much more robust, it cannot decompose thermally like a lot of the cobalt materials can. But it's possible. Lithium-sulfur. Lithium-sulfur batteries are something that everybody would really want to have. And trust me, it's not for want of trying. Because, you know, an atom of sulfur is only about half as heavy as an atom of cobalt, and you can have two lithium atoms together with one sulfur atom. The problem is that somehow you have to separate the lithium and the sulfur again, and when you divide the lithium and the sulfur, you do it in stages. At first you have like two lithium atoms and one sulfur atom, and at some point you have, I don't know, one lithium and one sulfur, and then you have two lithium and three sulfur, and so on. You do it in stages. And one of these stages unfortunately is soluble in the electrolyte. And that means that your cathode is now suddenly soluble, and it begins to slowly but surely destroy itself every time you charge or discharge the battery. And this is a problem that hasn't actually been solved yet. There are companies that sell these lithium-sulfur batteries. They only have a relatively short life. And the way they did it is by just putting a heck of a lot of material, a heck of a lot of the active material, into the battery, so that, well, it still destroys itself, but it takes longer until it's completely gone. The other possibility is air. Lithium-air batteries. Also one of the dreams that everybody wants to have, and lots of people are trying. This actually works almost like a fuel cell, except with lithium on one side instead of hydrogen, and air on the other side. The problem is that you now form a lithium oxide on the cathode side of the electrolyte. And it's very hard to really maintain the integrity of this whole thing. 
And especially, I mean, it works in very thin layers, but you would need a thick layer in order to actually exploit this. Because, I mean, remember what you have here: it's just lithium on one side, only lithium atoms, and air and nothing else on the other side. That would be great. That would be perfect. It would be like two kilowatt hours per kilogram of battery. That would be really nice, but it really doesn't work. You only get it in very thin layers, and then you have a, relatively speaking, thick layer of electrolyte between that, and as long as the electrolytes that you need are so thick that they make up more of the weight of the battery than the actual lithium you're shoveling back and forth, the actual capacity is still worse than what we have today. And you also have to make sure you can cycle this without losing any of the lithium on the air side, and it's a bit of a mess, it's a bit of a problem, and nobody has solved this yet. Yeah. Okay. I used to have the other slide before that, that's why. Yeah, lithium is just as rare as cobalt, and we may have to replace this at some point as well, because the lithium supply is limited and we have to get like 20 times as many batteries into production as we have currently. Okay. Anode materials. So far we have talked about one side of the battery, now it's the other side. Do you remember the graphite with the six carbon atoms? Well, you can improve on this. If you don't use graphite, you can use graphene or some nanotubes or multi-walled nanotubes. Unicorns might help. Maybe unicorns can make those cheaper. But no, right now you can find a lot of papers and a lot of science being done with graphene and nanotubes and so on, and it works. It actually does work. The problem is this is about as expensive as gold, maybe not quite, but almost, and so it's not useful for batteries right now. Maybe someday somebody gets to make those on the cheap, and then it's an option. Silicon and phosphorus could help as well. They could improve upon the graphite and have much higher capacities, because, especially with silicon, silicon forms an alloy with lithium and you get much better ratios. So you can get much higher capacity. The problem is, when you form such an alloy, what you get is much bigger. The electrode just gets much bigger. It gets four times as big as it was before. This puts a lot of stress on the material, and it starts to crack, and basically every time you use the battery it destroys itself a bit more. That's not helpful. Maybe, I mean, there are tricks. Some people structure the silicon like the wafers we use in chips and make the structures very small, and then you have very small structures that can store the lithium inside of them without cracking, because they are so small. But they never talk about price for some reason. I don't know why. I think it might be too expensive. Okay. The other thing you can do, and there is active research and it's quite promising actually, is pure lithium. Just nothing else. The reason why that hasn't been done before is the lithium itself. When you grow lithium by electrochemical means, you just put one atom onto whatever lithium is already there, and you do it via an electric field, and all the atoms go wherever there is the strongest electric field. The problem is the strongest electric field is wherever there is a little peak, a little bit like a needle, the point of the needle or something like that. That's where they go. What you get is dendrites. 
Those are small needle-like projections that grow ever more and ever faster, and that has prevented the use of pure lithium anodes, even though that would essentially halve the weight of the battery. That's why people want to use it: you would just cut the weight of the battery in half and get about twice the capacity. What you need is solid electrolytes, because, you see, right now the electrolyte, that's the stuff that's between the cathode and the anode, is a liquid. That's why these dendrites can form. When the electrolyte is solid, like in the sodium-sulfur battery, you know, like the hot stuff that we had before from Ford, there's already something solid in between there, and so nothing can grow through it, at least if it's solid enough. People are working on this. They have made some advances. One of the problems is it's still a bit too heavy. The electrolyte is a bit too thick and too heavy, and it's a fairly slow process; I mean, after all, it's no longer liquid, you have to get the ions through a solid material. The other problem, as I said, I switched two slides around, so now we talk about sodium. Sodium can replace lithium. As I already said, sodium is about three times as heavy as lithium, so it's not perfect. But on the other hand, since lithium currently only makes up maybe three to five percent of the weight of the total battery, switching to something that's three times as heavy isn't quite so bad. One of the problems is that there's a lot less development that has been done on using sodium instead of lithium, simply because people started using lithium because it was the best performing material and there was plenty of lithium around. Why not? I mean, especially in the 1990s, lithium was mostly used as a grease or, I don't know, for colors, I think. They didn't use a lot of it. There was plenty. There was a lot of supply of it. Why waste your work on sodium when you have something that works and something that is always going to get you better performance? Sodium will always be second best in performance, and so people didn't really start development on it. They have started development on it in the last couple of years, maybe this decade. You get a lot more papers if you search online on scholar.google.com. If you didn't know that page, it helps a lot to find scientific material. You find a lot more work on this. And there are even some companies that are selling them, more as trials than as actual products, but at least it's something, and they have capacities of 140 watt hours per kilogram, and that's similar to what we had in the year 2000 in lithium-ion batteries. So development lags behind by about 20 years, roughly. And they still don't have the same kind of durability and cycle life that we expect today. They can sustain maybe 200 or 300 cycles. These days we expect a lot more than that. All that is the result of there having been much less development on the materials overall. Sodium is very similar to lithium, but there are always some subtle differences, and they just need to be understood. You simply need a lot of scientists at benches and in front of their computers, crunching data and doing experiments, just to find out what we don't understand and what we do wrong. Yeah, the other problem is, I mean, batteries are great for stuff like this microphone or my smartphone or maybe a car. But at some point you want to store more energy, and batteries won't cut it. At least not lithium batteries. Sodium maybe. Sodium, there's huge amounts of sodium. 
We're not going to run out of sodium, trust me. But you still have, when you have a battery you still have to purify the materials, you have to assemble them. There's a lot of processing going on and that's a problem when you want to have something that's really big, something that can store a huge amount of energy like we need for storing something like the grid level energy. I mean if you read about it in the news or in articles you usually find something like this. This is from June. This is perfect for solar energy. As you can see, lots of solar energy. But right now this is what we had this month. Almost no solar energy and we had some wind, except when there's not. We kind of need to store the wind energy in order to have energy when we need it. With solar energy it's much easier. The trouble is we're here at 51 degrees north, and at 51 degrees north here in Leipzig for example, it kind of looks like this. In winter it's cold and it's cold because we don't get a lot of solar energy right here. It's very different when you're somewhere in California or even in Spain. I mean Spain is already a few degrees further south and they get a lot more sun in winter, as you can tell from the temperatures actually. So you can rely much more on the cycle of the solar energy, like every 24 hours and so on. But here we need much more storage. We need to be able to store energy from times like these and use it in times like these when you don't get any. Now we're talking about, I mean we need 80, I think I have it on this slide. The amounts we need are huge. 40 gigawatt hours, by the way 150 gigawatt hours is the worldwide production of lithium-ion batteries. 40 gigawatt hours in Germany is enough for 30 minutes of power supply. Let's just say we really need a heck of a lot. One of the large scale, what we consider large scale storage possibilities is pumped storage. And the problem here is physics. Physics like this, E is mgh, m is the amount of water, the mass of the water, h is the height to which you pumped it and g is the gravity. We've maxed out gravity. Actually in this solar system you will not find a solid surface that has more gravity than ours. So we cannot optimize that further. I'm sorry. Height, if you can get something higher, that's better. I mean this is not 360 meters, but I assumed here 360 meters for one simple reason. It's easy to do the math on. Then you need one ton of water to store one kilowatt hour. And you know, one kilowatt hour for one ton of material, that's a heck of a lot. And that's why these are almost always huge and don't store a lot of energy. And total storage is something on the order of 40 gigawatt hours in Germany. There have been other proposals like compressed air storage. What you do is, here in Germany we have, there used to be a sea here around Germany and a lot of the salt water from the oceans evaporated and left behind a lot of salt. And we get these salt domes. And so what you can do is you can make holes inside the salt, simply by pumping water down into the salt and dissolving the salt. And yeah, then you have a huge hole in the ground. Except you cannot see it from the surface. Here it's a bit better than the pumped storage. When you do the math on it, you can get about three kilowatt hours per ton of salt that you've removed from the ground. This assumes 70 bar. If you go deeper into the ground, you can have higher pressure. You can have more than this. But it's kind of limited. And also you have a hole in the ground. And this hole will collapse sooner or later.
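To make the back-of-the-envelope numbers above concrete, here is a small Python check; the 360 m head and the 70 bar figure are taken from the talk, while the salt density of about 2.2 tons per cubic metre and the ideal isothermal expansion are my own simplifying assumptions.

import math

# 1) Pumped hydro: E = m*g*h, with the 360 m head assumed in the talk.
g = 9.81                      # m/s^2
head = 360.0                  # metres
mass = 1000.0                 # one metric ton of water, in kg
e_kwh = mass * g * head / 3.6e6   # 1 kWh = 3.6 MJ
print(f"pumped hydro: {e_kwh:.2f} kWh per ton of water")   # about 1 kWh

# 2) Compressed air in a salt cavern at 70 bar: ideal isothermal upper bound,
#    W = p * V * ln(p/p0) per cubic metre of cavern volume.
p, p0 = 70e5, 1e5             # pascals
w_per_m3_kwh = p * math.log(p / p0) / 3.6e6
salt_density = 2.2            # assumed tons of salt per cubic metre of cavern
print(f"compressed air: {w_per_m3_kwh / salt_density:.1f} kWh per ton of salt removed")
# about 3.8 kWh/ton as an ideal limit, in the same ballpark as the ~3 kWh/ton quoted above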
I mean, these holes are stable for us. They're stable for centuries, maybe thousands of years. But eventually they will collapse. Something will go down. And sometimes when people don't pay enough attention when we're making these holes, it actually happens. And you can find accidents where this actually happened. Okay, how does this work? It was very simple. You pump the air in and you take it out. You go let it blow through a turbine. It's almost like wind energy except for a very good turbine. And you get energy from it. The problem is when you compress the air, it gets really hot. And then you store it and it gets cold. And then you let it out and you decompress it. And when you decompress the air, it gets even colder. And that's how you lose a lot of energy. And so what is done, there's one of these is in Germany, actually. There's one in Germany and one in the US. And what they do here is they burn natural gas in order to make up for the heat that was lost. It actually makes the power plant itself more efficient. But yeah, it's not exactly renewable. What people actually want to do is store the heat. So as I said, when you compress the air, it gets hot. And then you want to store the heat. And when you decompress it, you put the heat into the decompressed air and heat it up again. And it gets much more efficient. So instead of 45%, you get up to about 70%. Oh, damn it. Okay, I was way too slow. The way you can store this heat is something like this. Thermal storage, there's a lot of thermal storage using liquid salt. Liquid salt is not terribly efficient at that task, especially because not very high temperatures. But you can actually use stones like volcanic rock. I think Siemens is building something like this. And what they use is basalt rocks, like this big, I mean, a couple centimeters, like the pebbles you can see here. It's a very, very, very slow, very small storage, thermal storage. You want to build these things big. And when you build them really big, they're really efficient because it's a matter of geometry. When you have something like a cube, and it's one meter on each side, and you make it two times as big, two times, it's two times as wide, two times as long, two times as high. It has eight times the volume, but only four times the surface area. And when you only, and the problem is you lose heat through the surface area. So when you have eight times the volume and four times the surface area, it means that your heat losses are only half as big. And that's not actually enough because when you take something like this and you build it two times as big, you will also make these parts two times as big. And that's the insulation. And when you make the insulation two times as big, you also reduce the losses by a factor of two. So when you build something like this really, really, really seriously big, you have very small losses. You have losses for small ones, you get losses of about 1% per day. And when you build them like 1,000 times as big, you can, in theory, get 1 tenth of that. In theory, you could store heat for weeks or even months in these. And you need, I thought I had this memorized, but you need, I think about 10,000 tons in order to store one gigawatt hour of electrical energy. Because you can actually use this to store electrical energy. Yeah, okay, I'm sorry. Everybody enthusiastic about hydrogen. I will not be talking about hydrogen because I'm running out of time. And yeah. 
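The square-cube argument a few sentences back can be put into a toy model: a cubic store whose heat losses scale with surface area divided by insulation thickness, while the stored heat scales with volume. The 1% per day baseline is the figure from the talk; everything else here is a simplifying assumption.

# Toy model of the scaling argument above: side length scaled by `scale`,
# stored heat ~ volume, losses ~ surface area / insulation thickness.

def daily_loss(scale, base_loss=0.01):
    volume = scale ** 3
    surface = scale ** 2
    insulation = scale          # insulation thickness grows with the store
    return base_loss * (surface / insulation) / volume

for scale in (1, 2, 10):
    print(f"linear scale {scale:2d}: {daily_loss(scale):.4%} of stored heat lost per day")
# scale 2 gives about 0.25% per day, scale 10 about 0.01% per day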
Big thing here is it has really cheap raw materials, simple stones, and capacity is fairly high. You get about 30 to 60 kilowatt hours per ton if you take the heat out, because you can heat these up to about 700 to 800 degrees Celsius and run them through a common turbine, like the turbines you have in a coal power plant. And you get an efficiency of 45% out of this. That's not very good, especially not compared to a battery where you would get efficiencies on the order of 80% or 90%. It kind of depends on how fast you charge. So if you have a Tesla supercharger, you have much less efficient batteries than if you take it slow and charge it slowly. But for something as simple as this, 45% is pretty good. Especially because you can scale it up into the gigawatt hour range and even tens of gigawatt hour range, which is kind of like what we need. Yeah. And all you need is stones, fairly huge amounts of stones. The reason why I think this is a pretty good idea, even though it's not perfect, as I said, at 45% you waste half the energy, is because hydrogen is even worse. Yeah. Okay. I tried to do this in two minutes. I'm sorry. It's going to be very fast. Okay. Hydrogen, in theory, it sounds great. You split water into hydrogen and oxygen. You use the hydrogen to get energy and you have really great density of energy. 39 kilowatt hours per kilogram is really great. We have theoretical efficiencies of 83%. And yeah, that sounds great. But in practice, well, the problem is the efficiencies are usually related to the lower heating value, not the higher heating value. So there's a fudge factor of about 10 to 18% in the efficiencies when you get to the real efficiencies. So when they say 70% efficiency for the electrolysis, the reality is more like 60%. And for the fuel cells, it's more like 50% than 60%. And the reality is, when you take all the losses into account, including the storage, you get about one third, and very often much less than 30%, of the energy out of it that you put into it. That's a big problem. Okay. I'm sorry, I've run out of time. Thank you for this great effort. I couldn't see the, I really couldn't see the clock until just now. I think it was switched off at first. That's why I kept looking at my watch. Sorry.
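The chained hydrogen losses described above can be multiplied out; the electrolysis and fuel cell numbers are the "more like" figures from the talk, while the storage and compression factor is an assumed placeholder since the talk does not quantify it.

# Multiplying out the rough real-world hydrogen efficiencies mentioned above.
electrolysis = 0.60   # "more like 60%" in practice
storage      = 0.85   # assumed losses for compression/storage, not quantified above
fuel_cell    = 0.50   # "more like 50%" in practice

round_trip = electrolysis * storage * fuel_cell
print(f"hydrogen round trip: about {round_trip:.0%}")   # roughly a quarter of the energy comes back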
|
The 21st century will be powered by electricity. I'm a journalist in the field of science and technology reporting. I have followed the development of electricity storage and generation for over 10 years. In this talk I will outline the current state of electricity storage technology and its limitations. There is a gap between the intermittent availability of electricity generation and the demand for it. Cobalt and lithium are increasingly limited in supply and are often produced using unsustainable means. Alternatives are being developed and will be presented. Some of these technologies are in the form of chemical batteries and some use surprisingly simple technologies. I will be giving an introduction to future technologies for electricity storage currently in development. Some of these are batteries without rare materials and others are not batteries at all.
|
10.5446/53027 (DOI)
|
There's a long way from Argentina to Prague to Leipzig. These two young researchers, security researchers, the lady and the gentleman, Veronica and Sebastian, are here to tell us something about emergency VPNs, virtual private networks, analyzing mobile network traffic to detect digital threats. I'm quite convinced you're going to have a good time. You're welcome. Let's have a big hand for Veronica and Sebastian. Thank you. Thank you. Thank you. Thank you, everyone, for coming here. My name is Veronica Valeros. I'm a researcher at the Czech Technical University in Prague. Currently, I'm the project leader of the Civil Sphere project. I'm Sebastian Garcia, director of the Civil Sphere project in the Czech Technical University in Prague. The Civil Sphere project is part of the stratosphere laboratory in the university, and the main purpose is to provide free services and tools to help the civil society protect them and help them identify targeted digital attacks. So Maati Monjib, he's a Moroccan historian. He's the co-founder of the Moroccan Association of Independent Journalism. He was denouncing some misbehavior of his government, and because of that, he was targeted with spyware around 2015. Alberto Nisman was a lawyer in Argentina. He died. He was, until the moment of his death, the lead investigator in the terrorist attack of 1994 that happened in Buenos Aires. It was a sad incident that might have been covered up by the government. And after his death, the researchers found traces of spyware in his mobile phone, allegedly installed by the government to spy on him. Ahmed Mansoor is an activist from the UAE. He is also a human rights defender, and he also denounced misbehavior of his government, and because of that, his government targeted him repeatedly with different types of spyware from different places. Right now, he's in jail. He's been there for almost two years, and he barely survived a hunger strike of more than 40 days. He did complain about the prison conditions. Simon Barquera, maybe you can check the slides. They are not. Simon Barquera is a researcher and food scientist from Mexico. He is a weird case because it's not very clear why he was targeted. The Mexican government targeted him and his colleagues with also spyware. Carla Salas, she's a lawyer from Mexico as well. She's representing and investigating the murder of a group of human rights defenders that were murdered in Mexico, and she and her colleagues were targeted by the Mexican government with the NSO Pegasus spyware. Griselda Triana, she's a widow. Her husband was a journalist from Mexico covering drug cartel activities and organized crime in Sinaloa, Culiacán, Mexico. She was targeted by the Mexican government with spyware a few days after her husband's death, and we don't understand exactly why. Her husband's computer and laptop were taken away when he was murdered, so there was no reason why she was targeted. Emilio Aristegui, he's the son of a lawyer, a minor, and his phone was targeted by the Mexican government with spyware to spy on his mother. She was a lawyer investigating some cases. So these are only a few cases of the dozens or hundreds of cases where governments used surveillance technology to spy on people, but not only civil society defenders, but also civilians like this kid. And the common case among all these is that their mobile phones were targeted. And there is a simple explanation for that. We take our mobile phones with us everywhere. We use them. We don't take computers anymore.
When we are in the front line in Syria covering war, we record the videos with our phones. We send messages that we are still alive with our phones. When we are working in this field, we cannot not use the mobile phones. So they have photos, they have documents, they have location, they have everything. This is perfect for spying on someone. So it is a fact that governments are using spyware as a surveillance technology, not only to surveil, but also to abuse, to imprison, sometimes to kill people. And we know that they are governments because the technology that they are using, like for example the Pegasus software by the Israeli company NSO, can only be purchased by governments. So we know they are doing this. So these tools are also cheap, easy to use, cheap for them, right? Easy to use, they can be used multiple times, all the times they want. And sometimes they cannot be traced back to their sources. It's not that easy. So you find an infection and it's hard to know who is behind it. So for them, it's a perfect tool. So what can we do if we think our mobile is compromised? There are several things we can do. For instance, we can do a forensic analysis. It's costly, because it takes a lot of time. We need to go on the phone to check the files to try to see if there is any sign of infections. And sometimes this also involves sending our phone somewhere to be analyzed, and in the meantime, what are we going to use? It's not very clear. We can factory reset our phone. It might work, sometimes, sometimes not. And it's costly, sometimes we might lose data. We can change phones, which is the simple solution. We just drop it in the trash, we pick another one. But how many of us can afford to do this like maybe three, four times a year? It's very expensive. But we can also do traffic analysis. That means we work on the assumption that the malware that is infecting our phones will try to steal information from our phones and send it somewhere. And this sending of data will happen over the Internet because that's cheap. So that communication we can see and hopefully we can identify it. So how can we know if our phone right now is at risk? Say that you're crossing a border, someone from the police takes your phone, then gives it back to you, everything is fine. How can you know it's not compromised? So this is where in CivilSphere we started thinking which is the simplest way we can go there and help these people. Which is the simplest way we can go and check those phones in the field while this is happening. And we came up with the Emergency VPN. So the Emergency VPN is the service that we are providing using OpenVPN, this free tool that you know, that you install on your phone. And with this we are sending the traffic from your phones to the university servers, so the servers are in our office, and then to the Internet and back, so you have normal Internet. And we are capturing all your traffic and we store it there. What we're doing with this, we have our security analysts looking at this traffic, finding infections, finding the attacks, using our tools, using our expertise, threat intelligence, threat hunting, whatever we can, and seeing everything in there and then reporting back to you saying hey, you're safe, it's okay, or hey, there is something going on with your phone, uninstall these applications or actually change phones. We are from time to time suggesting: stop using that phone right now. I don't know what you're doing but this is something you should stop.
So we are having experts looking at this traffic, also we have the tools, and everything we do in there is free software because we need this to be open for the community. So how does it work? This is a schema of the Emergency VPN. You have your phone, and in a situation like Veronica was saying you are at risk and say right now I'm crossing a border, I'm going to a country that I don't know, I suspect I might be targeted. In that moment you send an email to a special email address. That address is not here because we cannot afford right now everyone using the Emergency VPN, because we have humans checking the traffic. So we will give you the address later if you need it, but using an email you say hey, help. Automatically we check this email, we create an OpenVPN profile for you, we open this for you and we send the profile by email. So you click on the profile, you have the OpenVPN app installed or you can install the official one, and from that moment your phone is sending all your traffic to the university and then to the internet. Maximum three days, then we stop it automatically and we create the pcap file, where the analysts go and check what's going on with your traffic. After this we create a report that is sent back to you by email. So this is the core operation of the Emergency VPN, like 90% automatic. So, advantages of this approach. Well, the first one is that this is giving you an immediate analysis of the traffic of your phone wherever you are. This is in the moment you need it and then you can see what your phone is doing or not doing, right. Secondly, we have the technology, we have the expertise, our threat hunters, threat intelligence people, we have tools, we are doing machine learning also in the university, so we have methods for analyzing the behavior of encrypted traffic. We do not open the traffic but we can analyze this also, so we took all the tools we can to help the civil society. Then we have the anonymity. We want this to be as anonymous as possible, which means we only know one email address, the one you use to send us an email, and that's it. It doesn't even have to be your real email address, we don't care, right. Moreover, this email address is only known to the manager of the project. The people analyzing the traffic do not have this information. After that they send the report back to the email address and that's it. We delete the pcap and that's all we know. Of course, if your phone is leaking data, which it probably is, we see this information, because this is the whole purpose of the system, right. Then we have our continuous research. We are a university project, like almost 30 people here, so we are doing new research, new methods, new tools, open source, we are applying, checking, researching, publishing, right, so it's continually moving. Last, this is the best way to have a report back to you on your phone saying if you are infected or not. Some insights from the Emergency VPN. The first one is it has been active since mid-2018. We analyzed 111 cases roughly, maybe a little bit more. 60% are Android devices in here. We can talk about that, but it's well known that a lot of people at risk cannot afford very expensive phones, which is also impacting their security. 82 gigabytes of traffic, 3200 hours of humans analyzing this, which is huge. Most importantly, 95% of whatever we found there, it's because of normal applications, like the applications you have right now in your phone in this moment. This is a huge issue.
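As an illustration of the automated profile step described above, here is a minimal Python sketch of how a per-case OpenVPN client profile could be generated; the server name, port and certificate placeholders are invented for this example and are not the real Emergency VPN endpoints, and this is not the actual CivilSphere code.

# Minimal sketch only: not the actual CivilSphere code or infrastructure.
PROFILE_TEMPLATE = """\
client
dev tun
proto udp
remote vpn.example.org 1194
resolv-retry infinite
nobind
persist-key
persist-tun
remote-cert-tls server
verb 3
<ca>
{ca_cert}
</ca>
<cert>
{client_cert}
</cert>
<key>
{client_key}
</key>
"""

def build_profile(ca_cert: str, client_cert: str, client_key: str) -> str:
    # Fill the template with the certificate material issued for one request.
    return PROFILE_TEMPLATE.format(ca_cert=ca_cert,
                                   client_cert=client_cert,
                                   client_key=client_key)

if __name__ == "__main__":
    profile = build_profile("<CA PEM>", "<CLIENT CERT PEM>", "<CLIENT KEY PEM>")
    with open("emergency-client.ovpn", "w") as handle:
        handle.write(profile)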
The most common issues that we found, and we cannot say this enough: geolocation is an issue. Only three phones ever were not leaking geolocations out. The rest of the phones are leaking. Weather applications, dating applications, applications to buy stuff, transport applications, a lot of applications are leaking this. Most are leaking this in encrypted form, a lot of them are leaking this unencrypted. Which means that not only can we see that, but the people in your Wi-Fi, your government, the police, whoever has access to this traffic can see your position almost in real time. Which means that if the government wants to know where you are, they do not need to infect you. It's much easier. They go to the telco provider, they look at your traffic and that's it. You are leaking your location all over the place. We know that this is because of advertising and marketing. People are selling this information a lot. Be very careful with which applications you have. This is the third point. Insecure applications are a real hazard for you. Maybe you need two phones, like your professional phone and your everyday life phone. We don't know. But the problem usually comes from the applications that you're installing just because. These applications are leaking so much data, like your email, your name, your phone number, credit cards, user behavior, your preferences. If you are dating or not, if you are buying and where you are buying, which transports you are taking, which seat you are taking in the bus. So a lot of information. Really believe us here. And last, the IMEI and the IMSI, these two identifiers of the phone, are usually leaked by the applications, we don't know why. And this is very dangerous because it identifies your phone uniquely. From the point of view of the important cases, there are two things that we want to say. The first one is that we found trojans in here that are infecting your phones. But none of these trojans were actually targeted trojans. Trojans for you. They were, let's call them, normal trojans. So this is a thing. And the second one is malicious files. A lot of phones are doing this peer-to-peer file sharing thingy even if you don't know. Some applications, I'm not going to give names, but they are doing this peer-to-peer file sharing even if you don't know. And there were malicious files going over the wire there. However, why is it that after a year or something of analysis, after 111 cases analyzed, we did not find any targeted attack? Why is this the case? The answer is simple. The Emergency VPN works for three days, maximum. So it's not about reaching the right people, but reaching the right people at the right time. If we check three days before the incident, we might not see it. If we check three days later, we might not see it. So right now, we need your help. Reaching the right population is very important because we need people to know that this service exists. We know it's tricky. If we tell you, hey, connect here, we are going to see all your traffic, it's like, are you insane? Why would I do that? However, remember that the other options are not very cheap or easy or even feasible if you are traveling, for example. And again, as Sebastian said, everything that goes encrypted is cool. We don't see it. We are not doing man in the middle. If we see anything, it's because it's not encrypted. So if you believe that you are a person that is at risk because of the work you do or because of the type of information or people that you help, please contact us.
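To make the point about unencrypted location leaks above more concrete, here is an illustrative scapy sketch, not the CivilSphere tooling, that greps a capture file for coordinate-looking parameters sent over plain HTTP; the file name and the regular expression are only examples.

import re
from scapy.all import rdpcap, Raw, TCP   # pip install scapy

# Coordinate-looking key=value pairs, e.g. lat=50.08&lon=14.42
COORD = re.compile(rb"(lat|latitude|lon|lng|longitude)=(-?\d{1,3}\.\d+)", re.IGNORECASE)

def find_location_leaks(pcap_path):
    leaks = []
    for packet in rdpcap(pcap_path):
        # Only look at plain HTTP payloads; anything encrypted stays opaque.
        if packet.haslayer(TCP) and packet.haslayer(Raw) and packet[TCP].dport == 80:
            for key, value in COORD.findall(packet[Raw].load):
                leaks.append((float(packet.time), key.decode(), value.decode()))
    return leaks

if __name__ == "__main__":
    for timestamp, key, value in find_location_leaks("capture.pcap"):
        print(timestamp, key, value)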
We are willing to answer all the questions that you might have about data retention, how we handle the data, how we store it, how we delete it, after how long, et cetera. And if you know people that might be at risk because of the work they do, because of the people they protect, the people they represent, the type of investigation they do, please tell them about the service. You can contact us via email. As we say, the information specifically to use this is not publicly available because we cannot handle hundreds of cases at the same time. However, if you think you are a person at risk, we will send it to you right away. This is the contact phone number. We are on Telegram, Wire, Signal, WhatsApp, anything that you need to reach out, and we will answer any questions. So we need to reach these people, okay? Yes. So thank you very much and we will be around for the rest of the Congress if you want to... Stop us, ask questions, tell us something if you need, tell other people in the field that might need it. Trust is very important here. Let us know, okay? Yes. Thank you. Thank you. Okay. And as usual, we will take questions from the public. There are two lit microphones. Yes, go ahead, talk into the mic. One sentence, please. Just a question. Thanks for your excellent service. My question is how can you be sure that all the traffic of a compromised phone is run through your VPN? So of course we cannot. We can say that in our experience, we never found or saw any malware that is trying to avoid the VPN in the phone. So we rely on the fact that no malware or APT that we ever saw or know about is actually trying to avoid the VPN service. On some phones, I'm not sure if you can avoid it. Maybe yes, I don't know. In our experiments and trials with different phones and tablets and everything, all the traffic is going through the VPN service, right? It's like a proxy in your phone. Yes. So if you know of any case, we would love to know. We run a malware laboratory and we run malware on phones and computers to try to understand them and we haven't encountered such a case. SMS, for example, we are not seeing, right? One more question, please. So you're running the data through your network at the university. Do you have a lot of exit IP numbers? Because a malware app could maybe identify this routing through you and then decide not to act. Yes. So that's a good question. Actually, in the university we have a complete class B public network. We have, of course, agreements with the university to use part of these IPs. So this is part of the question in there, right? Like, anyway, we are taking precautions. But so far, we did not find anyone blocking or checking our IPs. So we will see. But it's true, right? Yeah. We would say that if that happens, we would consider our project very successful. We haven't heard of such a case yet. Thank you. Okay. Let's have a big final hand for Veronica and Sebastian. Thank you very much. Thank you. Thank you.
|
The access to surveillance technology by governments and other powerful actors has increased in the last decade. Nowadays malicious software is one of the tools to-go when attempting to monitor and surveil victims. In contrast, the target of these attacks, typically journalists, lawyers, and other civil society workers, have very few resources at hand to identify an ongoing infection in their laptops and mobile devices. In this presentation we would like to introduce the Emergency VPN, a solution we developed at the Czech Technical University as part of the CivilSphere project. The Emergency VPN is designed to provide a free and high quality security assessment of the network traffic of a mobile device in order to early identify mobile threats that may jeopardize the security of an individual. The presentation will cover the design of the Emergency VPN as a free software project, the instructions of how a user can work with it, and some success cases where we could detect different infections on users. We expect attendees will leave this session with a more clear overview of what the threat landscape looks like, what are the options for users that suspect their phone is infected, and how the Emergency VPN can help in those cases.
|
10.5446/53029 (DOI)
|
All right everyone, welcome to the talk. We are here to present you what we have already done in the public money public code campaign, what tools are available with the goal to enable you to use them yourself and get active yourself. So in school we learn that in a democracy we elect people for different institutions and then they have the power over this area. While in a world where more and more things are decided with technology and with software, the situation might look more like this. And so the goal of the FSFE is that in governments, in public administrations, that the administrations can use the software and control that software in their area. So free software always gives them certain freedoms that a government, public administration, can use that software for any purpose so that no third party can decide what the government or the public administration can do with the software and what not. They can always use it for any purpose without getting permission first. They are also allowed to study the software, so to get the source code and see what actually the software is really doing, to understand what implications certain things have to see if this is right or wrong, what the software is doing and check if the software is in compliance with the laws and regulations they introduced. Furthermore, free software always gives the governments the freedom to share the software with others, be it the citizens, be it the other public administrations, companies, wherever. And furthermore, that the public administration is always able to improve the software or adapt the software to their own needs. So governments should never be in a situation where they cannot do a change in law, in a regulation because a software company or developer doesn't allow them to do this. So you are always allowed to do it. You can make modifications so that you adopt the software to your needs and you don't have to change your behavior of the public administration to what someone else, a developer of the software decided. So that's the background here. As I said, what we want to present here is what tools are available, what parts of the campaign we already have. And our goal is that you can then afterwards also use those tools, be creative and hack your public administration that they afterwards also make sure that software which is published, software which is financed with public money will afterwards be published under a free software license which allows the public administration but also each one of you and all the other public administrations to use the software for any purpose, to study the source code, to share the software with others and to improve it. So yes, let's start with that. As I said, the goals we already had for a very long time that besides citizens, companies, governments should also receive those for freedoms of free software to use, study, share and improve it. And we lobbied for that for a long time, talked with politicians, people in public administrations. But in 2017 we thought, what can we do to give this goal a push? Can we come up with a campaign to speed it up? And so what we decided was that we have a campaign workshop about that. We invited lots of volunteers from all over Europe and spent one weekend together. That's our pad there where we noted down several ideas for the slogan. We thought about what components we need for such a campaign, what tools we need. 
And one of the outcomes was then the slogan here which you see there beside some others where I feel a little bit embarrassed almost when we came up with it. But yeah, so that was the time when we then came up with what we wanted to do, what components we wanted to have and under which slogan, the public money, public code slogan, we want to do this. From there on, some of us started to develop a website. So we wanted to make sure that when there are people who don't have any clue about free software yet, political decision makers, people in public administrations, but also the public, the general public, when they want to learn about this, that they have an easy entrance point for it. So we created this website and our volunteers started to translate it into many languages. So meanwhile, we have 19 languages available there. You see them on the button there. And if anyone of you is speaking a language which is not yet listed there, please come to us after the talk and help us so that we can make 20 or 21 out of that. So yes, that was the first part. So that when you talk with people about that, you can point them to a website. The next step was that we started drafting an open letter which we want to send to politicians on all different levels. So we started talking with other organizations, developed a draft there, got some feedback, made some small readjustments to the letter. The CCC was one of the first signing it. There are others here whom you might also notice when you walk around here. And several of them were then the first initial people who signed it. And the idea there is that for different elections, for whenever there is a good opportunity to contact public administrations or politicians, we can point to the open letter, we can show them how many people and how many organizations are supporting this. And one of the easiest things for you to do if you want to support this campaign is to sign the open letter yourself and encourage others to do so. We are also especially interested in other organizations. Of course, it's very nice if the signing process works as smoothly as in this case with DBRN, we just receive a merge request and say yes. But we are also very much interested in non-technical organizations signing this open letter. So not so much the usual suspects, but organizations of civil society which also want to support this, or also public administrations themselves. So if you can help us there to get more people signing this demand, that's very helpful for us. Furthermore, we also said that we need a short video for people who don't want to read websites, who don't want to read open letters, something very short, very simple. And that's the video I also want to briefly show you. No, no, no. Imagine for a moment our government would treat our public infrastructure, like our streets and public buildings, the same way it treats our digital infrastructure. Our members of parliament would work in a rented space where they weren't allowed to vote in favor of stricter environmental laws because the owner, a multinational corporation, wouldn't allow that kind of voting in its buildings. Nor would it allow a long overdue upgrade to more than 500 seats. This means some members of parliament have to stay outside in the street. And a couple of blocks away, a brand new gym is already being torn down just six months after it was built. It's being replaced with an exact replica at great expense. And the only difference, the new manufacturer also provides streetball as an added feature.
Meanwhile, every night through a hidden backdoor in the city hall, documents that contain sensitive information on citizens, from bank data to health care records, are being stolen. But no one is allowed to do anything about it because searching for backdoors and closing them would infringe the signed user agreement. And as absurd as this sounds, when it comes to our digital infrastructure, things like the software and programs that our governments are using every day, this comparison is pretty accurate. Because mostly, our administrations procure proprietary software. This means a lot of money goes into licenses that last for a limited amount of time and restrict our rights. We aren't allowed to use our infrastructure in a reasonable way. And because the source code of proprietary software is usually a business secret, finding security holes or deliberately installed backdoors is extremely difficult and even illegal. But our public administrations can do better. If all publicly financed software were to be free and open source, we could use and share our infrastructure for anything and for as long as we wanted. We could upgrade it, repair it and remodel it in any way to fit our needs. And because open source and free software mean that the blueprint is openly readable for everyone, this makes it much easier to find and close security holes. And if something practical and reliable was created digitally, not only can you reuse the blueprint all over your country, the actual thing itself can be deployed anywhere, even internationally. A great example of this is FixMyStreet. Originally developed in Great Britain as a free software app to report, view and discuss local problems like potholes, it's now being used all over the world.
And then one other component which people also started using was that some activists spent their nights playing around with gobo projectors, those things where you can play around with light and project that on buildings. I have already seen some people around here who may have been part of these activities. And so, yeah, the reason for that is that afterwards it gives you some pictures which you can use when you want to bring that to the press, talk about it with the media, and also in general to have some images for something which is very abstract like talking about software. So, yeah, some people spent their nights in Frankfurt and in Berlin to project messages on public administration's buildings. And then one thing we did was that with the open letter and the video, we started contacting politicians for the federal elections in Germany in 2017. So, I know this is in German, I will briefly translate it. So, what we did there was that we sent an email to all candidates who were candidates for becoming members of parliament in the German Bundestag. And one experience we made there is that the feedback we received from politicians there was quite good. But what worked even better was that we encouraged people to contact their direct candidates themselves. And this is an example of a reply my mother received. So, she was also contacting politicians about that and asked them if they would be in favor of publicly financed software being published under free software licenses. And this was the reply by a conservative politician who is also a secretary of state from that party. And he wrote that he's very impressed by the work her son and the FSFE are doing, and that after the election he would like to have contact with us and talk with us about this, and that it's an area where politicians are also dependent on the expertise from external people. And so that's something where you see that with some very small messages, short emails, you can already reach people on a more personal level and show them this video. I mean, he was also replying in the email, oh yeah, that video makes sense, I also forwarded that to our experts. We will see what they say, but for himself at least, the politician said like, yeah, that all makes sense. We should do that. And the same political party went a few steps further. A few weeks ago they took a decision. So, it was the CDU, at their last party convention. They took the decision that publicly financed software should be published under free software licenses and be available for everyone, which was for us a big, big improvement there. Because it came from a party which for over 10 years was more blocking in this direction. And so that's of course something they say now, and we have to make sure to monitor this and to remind them about what they want to do so that things are actually implemented there. But I wanted to show it to you to show that it's often lots of people who have to do something there to make a change. And it could be one of the small emails which convinces one of the politicians to say yes when this is coming into the party convention and they have to take a decision there.
So we call it the I Love Free Software Day because we think that not just the flower industry should have some benefits there. And last year what we did on this day was that we prepared letters and roses and each member of the German parliament received such a letter and a rose from us. And in those letters we told them about free software, about public money, public code, and asked them to also be more aware about what free software is doing for our society. That was something, again, some opportunity to remind them about this topic and explain to them what we are working on. One other component we had was that beside the website and the video and the open letter, we wanted to have something for when politicians or political decision makers in general say, that all makes sense, what can I do next? How can I learn more? That we have something to hand over to them. So we decided that we wanted to create a brochure with more in-depth text, some more details about the topic. So together with other organizations who contributed text for this, we created a brochure for public money, public code, for the modernization of the public infrastructure of our public administrations. The brochure is now available also at our booth. You can hand it over whenever you talk with politicians about this topic and say here, something more for you or for your assistants to read. At the moment it's only available in English, but we are working on a German translation together with ÖFIT, which is a public body to help public administrations modernize their IT. So soon there will also be a German translation, which then hopefully will help people in Germany, Austria and parts of Switzerland to convince more people about this again. So, and by that I'm handing over to Bonnie, our current intern, who worked on this as one of her internship projects. Hello. Thank you for your applause. Bonnie. Yeah, all right. And just click. During my internship, as Matthias already mentioned, I worked on the public money, public code campaign, and I mainly did this by sending out letters to politicians. I wanted to get in contact with loads of administrations all over Europe to get them to sign the open letter, because so far we have only three of them and we want there to be more. Yes, so those three are the Junta General del Principado de Asturias, I hope I pronounced it correctly, the Samtgemeinde Elbmarsch and the City of Barcelona. How did I do this? I sent out some mails to politicians all over Europe, I looked up their addresses and I found out if they have any connections with software or free software in the town. I also did this with my hometown, which I'm still in contact with. I basically sent them some basic information about free software as Matthias already introduced to you. And I also told them about the benefits from free software in the public administration, like collaboration with other communities, giving back something to the people, saving costs, for example, through collaboration, and reducing dependencies. Also supporting the local economy could be a good benefit from it, and innovation, because you don't need to reinvent the wheel every time if you use free software in the public administration. Then I ended those emails with a question like how is your digital strategy and what are you planning for the next legislative period. For this I also created a wiki page in our wiki so that you find there an example email and also loads of information on how to do this and how to get in contact with politicians.
But then I also asked myself, well, are the politicians the only administrations we have? No, we have more. We have the universities, which are also public entities. Oh, now I need to click. And also schools. Those all use public money to fund their IT infrastructure. And also libraries. Libraries don't only have books, they actually also use software. And hospitals, for example, as well. Those are all public administrations who use public funds for their IT infrastructure. I also have some examples. Currently, as you already saw this picture in the video, by the way, there is a motion in the city parliament of Kassel. I know it's in Germany. And don't worry, later we'll have some examples from other countries. And that motion in the city parliament of Kassel asked the Magistrat to follow the guiding principle of PMPC in the purchase and development of new software. Sorry, so that they buy new software. Well, there will be a vote on this in mid-January in the responsible committee and at the end of January in the city parliament. This was already postponed a few times. So there's still time for all of you who live in the area of Kassel to get active and write them some emails. We already did this and we will be happy to help if you have any questions there. I also have another success story from a hacklab called Pica Pica in Asturias. As I already said, the parliament of Asturias signed the letter, and they were the ones who asked their parliament to do this. They got in contact with their politicians as well and told them about the benefits. Some parties were more interested in saving costs or supporting the local economy and others were more interested in collaboration with others. And they used all these arguments with the parties and got in contact with them and convinced them eventually to be the first parliament to sign the open letter. For this we have an interview on our web page and you can look up the whole story there. Then I also have another example that's currently happening in the parliament of Luxembourg. There's a motion that says we invite the government to actively promote and support free and open source software in public administrations and to publish every newly developed software which was financed from public funds as open source software. Currently there are at least three opposition parties, the Pirates, who have two seats, the CSV (EPP), who have 21 seats, and déi Lénk, I hope I pronounced it correctly (GUE), who have two seats as well, that are supporting this motion, and also Viviane Reding, a former EU commissioner from the CSV, is speaking in favour of the motion. So these are all examples where you can still get active and write some emails. And if you're now motivated to do this, to get in contact with your local administration, as I already said, it could be any administration, it could be a university, a school, or your hospital or your library in your local hometown where you're from. You can come to our booth and you find information material there. You can talk to me and I'm always happy to help you with this, and you will also find information on our wiki page that I created for this, and we are always happy to hear about stories like that. Yes, the booth is in the Mehrzweckfläche 3-4 in the cluster about freedom in the CCL.
You should be aware that changing things in the public administration is a steep, stony, rocky, long way, because for many, many years they were driving on a one-way road and it's very difficult for them to make changes and come out of those vendor lock-ins they drove into. And so be aware that when you reach out to politicians, political decision makers, to people in public administrations, some of them will not understand what you want to talk about. Some of them won't want to talk with you about that. Some might say yes, but how should I do that? There are so many reasons why we can't do that. So you have to be aware that it's something which takes a lot of time, and when you try to convince public administrations and contact some, there might be 20 of them who say no or who are not interested in what you want to talk with them about. But there might be then one or two where you are successful and where you can make a change. And in the end, I mean, the FSFE, we are working on this, many people in Europe are working on this campaign. We hope that we can encourage you to also be part of that for some time, to support us, to walk this way with us, to make sure that all the others are proceeding with that, that we can make changes there and that we can make sure that public administrations will publish software under free software licenses, so that every one of us will benefit from that, that other public administrations will benefit from that. And I mean, that's one of the quotes which my first teacher wrote down for me at the time, an African saying: many small people in many small places do many small things that will alter the face of the world. So please don't wait for some heroes, some other people, to do some changes, to fix some things which you think are important, but rather do something today, something small, start now to do something there, and in the end it's the sum of all those activities from all of us that will make a change, not some big activity by some person or one big organization, it will be rather the combined sum of all the activities, the small things which all of us will do together. So I hope that you will start today with this, and now we have some time to talk with you about your questions. Yes, looking forward to this. Thank you very much. And also thanks a lot to all the supporters of the FSFE who made it possible for us to come here. Thank you. Okay, so we have some time left for questions. You can see a microphone over there and over there. You can go there to ask questions. Yeah, the microphone to the left. Hey, thanks first for your engagement for free software, to you and to the FSFE. My question would be: when talking to officials, I often hear, okay, but we need this software now, and if we do not buy this software from this commercial industry, then we will have the problem that no one else will develop it, and they will stop developing it for us, or only at very high prices, because we want to have it open source. What do you answer to this complaint? I mean, there are definitely cases where something like that is the case. So that they say, well, we want to procure software for this area. There are just two providers for this at the moment and they say, well, we know that there are just two providers and we will not offer it as free software. It's very difficult for the public administration to do something about that. I think it depends a lot on the area there.
There are areas where the software has a lot of influence on our society where we should also push political decision makers more to make this mandatory that the software needs to be under free software license. There are other areas where maybe we don't have to focus our resources on this case at the moment because there are others where it makes more sense to focus on them to also find some examples where it's easier to do it as free software. So you can already show people what benefits they have from this, show the public administrations, how much they can benefit from that so that over the time they build up competency, how to deal with that and can then also easier tackle those difficult problems. But as I said, I think there are areas in which the software has a huge impact on society and on the people's lives where we should not just accept there, well, the companies say no, but where we should think hard about if we should not invest more resources, more money in this, that we develop solutions like that as free software, that we also reach out to other administrations all over Europe and think about if maybe resources could be combined to tackle such a problem. Thank you. Okay, now for the microphone on the right. Thanks guys for your work on this topic. The question I have is, do you have some resource on positive examples of software that is already open source and used in a lot of administrations that you can refer to when trying to convince people? There was already mentioned to fix my street in the video, for example, and for us I will have to think. I mean, in the brochure we mentioned some examples of software which is widely used, so there are some standards. There is on the European level the sharing and reuse award and it's something where the European Commission, they give out awards to public administrations who develop software which is then also reused by other public administrations in Europe and lots of them are published under free software licenses. So I would encourage you to have a look at those examples there. I'm a bit hesitant to mention concrete examples because one of the experiences we made was that sometimes people just look at some of those projects. For example, with Munich, before there were lots of people always there, Munich it's the lighthouse project of free software and then everybody is looking at that and they don't look around what else is happening and then through political changes things are not going that well for one of those projects, then people are so disappointed and think that this one thing, that's the big part there. I mean, I'm convinced that we don't need those big lighthouses but that we have many stars in the sky where people are doing something good in public administrations. So look at those winners of those awards, look at our brochure, what examples there are and then find those which you like personally because they are from an area which you care about and then use them to explain this. Yeah, exactly. I was looking for a list with all these projects to hand over a list to show that this is a big thing and not only individual projects, exactly like you said. So you don't have a list on your website? In the brochure there are examples and there are also references to other resources like for example the European Commission has joined up which published case studies about free software usage in the European Union. So those are very good resources to find good examples for yourself. Thanks. All right. 
Are there any questions from the internet? No. Okay, then let's go with the microphone on the left. Thanks for the talk. I'm interested in working in this area but I do not know about any funding. Like is there a European and German funding that will allow to fund a company or just work and get money from the European Union? When you have a project idea or to work on a project to finance yourself? You mean that you want with your company to develop free software for the public administration? Yeah, for example, like that you can live of it, not do it in your free time after 40 hours, do 40 hours for that. So when you want to make money with free software in this area you have to follow the same procedures as other companies have to. So there is a procurement process. You have to make your bid there in the procurement processes. And our goal is that public administrations will say more often that they want to have free software for this and that they don't allow proprietary software for those solutions. So that when you are making an offer that you have disadvantage there that you don't have to compete with other companies who don't want to provide those rights to the public administrations. But we don't at the moment do anything there that free software companies themselves that there is any program to just make money with providing free software to public administrations because that would be something which is against the procurement laws there. I mean there are some smaller things where you can get funding. For example, there are project funding opportunities for free software project about new internet technologies which might also be something for public administrations with the next generation internet. You can apply there for funds. That's one thing where we are also involved to help with license issues. And there is for like smaller things when it's very new. There's also funding from the prototype fund to develop some software which can afterwards be used for the public administration. But beside that, I mean you have to go the normal way. Our goal is to increase the demand for free software in the public administration and to have also regulation that it's very difficult to procure proprietary software. Thank you. Alright, then let's go with the microphone on the right again. Hello. So you probably know that in Italy since June of this year we have a law that requires in most of the case to use, reuse and implement free software from the public administration. So somehow there's a law. So we are entering into a new phase where there is the need to monitor the implementation of the law and to act when the law is not respected because almost no single authority have the duty to control this kind of very complicated software and public procurement law related things. We are going to implement a project to make a monitoring community around that. But I wanted to ask if there are other existing initiatives that already entered into this phase of monitoring public procurement of software acting on it legally and I mean not just advocating for it but enforcing it from a civil society perspective. You're from Hermes Center? Yeah. Yeah. So yes, we are in contact with, don't know who exactly from you, we're in contact with Hermes Center probably. Okay, with you. Perfect. Thank you very much for your work there. 
I mean, that's something which is very important now: whenever we reach the state that a law is introduced or a party is making a commitment, that we monitor this, remind them about it and see how it is actually going. So for Italy, this law says that before a procurement, the public administration should check if other public administrations already have software which fulfills this need; if not, they should check if there is free software available to solve this problem, and only if that is not the case are they able to procure proprietary software. Well, the thing is, in political processes you have a law, so someone formulated something, you have a decision about something, and then comes the part about policy implementation, and that's the part which is most complicated and where most policies fail. So it's not so much that people cannot vote yes or no, we want to have this or not; the problem is in the implementation phase, and that will be the hardest part, and there we need the help of many people, many organizations, to monitor what is actually going on there, to then again remind them about it, to readjust laws, to think about what other things can be done to set the right incentives. Some laws say it is forbidden to do this; others say if you do it that way, then you have lots of benefits. So we have to see how to make such changes in a way that is sustainable, so that the public administration is not so much forced to do something by law but sees that it has advantages when doing that and wants to do that, and then it will be much easier. So, yeah, in that case, thanks a lot already to the Hermes Center, which is working on this for Italy. We hope that we are able to monitor this for other countries as well, together with other organizations. So, yeah, support organizations like the Hermes Center and others who are doing this. So, are there still no questions from the internet? No, then we'll go with the one on the left again. Yes, there are initiatives, even big ones, so if you don't focus your business on public money, then actually I believe there's more money in open source consulting than in proprietary consulting. That has been our experience. Can you come a bit closer? At the moment Germany, together with France, is starting the Gaia-X initiative, which was a little bit ridiculed because nobody expected this from the German government, and I think they had to learn a lot of details about this initiative, and there is a lot of money waiting to be pumped into the open source communities. On the other side, our experience is that the open source community is not prepared, for example, to maintain projects in a stable state for 50 years. We are talking about projects which are critical infrastructure, which means you have 20 years of maintenance to guarantee. Let's say Python for 20 years, and then normally this gets extended to up to 50 years. So do you believe that Python 3 will be supported for 50 years? We have to solve the maintenance problem. On the other side, the proprietary software vendors have the same problem, but you find Windows 95 systems in critical infrastructure which have not been updated for 25 years, if I compute right. So there is big pressure, and politics has recognized it. But we have to organize business into the communities, which are not prepared to do business.
My opinion is you cannot do anything without doing business even in open source and we have to prepare the open source community for playing well with the money which is available in a way that they can take it and maintain it in a fair way. I think that's important. Okay, so we'll just take the next question from the mic on the right. Hello. Just a quick example of a project open street map I think is a very good example. It's a completely open map and I think there's a lot of cases where that could be used in public administration. I know all the tram information maps and Dublin have it. I want to ask is there a plan to bring public money, public code to the EU level because I think it's next year the seven year budget, the multi annual financial framework is going to be renegotiated. So it might be worthwhile just putting a requirement for public code in there. And also, could we look at developing pan European tendering processes for open source and could we develop that so say five or six administrations from different parts of Europe would work together to put together a joint tender to say improve an open source project if their needs were similar. I don't know had someone thought about that or could we maybe develop the policy a bit to include that. So I mean, we already are in contact with the European level. We are talking with politicians there and also contacted them with those demands and are thinking about how to move forward there. I mean, you have already the sharing and reuse award by European Commission where they say the sharing and reuse should be the default for public administrations. So that's something where we are encouraging them to go forward there and help them how to do this. And we also next year we want to reach more members of the parliament and think about how to how to establish this then after now most of the politicians got a bit understood a bit how the institutions work and now you can also start with them working on the content. So we plan if we if we are able to get funding for our next year we want to make sure that every member of parliament gets this brochure that every member of parliament will be when they are in the right working groups that they will be contacted. So that's something where we want to do this and we also I mean we hope that by spreading the news encouraging people to contact their members of the European Parliament as well that we will make a bigger impact there. The second question was about that meeting. Tendering at the tendering. Yeah, so different countries working together on an open source tender. Sometimes in something like that works that you bring a few people a few public administrations together and then think about that they share the costs and a procure your software. So in a lot of cases this also slows down the process and makes it very difficult and the failure rate is very high. So you have when you have seven people and they say what they want to have for a project and then they start with something and then you see okay we have to make some compromises. So it's very difficult to find to take a decision with seven administrations. Then if you have one or two. 
So that's something which I'm a bit hesitant to do as a first step. At the moment I would rather encourage public administrations to do their smaller projects with free software and gather some experience there, and then think about enlarging it, pooling it, doing it together with others, because otherwise I see that the failure rate will be way higher if you combine this. Because, I mean, public administrations' processes are often a bit slow and complicated, and for software companies public administrations are not always the best customers in this regard, and when you then don't have one public administration as a customer but seven, that is something that gets very difficult for us. One last thing on the first question, that's what I was talking about: we also need you to step in and get in contact with your representatives in the European Parliament, because we can do this, yes, and we are going to do this, but we need more and we need you, because many people in many places can do big things. Sorry. Yes, so we have about two minutes left, which fits quite nicely since we have one more question on the left microphone. Yeah, thank you very much for the presentation and for the campaign. I think it's not only sensible but definitely necessary, and coming back to your point about what we can do: I would like to talk to some of the decision makers or local politicians, and I think the brochure would be very helpful at that moment. You said that the German translation is in progress. Yes. How would I go about receiving an actual printed copy to take with me once that has happened? You can go to our website, and there you find the contribute and spread-the-word page, and then you can order some. You can also go to the Public Code website, publiccode.eu, where you also find spread the word, and there you can order all the material and have it sent to you for free. Thank you very much. You're welcome, and you can also come by: we have some copies at our booth, together with some other advertisement materials for Public Money? Public Code!, lots of stickers, t-shirts, bags and so on. And yeah, in general, when you sign the open letter we also try to keep you up to date on what is happening; when you do this, you will also get information when, for example, the German translation of the brochure is available. Yeah, so those are ways to do that. So I think just one last word. Use those tools, get creative. Hack your public administration there, let us know what experiences you make, give us some feedback so we can readjust our tools, so we can think about what else is needed for this. And don't wait. Just think about now: whom could I contact, my member of parliament, a local politician, a public administration, and start doing it. Thank you very much. Thank you.
|
Do you want to promote Free Software in public administrations? Then the campaign framework of "Public Money? Public Code!" might be the right choice for you; no matter if you want to do it as an individual or as a group; no matter if you have a small or large time budget. More than 170 organisations, and more than 26,000 individuals demand that publicly financed software should be made publicly available under Free Software licenses. Together we contacted politicians and civil servants on all levels -- from the European Union and national governments, to city mayors and the heads of public libraries about this demand. This did not just lead to important discussions about software freedom with decision makers, but also already to specific policy changes. In the talk, we will explain how the campaign framework including the website publiccode.eu, the videos, the open letter, the expert brochure,and example letters can be used to push for the adoption of Free Software friendly policies in your area; be it your public administration, your library, your university, your city, your region, or your country.
|
10.5446/53030 (DOI)
|
So, it is both a pleasure and an honour for me to call Oskar onto the stage. He is a web developer, typographer and of course also an antifascist. So a warm welcome for The Technicalist Political Society and Resistance. Welcome, my name is Oskar, my pronouns are he, him, his, in case you want to refer to me after the talk. I want to talk about the things technology has done to our society, which are not necessarily positive. Okay, so, some last remarks before I start, and for this I want to call the PSA dolphin onto the screen. So, everything I say has been said before, and I'm not the one who said it. They were mostly queers or women or people of color, sometimes all of the above, who said it. And I don't trust you to believe me, but I would ask you to believe them. Secondly, this talk is very much an introduction. I will cover a broad range of topics, but I will not go into depth on some of them, so there will be questions left unanswered. Yeah, so just please keep this in mind. And lastly, if you want to see the slides, you can find them at r.ovl.design, or by scanning this QR code in the top right corner, or after my talk I will tweet them at underscore ovlb. Yeah, these are, I think, the opening remarks, so let's go somewhere. I want to tell you a story, a story that has some facts, that has some opinions, that has some swearing, because I'm kind of angry at some of the things I will talk about. It's a story about connections and connectivity. And as we live in the 21st century, I would kind of be remiss not to at least briefly talk about the internet. And I would like to start with a bit of an overview of the internet. Spoiler: there is no cloud, there are just other people's computers. So, like most great things, the internet started from very humble beginnings. In December 1969, like 50 years ago, the predecessor of the internet, the ARPANET, which stands for Advanced Research Projects Agency Network, consisted of just four universities connected to each other. In October 1969, the first message was sent on the ARPANET, and it was 'lo'. The connection dropped in the middle of the message, which was sent from the University of California in Los Angeles to Stanford. It was probably supposed to be 'log in' or 'login', but time can't tell us anymore. For us as technicians, this is kind of a relief, because even the ARPANET started with a bug, and every bug we wrote has a historical predecessor. Now from then on, things evolved pretty much. This is a schema of the internet from around 1997, from the Federal Office for Information Security. It's a bit oldish and a bit funny maybe, but it still roughly holds up, even though we have no ISDN anymore. But roughly the structure is the same. We have internet service providers and we have content providers, people who host websites. We have the domain name system on the right side there. And then we have clients. And they had the foresight to make the internet a cloud thing, even though I think in 1997 this wasn't really a word yet. But it's only a word. It's no infrastructural reality. The infrastructural reality very much looks... sorry, I didn't switch anything up. Today we managed to connect more than 50%, almost 60% of the world's population to the internet, which is kind of amazing given the size of the Earth. But this also means that almost 40% are not connected to the internet.
And how they will be connected to the internet will very much determine how they experience the internet. For us, we probably have this promise of the internet in mind, that it was going to give voice to the voiceless and power to the powerless. This is a quote from an article by Mike Monteiro, an activist and writer who writes a lot about technology and power dynamics and how we have to fight. And I will quote him more often as we go on. Yeah, so this is kind of the promise that we maybe grew up with, or that we experienced when we got connected to the internet. Now, quickly, reality check: this is more likely the central infrastructure of the internet, the deep sea cables running around the oceans. This is a projection of how it will look by 2021. You can see basically every continent is connected by many strings. This is a graphic from the New York Times. They ran an article where they explained the technology behind it. And I forgot one thing: at the link for the slides, there's also an exhaustive list of resources. Basically everything I say has a kind of going-deeper article. I will also show the link at the end again. So if you're interested in something that I mentioned, this page will help you out. This article, for example, is linked there. But this graphic also shows the yellow strings. And the yellow strings are cables that are owned or will be owned by Amazon, Facebook, Google or Microsoft. So they are owned by private companies. And as we see the breadth and the width of the internet and how many people are connected to it, it feels like a public good, but still more and more of the infrastructure, of the central infrastructure, is privately owned. And I think there's something clashing there. And it is okay as long as these companies don't make their interests count, but I think they will at some point. And besides the privately owned infrastructure, there's also another thing. More states are trying to centralize the internet infrastructure into central access points. And the Mozilla Internet Health Report 2019 reported that there were 188 shutdowns in 2018. A shutdown is not a router failing or something. A shutdown is a region or a complete country disconnected from the internet, basically not being able to use it anymore. For example, largely unnoticed in the western media, the longest shutdown of a complete region is currently going on in Kashmir, in India. It has been offline for four months. This is the longest such shutdown in a democracy in the history of the internet. There have been longer shutdowns of single services, social media for example. But the whole of Kashmir is currently not connected to the internet anymore. And this is just one of many cases in 2019, also currently going on. But even if people are connected to the internet, their connection to the internet might look like this. They probably, and sometimes only, experience Facebook. There has been a survey by a researcher called Helani Galpaya in Indonesia in 2015, where she found the staggering result that more people are saying they're using Facebook than are saying they're using the internet. And she thought at first there has to be something wrong with the numbers, because obviously Facebook is using the internet. But they are so connected through Facebook, they basically just use the Facebook app, and they are so entrenched in the Facebook ecosystem, that they think Facebook is this thing. There is no internet, there is just Facebook.
Now, if you are working for Facebook, you can say this is a success. I think that's dangerous as fuck, because we know how magnificently Facebook manages to destroy basically everything they touch. Okay, so this was kind of the basic connection part. The thing with Facebook centralising access to the internet also shows something else. And I want to be a bit utopian in the next section. It's called the web we lost. And this is a title that I kind of stole from Anil Dash, an activist and an entrepreneur working on trying to build meaningful technology. He wrote this piece in 2012 and gave many examples, like Technorati, which some of you might remember, and which had already been offline for years back then. So this web we lost might be no more than a distant memory for some of us. Some maybe have never experienced it. And he starts this article like this: the tech industry and its press have treated the rise of billion-scale social networks and ubiquitous smartphone apps as an unadulterated win for regular people. He goes on to say they seldom talk about what we have lost along the way in this transition. I want to talk about some of the things we lost. And I want to start with maybe an unlikely candidate. This is Tom from MySpace, who some of you might remember, because he was the first top friend we got on MySpace and stayed our top friend basically till the end, until he luckily lost our data. An unlikely candidate, because MySpace very much feels like a centralised platform; we had just MySpace and we had our MySpace profile. But MySpace enabled something. MySpace enabled something like this, which probably isn't the top of web design ever. But, and this is vitally important, this is personal. This is something someone built. Someone sat there and added HTML and CSS to their MySpace profile and made it their own. They didn't necessarily own the infrastructure, but they owned the design. It was their personal MySpace page. No other page looked like this, for better or worse. My pages probably looked even worse, but it enabled us to learn CSS, to learn HTML, to get an entry point into working with technology. To quote Mike Monteiro again, from the article I mentioned before: at the beginning of the internet, or the world wide web, we put our stories and songs and messages and artwork where the world could find them for a while. It was beautiful, it was messy, and it was punk as fuck. And how punk it has been. This is a screenshot of a GeoCities page, which was another hosting platform where you could put your important things and kind of work with the web. It reads, in large Comic Sans letters: if you study the material on this website, you will hopefully understand what our purpose here on Earth has been. This page is intended to be useful, it's written in smaller letters. At the top, I don't know if you are able to read this, it reads: welcome home. And I feel very much at home, and I believe someone had the time of their life building this website. Really, there must have been a ton of fun happening there. And then some usefulness, maybe, I don't know, because I can't look it up. Today we are more or less stuck like this, or most people are stuck like this, with the standard Facebook news feed, and there's basically nothing we can do about it. We can post links there and stuff, but we can't make it beautiful, we can't make it ours anymore.
And now, of course, I am a designer, I talked a bit about design and I used design to illustrate this, but we did not only lose the design. To quote Anil Dash again, we lost key features that were fundamental to the web. In his article he gives examples of how the big technology companies used to cooperate to spread good ideas, something that does not really happen anymore under platform capitalism or surveillance capitalism. Twitter builds embeds and proprietary formats for the data that you put on a web page, for Twitter, and Facebook does the same, just for Facebook. There is no collaboration anymore; basically, every platform just tries to win for itself. We lost the ability to make things our own, and overall we lost the ownership of our data and our content to the platforms, instead of building our own infrastructure. I would go so far as to say that we lost ourselves. But still, the web can be a plethora of all the cool things. There are still very great websites. There is shared knowledge that helps other people. There is still a lot to discover on the web, and the web can be all of that. We should keep in mind that the web is not necessarily the same as the internet, but we will talk about that in a second. The important point is that the web really can be anything, and I think it is very important that we are aware of this and that we preserve it: that we stand against the states that centralise infrastructure and against these shutdowns, that we stand for net neutrality and for open access, for our data and our content. One person I want to mention here said that we have to protect safety over free speech, and she said this in the aftermath of the Christchurch attack in March. And she goes very much into how, through the way technology is built, endangered groups are threatened, not only on the world wide web but in digital spaces in general. We do not have to accept everything that is said, like, I mean, the free speech people, or that 9/11 was an inside job. And I have a very simple answer to this, which is: shut the fuck up.
But with 'shut the fuck up' you have not actually won any kind of debate, so of course, even if you are right, this cannot be the end of our work against this. We have to somehow talk about what free speech is and what free speech protects. Ultimately, it is a question of whether we give space to bad ideas, even though we have a legitimate right to object. And I want to stress this: we have a legitimate right to object. There is nothing in the world that forces us to step back and not say something against hate, online or in the real world. We have the right to object, and we have to use it. So, yes, the web is for everyone, but we cannot give up the right to object online. And of course this is not only a technical problem. It is not only a technical problem because technology companies rely on marginalised workers in data centres to filter content all the time, which is a very, very real problem with all its psychological implications. And it is not only a technical problem because you cannot simply kick out the hate. Society is not an IRC chat room where you can kick people and they are gone. That is not really how opinions work. Okay, so if it is not a technical problem but a societal problem, it makes sense to maybe talk about society. I have called the next section of the talk 'we broke it', and by we, I mean white people. I want to begin with one of the problems. Problem number 1 is that white people give money to white people to solve the problems of white people. In a society that is already very white in the first place, this puts the wishes of white people ever more at the centre of the solutions that are built, of the discourse, and of the technology. And basically everyone else is excluded, and this becomes a self-reinforcing cycle, which I will come back to. Problem number 2 is that white people give money to white people to solve the problems of white people. And whenever we problematise this, often one or the other of the employers or of the employees of such companies shows up and says: but we didn't mean to. They didn't want to build companies or products that exclude not-male people, and to some extent I want to believe them, and I want to believe them because if they did mean to, they would be full-blown shitheads, they would be like assholes, basically. I don't want to believe that people are that bad. Maybe. And it is okay that they didn't mean to. But that they didn't mean to does not solve the problem, because the problem here is, to quote Tatiana Mac, that intent does not erase impact. It does not really matter how you intended your technology, or what your intentions were while you were building it. What matters is the impact on the people who use it, and if they feel discriminated against, if they are discriminated against by your product, then it does not matter that you did not want to build a discriminatory product. Tech is not neutral, and it is never neutral. The technology that we build can and will always be used for a purpose.
That means that we are not neutral either. We cannot just say we put it out there and see what happens, because that is not how it works. We have to take a stand. In 1985, the scholar Joseph Weizenbaum was interviewed, and he said it is not possible for a scientist or technologist to not know, or not care, how their technology will be used. That was already 34 years ago, but the problem has not really changed. Tech has not really gotten better at this, and we can change that. That brings me to my next point. I want to call bullshit on Facebook. Facebook PR representatives, reacting to publications about their ads being discriminatory, say that inclusion is core to the work of a company that makes 98.5% of its revenue from advertisement. When they say that inclusion is core to their work while their products keep discriminating, then either they know about these problems and ignore them, or they do not know and do not understand the impact of their own products; either way, they are just saying something. Then we have the future of people being decided by predictive policing, and the reality that these algorithms discriminate against people of color again and again. We have Amazon handing photos from Ring home security cameras to the police, without any control over what the police do with them. And if you do not control the police, the people in this room probably know that this does not go well. And then we have companies that act as the modern IT departments of the police and, together with the police, pile up huge amounts of data on people, for example with Hessendata here in Germany. And they do this all over the world to predict policing. They use algorithms that are supposed to predict the future. The problem with these algorithms is that the underlying models are wrong, because the algorithms are given data from the past, and from that past they are supposed to predict the future, and if you do not control the algorithms, that goes wrong. A very famous example is Google's Photos algorithm, which labelled Black people as gorillas. Google's fix was to simply remove the label, and the underlying problem is still not solved. And that is a very serious problem. The same with a shiny credit card: again, they said, but we didn't mean to. But you did. You did. You didn't mean to, but you built technology that did, basically. So it really does not matter if you meant to, and it really also does not matter if you put race or gender as an explicit input into the algorithm. To quote the researcher Rachel Thomas: even if race and gender are not inputs to your algorithm, it can still be biased on these factors. Machine learning excels at finding latent variables. And if you don't control your algorithms, these latent variables will result in a runaway feedback loop, the algorithm over and over reinforcing itself and shaping the next set of data on which it learns, until suddenly the bias is so ingrained into the data that you can't do anything about it anymore. To quote Nushin Yazdani, who I think concisely and rightly said: data from the past should not build the future. We can use data from the past to learn about the past. But what we're currently doing is trying to predict the future, and this has to fail.
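To make the runaway feedback loop described above a bit more concrete, here is a deliberately simplified toy sketch. It is not based on any real predictive-policing product; the district numbers, the rates and the greedy patrol allocation are invented purely for illustration, under the assumption that predictions are retrained only on what the system itself records.

```typescript
// Toy sketch, not any real system: two districts with the same true crime rate,
// but district 0 happens to have slightly more arrests in the historical data.
function runFeedbackLoop(rounds: number): number[] {
  const recorded = [12, 10]; // arrests the "model" has seen so far
  const trueRate = 0.1;      // identical underlying reality in both districts
  const patrols = 100;

  for (let r = 0; r < rounds; r++) {
    // The "prediction": send all patrols where most crime was recorded before.
    const target = recorded[0] >= recorded[1] ? 0 : 1;
    // More patrols there mean more recorded crime there ...
    recorded[target] += patrols * trueRate;
    // ... which the next round's prediction is trained on. The other district
    // generates almost no new data, so the tiny 12-vs-10 gap runs away.
  }
  return recorded;
}

console.log(runFeedbackLoop(20)); // prints [212, 10]: a small initial bias, hugely amplified
```

The point of the sketch is only the dynamic: the data the system collects is shaped by its own previous decisions, so the disparity it started with keeps growing even though both districts are identical in reality.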
This is something that this data can't do. And Tatiana Mac said that the technology that we use is accelerating at a frightening rate, a rate faster than our reflective understanding of its impact. But we still let this technology evolve ever faster and faster, and we still don't really try to understand what is happening there. Or if we do, we do it in kind of niche groups, but not as a society as a whole. This is, I think, also a problem of which humans have access to computers. So I want to talk about means of production for a moment, or a very short history of computing. I want to start with a kind of imaginative exercise. So for a moment, please imagine a programmer. Okay, I guess it's very likely that you came up with someone who, more or less, plus minus the moustache maybe, looked like this: a kind of white, able-bodied male person. And rightfully so, because programmers today mostly look like this. Hi. But next question: for a moment, please imagine a computer. And chances are, again, that you kind of came up with something standing on the desk here, like a modern-day laptop or something. If you know about the history of computing, you maybe went a step back and thought about something like this, these room-filling, big, giant computing machines that were doing less than the phone that is in our pocket, but were still part of this history. I want to go further back in this history, because I think it's a very important part of it. I want to go back to the year 1892. In 1892, the New York Times posted a job ad. It reads: a civil service examination will be held May 18 in Washington and, if necessary, in other cities, to secure eligibles for the position of computer in the Nautical Almanac Office, where two vacancies exist. One at $1,000, the other at $1,400. The examination will include the subjects of algebra, geometry, trigonometry, and astronomy. Application blanks may be obtained of the United States Civil Service Commission. Now, this really does not sound like they were searching for some kind of machine, right? And they didn't, because they weren't. They were searching for humans, and more specifically, they were searching for women. Historically, and until the middle of the 20th century, a computer was a human, was a woman. They were doing really complex and amazing mathematical computations. At Harvard, for example, they were calculating how stars travel across the sky. And they did so so accurately that some of the data they calculated back then is still in use today. And women were so synonymous with computers that by the mid-20th century computing was so much considered a women's job that mathematicians would guesstimate their horsepower by invoking girl-years. And when they referred to mechanical computing machines and were describing the units of machine labor that one of these machines had, they described it as a kilo-girl; where today we talk about kilobytes, they were referring to kilo-girls. And it all started with her. This is Ada Lovelace. Now, finally, we have a room named after her, the biggest room of this conference, and rightfully so, I think. She was working in the mid-19th century, and together with Charles Babbage she worked on a thing called the Analytical Engine, which was really an engine, steam powered and such, which was doing calculations. And she's widely regarded as the first programmer, because she wrote the programs for this engine.
Now, going into the mid-20th century (it's a very short history of computing), we for example have women working as telephone operators. And when the first computers came up, they were like these huge things I just showed, and these women had the skills necessary to work on these machines. So some of the women who were programming the first real computers, the ones that we kind of know or understand today as a computer, were for example Grace Hopper, working for the Navy during the Second World War. Or we have the ENIAC Six, namely Kathleen McNulty, Betty Jennings, Elizabeth Snyder, Marlyn Wescoff, Frances Bilas and Ruth Lichterman, working at the University of Pennsylvania on a machine called the ENIAC, which is generally considered to be the first all-purpose computer. So they were calculating missile ranges for the Army, but the idea was that this computer should be able to calculate basically anything. Or, in Great Britain at Bletchley Park, we had the Colossus Mark II, a code-breaking computer, which broke the Lorenz-encrypted messages of the Nazis basically in real time. And here we have two women operating it, namely Dorothy Du Boisson and Elsie Booker. And this continued well into the 1960s, I think, where people working with computers were mostly women. They weren't doing punch cards anymore, no, they had like keyboards and stuff, but they were mostly women. And women also were among the people creating the first compilers and high-level programming languages, so the things we probably interact with today. And then a question was raised, which is: what is a program, or who is a programmer, really. And it was raised, among other things, because there was a shortage of programmers. And some newspapers at the time talked about a software crisis; it was more like a labor crisis actually, and I think it still really isn't resolved today. And this crisis, or 'crisis', went as far as NATO holding a conference in Germany, in Garmisch. They didn't include any women on the guest list, so basically just male people deciding the future again. And here they made the change. As Claire Evans wrote in her history of women in computing, the most significant change they made arguably was semantic: programming, they decided, was henceforth to be known as engineering. The title we probably use today. The shift in vocabulary was a shift in focus. It was focusing more on technical skills than maybe on the skills that humans have to possess, the so-called soft skills, working with others, collaborating. But from then on you had to be kind of an engineer, a very serious thing. In this crisis there were also more concrete factors at play, for example a lack of childcare, a lack of mentoring, and of course, because we live in a patriarchy, wage discrimination against women. So there were also other factors driving women out of the workforce. Factors I think that are still very relevant today. And these changes, or these problems, worked: this iconic Apple ad, introducing the Apple II, is from 1977, so roughly 10 years later. And here we see the woman back in the kitchen, doing the household and very admiringly looking over at her man, who is in the front, very earnestly working at a computer with some kind of economic data, so he's probably destroying the economy, I don't know. But this ad shows that in the public view, programming and working with computers was getting manlier by the day, and it still is today.
Now, what we still have today is again white men who hire white men to solve the problems of white men, and they hire their peers, basically, in their companies, and by that they reinforce this cycle, this focus on a male workforce that we have today. And I think we have to broaden the access, and we have to break the cycle. Now, there are organizations like, here in the club, the Haecksen, or globally Women Who Code, who are actively working on getting more women and more non-binary people, basically everybody who's not male, into the computing workforce. There are feminist tech spaces, and we saw a talk yesterday on this stage about feminist tech spaces, for example the Heart of Code in Berlin, which tries to make programming more accessible and to teach it to more people. And what they do is fighting, and this brings me to my last section, which is named You Gotta Fight. We have to fight, because this shows the breadth of things, and these are not the only things, I had to leave some out, because otherwise I think the talk would have been 45 hours instead of 45 minutes, probably. And I want to talk about ways we could fight. The first one is maybe a bit unlikely. I want us to be playful. I want us to work with technology in a way that is playful, that's fun. Because being playful is one of the ways that we can use to reclaim the web we lost that I talked about, but it can also create different entry points into technology again. Cassie Evans said, keep making fun stuff, and talks about web animations with SVGs, where you animate graphics. Cassie Evans does stuff like this, which maybe isn't real algorithmic magic, but it's super cute and super amazing, and it's code. It's code that does it. It makes the jellyfish go up and down, and smile and blink their eyes, and I think it's super cool. And she also does stuff like this. This is a blob that is a bit shy and stops dancing when you look at it. To do this, she uses the Shape Detection API that's built into Google's Chrome browser. It's available behind a flag, and the Shape Detection API has the ability to detect faces. Now, this kind of sounds dystopian if you weren't animating blobs with it, but you can also use this technology, which at first sounds a bit dystopian, to build good things. Tech maybe can be plain useless. This is a Twitter bot called EveryDNA. It just combines two emojis and builds a double-helix DNA out of them. This is the gay rocket DNA, and I think it was about damn time that we have this. And it's more or less useless. It's fun. It's built with a tool called Cheap Bots, Done Quick. So there probably wasn't even a lot of time involved to build it. But it's still great. I love it every time a wonderful combination of these pops up in my Twitter feed. I'm lucky. I'm happy. So, to quote Cassie Evans again, in a world that becomes more dystopian by the day, we can also use technology for good, and it's important that we use technology for good, because there's something in this playfulness that's more than fun. There is something about this playfulness that we can use to teach technology, and that we can use to build communities. And over the course of computing history, it was mostly not-male people doing this kind of thing. So, also to the men in the room: please be more fun and build more communities. And we can also use this technology to educate ourselves and to educate others.
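As a rough illustration of the shy-blob idea described above, here is a minimal sketch. It assumes the experimental Shape Detection API is enabled behind a browser flag; the function name, the CSS class and the idea of polling a camera feed are invented for this example and are not taken from the actual demo.

```typescript
// Minimal sketch: FaceDetector is an experimental API (behind a flag in Chrome),
// so feature-detect before using it. All names here are illustrative only.
function pauseBlobWhenWatched(camera: HTMLVideoElement, blob: SVGElement) {
  if (!("FaceDetector" in window)) return; // API not available in this browser

  const detector = new (window as any).FaceDetector({ fastMode: true });

  setInterval(async () => {
    try {
      // Look for faces in the current camera frame.
      const faces = await detector.detect(camera);
      // If someone is watching, a (hypothetical) CSS class pauses the dance animation.
      blob.classList.toggle("is-shy", faces.length > 0);
    } catch {
      // Detection can fail on individual frames; just try again on the next tick.
    }
  }, 500);
}
```

The design choice here is simply to keep the "dystopian" capability local: the frames never leave the browser, they are only used to toggle a class on an SVG.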
We can use this to learn stuff, but we can also, and this is also important, use this to unlearn stuff, not just forgetting, but actively unlearning ways of thinking that we learned, for example, in a racist society, where we have to make a conscious effort to unlearn them. We can't just forget them. We can build book clubs or communities in which we exchange ideas about technology, for example. And it was Grace Hopper, actually, in 1968, who already said that you need people with vocabularies. And what she means is that we as technologists need vocabularies that are not just strictly about technology, to be able to interact with the world as a whole. And the more technology shapes the world as a whole, the more of these vocabularies we need, and the more important it is that we welcome people who know these vocabularies into our circles. One project that's very near to my heart currently is Self-Defined. It's built by Tatiana Mac and maintained by a community. A modern dictionary about us: we define our words, but they don't define us. It's available at self-defined.app. And it's basically a way to kind of reclaim language, and not to make language ours, because I'm a white man and language is probably already mine, but to make language again something formed and defined by the people who are impacted by it. And this also brings me to the next thing, which is that we have to hear and amplify the voices of endangered groups. We as white people have to use our safety to stand up for people who are targeted by racism, because we have safety and we have privilege in this society, and we have to use this privilege. Of course we as men also have to use the safety we have in a patriarchal society to fight against it. And of course straight people also have to use the safety they have as straight people to fight against homophobia, to stand up. If a dude in your workplace makes a homophobic slur, then stand up and say something, don't just sit there; make your voice heard, because your voice is actually heard more. We have to change this, but as long as it is this way, we have to make our voices heard. This isn't strictly about empathy, right? Because empathy requires you to feel the way the other person is feeling. And I think this can be a trap, because I as a white person cannot possibly know how it really feels to go around the street day by day and be targeted by racial microaggressions or something. I as a man cannot know how it feels to have a Twitter account and be bombarded with all these kinds of bullshit comments, because I mostly just don't get them. So to quote Kim Crayton: empathy relies on someone else being able to understand the suffering of others in order to do anything about it. And she goes on to say: we don't need to understand the suffering of others to take action to minimize their pain. We only need to be aware that the potential for suffering exists. We need to be aware that they are speaking up, we need to be aware of the dynamics. And it is this awareness that makes it possible to consistently act. It's not the empathy, because we just might not be able to feel it, but we can always be aware. Last thing: organize. Of course we have to go on the streets, and we have to take collective action. Not only if you're currently feeling the impact, but by being aware that others are feeling the impact or might be in danger of feeling the impact.
We have to prioritize communities over competition, because in the capitalist market space we are all competing against each other, and this is how this works. By focusing on communities and building healthy communities instead of this competition, we can take a step towards resolving this competition and making it a thing of the past. We have to prioritize communities over companies. Tech companies are very good at building something like a company community, where basically all you have to do is live and breathe for the company, and I think that's not necessarily correct. There are probably vital communities inside companies, and it's cool that they are there, but the company is not your community. The company is someone who has hired you to make money out of your labor. That's not a nice basis for a community. And we have to prioritize communities over nations. We have to be aware that other people in other states are basically fighting the same fights we do. For example, GitHub currently has a contract running with ICE, the Immigration and Customs Enforcement agency in the US, and that is an agency that puts children in cages, separates them from their families and deports them, which is a very, very evil thing to do. GitHub refuses to drop the contract, and this should be a reminder for everyone who publishes on GitHub. Please drop the contract. We are in this together, beyond national boundaries, beyond all boundaries, and we cannot do this alone. I am running out of time, so I will come to an end quickly. Before we start the Q&A session, one last thing. Collective action is called for. We have to take back the cities, and we have to take back the web. Use, for example, the resources that I will show in a second. We have to be active, because if we are not active, we will just see more of this bollocks, and I think that is a very bad outcome. I want to give the last word to Rosa Luxemburg, from around 1900: be active, be resolute and be consistent. Okay, dolphins again. That's it, you find the resources there. Applause. Yes, we still have two minutes for questions. There are microphones on the left and on the right. When the lights come on... I already see someone on the right side. Hey, thanks for your talk. And thanks for the positive points you made in the end. Very fast. But I think it's hard to keep the fun and the feeling of freedom in an internet that we lose more and more to private companies. And you made the point at the beginning that a lot of people look at Facebook and think, this is the internet. And I think this goes one step further: a lot of people look at their smartphone and think, this is a computer and this is the internet. But it's just a bunch of applications. And as long as young people get used to the internet through this, I think it is hard to break this feeling. It's hard to break. This is why it's so vitally important that we have another feeling and maybe another approach to technology. Go out there and teach people, young people, old people, middle-aged people, make the point for an alternative and try to build something positive. And this is also where I think fun can be very vital. Thanks. Yes.
On the left, there's a question. Yeah. Thank you a lot for your talk. I just want, if you don't mind, to comment on two aspects you mentioned. And the first is the notion of free speech. Because I actually think rather than talking about which speech we should allow, we should just be conscious that there's a difference between the freedom to speak and the freedom to discriminate. And there are actually plenty of really problematic opinions which can be voiced without being upfront hateful. So I think this is what this is actually about: that people go back to voicing their really problematic opinions in ways that don't directly threaten individuals. And so I think, yeah, rather than just always mentioning this free speech that's threatened, let's just make this differentiation really clear. And then the other point is about the disappearance of women from technology. There's actually something really interesting, and it's that as soon as any kind of profession becomes valued, and this value is reflected by actual pay or higher pay, it starts becoming a male-dominated profession. And I just wanted to mention that, just as you showed how, as soon as it became engineering, it became a male profession. And statistically, and this is true for, at least I know that it's true for, almost all European countries, we can really observe this phenomenon. I actually had this in my speaker notes and didn't say it, so thank you for your point. Is there any question from the internet? No, so we are at the end. Thanks a lot, a big applause for Oskar and for this valuable input.
|
Where did all the nice things go? The World Wide Web had plenty. Everyone had Neopets, Geocities and Tom from MySpace as their top friend. Where did all the women go? Computing had plenty of them. The first computers were women. This talk is going to answer these and more questions. Starting with the technical underpinnings of the internet – explained on a beginner-friendly level. It takes you through the history of the web and computing, focussing on the exclusion of women out of the work sphere. It will illustrate that the tech industry as a whole has been complicit in oppression for far too long, but also show recent examples of organised workers, trying to change things for the better. In the second part, it will use these learnings to make the case for building and promoting inclusive, accessible communities and websites. It’s time to take back the web. PSA: There might be occurrences of cats, trance dolphins and other mystical creatures of the digital sphere along the way.
|
10.5446/53040 (DOI)
|
The dire reality of the climate crisis. Quick question: who among you has been to a demonstration against climate change this year, in 2019, at least once? Show a hand. Fridays for Future or something else. I would say that's like 95, 97% of the audience. Thank you very much. And one of the Fridays for Future demonstrations that I went to was very impressive for me, because I felt that they had a very interesting balance between, on the one hand, the situation is really, really bad and we're all fucked, and on the other hand, but it's not too late, we can do something. So people are getting really scared, but also, like, constructive and motivated to do something. So it's an emotional and intellectual roller coaster. And in this talk, we're going to learn a little bit more about what the scientists say and whether it's better to freak out or be motivated. And our speaker is Hanno Böck. He's an IT security professional and he writes regularly for the media outlet Golem. And with that, I leave you with him. I hope you learn a lot. Please give Hanno a big warm round of applause and have a lot of fun with this talk. Thank you very much. Yeah. Hello. Yeah. Just adding to the introduction, about this question whether we should be freaked out or should be motivated: after the last time I gave such a talk, someone came to me and said, yeah, if I listen to you, it seems like this doesn't make any sense and we will lose anyway, which heavily influenced how I changed the talk for this time. But you will see that. So first I want to show you this graph. This is the worldwide CO2 emissions. It's CO2 only. There are other greenhouse gases, but that's kind of the big issue. And as you can see, they are mostly growing. And that's kind of the time frame in which we had something like international climate policy. So, like, we knew there was a problem. I mean, some of you were probably not even born in 1990, but humanity knew that there was a problem and we just went on. So I call this the trail of epic climate failure. But I also want to point out a few more specific things. There are kind of two points where it goes down a little bit, like notably down. There are a few dips that are just statistical variation, but there are two points where you can kind of see that emissions went down. One of them is when the Soviet Union collapsed, and the other one is the economic crisis in 2009. And I show you a few more points here, which are important events in international climate policy. So we had the UN Earth Summit in the early 90s, which was basically the start of international climate policy. The UNFCCC, which is the UN body that organizes the climate conferences, was founded there. So there the world said, yeah, we have to do something about this. And then there was the Kyoto Protocol, which was praised as the first internationally binding agreement to reduce greenhouse gases, which is not technically true, but that's what they said. And as you can see, it was very successful, just in the wrong direction. And then we had the Paris Agreement, and it just went on. So yeah. So where are we? We are currently at roughly one degree warming. And I think one part of why we are discussing this now, why this is a big topic here at the Congress, and why the topic is now part of the mainstream discussion, is that we are actually seeing climate change. So I'm sure you remember: right now it's really cold, but in the summer it was really hot, and we had several heat records in Germany.
We had several instances where it was above 40 degrees, which in the past just was not a thing in Germany. And right now we have a lot of fires in Australia, and also in Australia there were plenty of heat records in the past couple of weeks. And these are of course just two examples; I mean, there are many more. Yeah. So what is politics doing about this? So you probably know that there's this Paris Agreement, which was agreed upon by all nations of the world in 2015. And so it's an international treaty where all the nations in the world agreed to three degrees of global heating. Well, that's not what is written in it. It's probably also not what you heard about it. But technically that's what it is. So in the text of the Paris Agreement it says, yeah, we want to limit the global temperature rise to well below two degrees Celsius. And beyond 'well below', it also says we should be pursuing efforts to limit the temperature increase to 1.5 degrees. We're currently at one degree. But the problem is that there's not really any plan for how to do this. So the Paris Agreement is kind of saying, that's what we want, but yeah, whatever. How this Paris Agreement works is that there's something called nationally determined contributions, which are basically voluntary actions by nations. So nations say, yeah, we have a plan to reduce our greenhouse gas emissions, that's what we commit to. And if you add them together and estimate what the outcome will be, and this is an official UN document, so it's not from some, I don't know, radical climate scientists, this is the UN, this adds up to roughly 3.2 degrees of temperature rise. And that is if the nations actually do what they said, which usually they don't. So to give you a bit of an idea, because in the climate discussion there are often these degree numbers: currently we're at one degree above what it was before humanity started emitting greenhouse gases. Then the Paris Agreement kind of says we should limit it to two degrees at most, and it would be better to limit it to 1.5 degrees. But the actual Paris Agreement commitments add up to around three degrees. And if you have a realistic view on current policy, it's maybe more. So what is the science saying about this? There's this institution called the IPCC, the Intergovernmental Panel on Climate Change, which is an international science panel. They're not doing research themselves, but they are creating a summary of the climate research. They're trying to find the essence of what the science figured out. And so they create regular big reports, but the last one is kind of outdated by now. And they published a special report which gained quite some attention in 2018. And this was a direct reaction to the Paris Agreement, because the Paris Agreement said this two degrees, 1.5 degrees, and so they said, yeah, the scientists should figure out what it means: two degrees, 1.5 degrees, what's the difference? And there were kind of two main messages from this report. One of them is that there's a big difference between 1.5 degrees and 2 degrees. And the other message is that 1.5 degrees is still doable under some optimistic assumptions. So what does that mean, 1.5 degrees or 2 degrees? So here's a nice picture of some coral reefs. Unfortunately I have to tell you that in the future there won't be any more such pictures, because soon they will all look like this, very likely. What this report estimated is that if we have 1.5 degrees, we will probably lose at least 70% of coral reefs worldwide.
If we have 2 degree, basically almost all the coral reefs will be lost, will be destroyed. And I mean currently even 2 degree seems ambitious. It will happen much more often that the Arctic will be ice free. This is estimated to be, this is switched, sorry, that's a mistake. For 1.5 degree an ice-free Arctic is estimated every 100 years and for 2 degree it's estimated every 10 years. Then it's expected that 2 degrees will mean more sea level rise. It's a relatively minor difference here but still that matters for nations where every centimeter means more area lost. And this, I think, is one of the things that very directly impacts humans, the estimate of how many people will be affected by extreme heat, and extreme heat can also mean something like you cannot just go outside anymore without any kind of cooling support: that this will be more than doubled with 2 degrees Celsius. And always keep in mind right now we're on a path to 3 degrees. So what would be needed to achieve 1.5 degree? So the path would be roughly reduce the greenhouse gas emissions by 50% in the next 10 years and come to a carbon neutral state by roughly 2050. But lately there has been quite some discussion whether the IPCC is telling us the full story or if they are underestimating some of the effects of climate change. So many scientists are worried that the IPCC is too conservative because they are kind of trying to create a consensus of the view of the scientific community, and they also have a lot of pressure, because, I mean, you know that there are basically nations where the head of state denies that climate change exists, and these nations also fund the IPCC. So there's a lot of pressure from people who basically don't believe in climate change, and also there's this consensus view, which can lead to outliers not being considered as much, even though you probably should also have a look at what the worst outcome would be. And there's been a kind of meta-study where some scientists looked at IPCC predictions and compared them to what actually happened, and they came to this conclusion here: The evidence suggests that scientists have in fact been too conservative in their projections of the impacts of climate change. We suggest, therefore, that scientists are biased not toward alarmism but rather the reverse, toward cautious estimates, where we define caution as erring on the side of less rather than more alarming predictions. And like recently this has been picked up by the New York Times where they had this article which was a bit problematic because it kind of had this somewhat populist headline, how scientists got climate change so wrong. I mean there were scientists who were warning about this and it's been part of the process. But also occasionally you can then read headlines like this where it says climate models have accurately predicted global heating, study finds. And this was based on a study where they came to the conclusion: We find that climate models published over the past five decades were generally quite accurate in predicting global warming in the years after publication. So maybe you find that confusing. I also find these things occasionally confusing when I read, OK, climate scientists were underestimating the effects, and then it says, OK, they were very accurate actually.
But usually, if you dig down a bit, there's an explanation, and the explanation here is that actually climate scientists have probably underestimated the effects of climate change, but their models on temperature rise have been very accurate. Yeah, sorry, the wrong slide. So I mean that can both be true, right. So the prediction on the average world mean temperature has been very accurate, but that's usually not what we care about. We care about things like storms, like fires, like heat waves, like more local events which directly impact humans. And if you're more interested in that, I recently watched a very interesting interview. This is on YouTube. I have linked it here. So yeah. Yeah. Then you may be wondering, like, the climate scientists are telling us, yeah, we need to act very fast to avoid the worst outcomes of climate change, but we can still do it. Maybe you have heard something like that 10 years ago and you wonder how is that possible. And because, like, as I kind of said, the science did not get more optimistic. And the reason for that is that climate scientists recently started adding something called negative emissions into their models, which means they estimate that in the future we will be able to reverse emitting CO2. And so most of these scenarios that they are calculating where we achieve 2 degree or 1.5 degree, they are estimating that that's what we will do in the future. And so you may wonder how we can do negative emissions. We can do things like planting trees, because a tree, when it grows, it sucks in CO2 from the air and that is turned into wood. That is great, but it has very obvious limits, because the earth has only a finite amount of land and the amount of land where we can plant trees will probably get smaller due to climate change, because we will have sea level rise and we will have more heat. So we should plant trees a lot, but there is only so much you can do here. And so we need to talk about something which is called carbon capture and storage. The idea here is that we get carbon dioxide and then we store it underground. And originally this entered the discussion in the context of building new coal power plants. For example in Germany, I remember roughly 10 years ago, I was living in Karlsruhe back then and there was a new coal-fired power plant built in Karlsruhe, and some people were saying, hey, isn't that not so good for the climate, and then some people said, yeah, that's no problem because we already plan to use CCS on that plant in the future. That never happened, but that's what they said back then. And this is not just happening in Germany, basically CCS for coal or gas plants has largely been a failure. So today there is only a handful of projects running and the impact is minimal. So it's really small and most of the projects that are running are in the context of something which is called enhanced oil recovery. And what they are doing there is they are pumping CO2 into old oil fields and can squeeze a bit more oil out there. Now you maybe get the idea that that's maybe not the best thing for the climate as well, because that means more oil. But in order to get to negative emissions we need to do something different. And one way is you could imagine doing bioenergy, like for example using biogas plants or burning wood, and then coupling that with CCS. So, I don't know, for example you burn wood and then you capture the emissions and then you store them underground.
Now this obviously has all the problems that you usually have with bioenergy, that is, it competes for land with food, and if you use pesticides and fertilizers that has a climate impact as well. And if you plant, I don't know, palm oil where there was previously a rainforest, then that is really terrible for the climate. So that's not a very good solution, and basically the discussion is moving away from that because people realize the effects of bioenergy are so negative that it's probably not a good idea to do that. Another thing you could do for negative emissions is called direct air capture, which means you run a big machine that sucks in air and extracts the carbon dioxide, and then you store that underground. That technically works, so there are a few startups working on that. But these machines themselves will of course require a lot of energy, and it is questionable how realistic it is to scale that up to international levels, and I'll get to how much energy we are actually talking about here later. And obviously the energy that you use for that needs to come from something like wind or solar, because if you power it with a coal-fired power plant that does not make any sense. So one criticism here is that the IPCC in their optimistic scenarios largely rely on technology that does not really exist at scale. Like there are test installations, but they have very optimistic outlooks on how fast this can be scaled up. And even if the technology kind of works, you wonder how this should work economically and politically, because for someone to run a negative emission plant, there is no profit in that. So you would probably have to have it state funded, and then there is the question of who pays for that and why a country should do that. You basically have the same problems you have with climate policy, with reducing emissions. Another issue with the science is this discussion about so-called tipping points and feedback loops, and this is probably the most worrying criticism of the current climate science. So we have a lot of feedback loops in the earth system, which is, to put it short, when we have warming and that causes even more warming. So an example for this is when ice is melting, because as you can see on this picture, ice is very bright but water is very dark. That means if there is sunlight shining on ice then it is reflected, a lot of it. And if the ice melts then the sunlight is shining on the ocean and that means less energy is reflected. So this means if we have melting ice then we have more warming following that. This is called the albedo feedback loop. And these tipping points: you call something a tipping point when you have a system that at some point turns into a process where something collapses, even if we stop further emissions. And one example for that, where it is generally assumed by science that this is already happening, is the West Antarctic ice sheet. So the assumption is that even if we stop emitting CO2 now and the planet is not getting any warmer, then this ice sheet will completely melt down. And that in the long run will mean 1 to 3 meters of sea level rise just from the West Antarctic ice sheet, and there are probably more ice sheets that will melt. And there is a risk that these feedback loops and tipping points could cause a cascade, where some effects cause more warming, that brings some other processes in motion, and that causes even more warming.
And there has been a study in 2018, which is kind of known as the Hothouse Earth study, where they looked at various of these tipping points and feedback loops and how they interact. And they came to the conclusion that even with just two degrees of warming it may be that we end up in such a scenario. Now I want to add some caveats here. One is that there is significant uncertainty. So when scientists say this may happen with two degrees, that doesn't mean they are sure about it. That means they are worried that this could happen. And also these are long term effects. So we are talking here about hundreds or thousands of years. So this is probably not something you will experience or I will experience. So yeah, what would need to happen? Obviously we need to stop burning fossil fuels. So we really need to get rid of something like this. This is the Jänschwalde opencast mine. It's near the border to Poland. I recommend everyone that you go to one of these opencast mines and look at it, because it really kind of looks like a nightmare scenario. There's one very close here. It's roughly half an hour by train. So, I don't know, maybe after the Congress you want to see an opencast mine. One kind of good news here is that it's probably much easier to build renewable energies than we thought in the past. I really like this graph. This is from a Dutch researcher called Auke Hoekstra. And what he did: the black line is the worldwide installations of solar energy. And the colorful lines here are the predictions of the International Energy Agency. So what you can see here, like, the International Energy Agency kind of always assumes that solar power is more or less flat, even though it's growing exponentially. And they notice that they have to move their starting point, but they don't really seem to notice that they are not getting the trend here. And that also means a lot of people are relying on these reports from the International Energy Agency. So we can probably be more optimistic about the outlook of renewable energy than what is usually assumed by governments. So what usually the big plan is for reducing greenhouse gases is, yeah, we should switch to carbon-free electricity and then we should use electricity for everything. So just that you get a bit of an idea, this is how worldwide electricity production looks. The big thing is coal. The green thing and the blue thing, that is renewable energy. Hydro is the biggest one, because humanity has been using that for a long time. But the green thing is growing rapidly. And I mean, I can easily imagine that we manage to solve this and get the green thing to cover everything with the growth rates we currently have in renewable energy. But what worries me a bit is thinking about how much electricity we will need in the end. And just to give you some numbers, we currently have roughly 26 petawatt hours per year worldwide electricity production. And that is only a small share of overall energy production. That is 160 petawatt hours, which includes everything from heating and cars and airplanes and industry. So somehow we need to turn all that into renewable electricity use. It will get more efficient in some situations. For example, an electric car is more efficient than a gas powered car. But still this is a lot. And then there are things like, if we want to run the petrochemical industry, so plastics, chemicals, on renewable energy, we can do this by using something called carbon capture and use, or also called power to X.
There was a talk on that at the camp. If you want to watch that, that was very good. But there's been a study that if you want to run the current petrochemical industry on renewable energy, it would be something between 18 and 32 petawatt hours per year. So that is kind of, we would need all the electricity we have today worldwide, or even double that, just to replace the petrochemical industry. And we talked earlier about these negative emissions. This is a number from the IPCC report where they estimated that we might need 43 petawatt hours per year by 2100 just for negative emissions, just to remove the CO2 that's already in the atmosphere and that's causing harm. And like in a talk here two days ago by a scientist, he said, yeah, that sounds crazy, but we will do that because climate change will be so bad. So looking at that, I mean, you can easily see we will probably not just need the electricity we generate today and make that with green electricity, but probably multiple times that. And where should all the electricity come from? I have a hard time imagining that. I mean, the good news is renewable energy is not really limited. We have, if you think about putting solar plants in the desert or using offshore wind energy, there's plenty of space, but it's obvious that this is really, really challenging. And we're not just talking about energy. There are some things that are even harder. So this is a cement plant. Who knows how cement is made? A few, yeah. So what you do when you make cement is you're using limestone, which is mostly calcium carbonate, and then you burn it with a lot of heat. And then what you get out is calcium oxide and CO2. So the thing in this formula is that you can see there's CO2 on the right side, and that is part of the chemistry: it does not come from the energy we need to make cement, but the chemistry to make cement itself emits CO2. And there is not really any technology to avoid that, except using something like CCS. And this alone, these cement emissions, the chemical cement emissions, that is 5% of the worldwide carbon dioxide emissions. So the only plan really to do something about it would be either using CCS or using these direct air capture plants to later get the emissions back out of the air. Then here's another, not a technology, but something that humans do, is that they keep cows. Cows also cause emissions, methane. And overall emissions from livestock are roughly 15% of the worldwide greenhouse gas emissions. And also for that, there's not really a solution. I mean, you can replace parts of it, like running a tractor on electricity, but for these methane emissions there is not really a solution. So maybe the solution is this. This is an Impossible Burger. I read yesterday in a big German news publication that they are unhealthy, which is, I would say, like, you're missing the point. That's not what this is about. I don't eat these things, but I really hope that this is becoming successful, because we need to do something about this. So maybe my point here is, because very often environmentalists tend to say, yeah, we have all the solutions and we just have to do it, where I would say, yeah, we have the solutions, but this is not going to be easy. And then there's this thing called geoengineering. So geoengineering is when you wonder, like, can we do something to impact the climate system to counteract the warming from the greenhouse effect?
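A quick aside before the talk turns to solar radiation management: a small back-of-envelope sketch in Python of why the electricity figures quoted above add up to a multiple of today's production. The numbers are the ones mentioned in the talk; the way they are combined here is my own rough illustration, not a calculation from the slides.

```python
# Back-of-envelope arithmetic with the figures quoted in the talk.
# All values in petawatt hours (PWh) per year; ranges are kept as (low, high).

current_electricity = 26              # worldwide electricity production today
total_energy = 160                    # worldwide total energy use today (heat, transport, industry, ...)

petrochemistry_power_to_x = (18, 32)  # running the petrochemical industry via power-to-X
negative_emissions_2100 = 43          # IPCC estimate for negative emissions by 2100

low = current_electricity + petrochemistry_power_to_x[0] + negative_emissions_2100
high = current_electricity + petrochemistry_power_to_x[1] + negative_emissions_2100

print(f"Just electricity + power-to-X + negative emissions: {low}-{high} PWh/year")
print(f"That is {low / current_electricity:.1f}x to {high / current_electricity:.1f}x "
      f"of today's electricity production, before even electrifying heating and transport.")
```

Even this incomplete sum, which ignores electrified heating and transport entirely, already lands at three to four times today's electricity production.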
And the most plausible thing here is something which is called solar radiation management, where you're trying to reflect more sunlight from the earth. And one way to do that is to put aerosols into the atmosphere, which is kind of understood how this works, because volcanoes also do that. So if there's a big volcano eruption, this will temporarily cause a cooling effect on the earth, because these aerosols in the atmosphere reflect more sunlight. And this is currently not widely discussed. And the IPCC, for example, explicitly excludes this from their scenarios. One reason for that is that, yeah, I'll get to that. So usually you would say, yeah, like, blasting plenty of chemicals into the atmosphere, it sounds like a pretty crazy idea. But on the other hand, if we're talking about a situation where we have a planet where large parts become uninhabitable, then probably we will need to have a discussion about this. But the big issue here is basically that some people may think, yeah, okay, we have this geoengineering, we can just blast a few chemicals into the atmosphere, and then we can go on like we do. Which is not really plausible, but some people get this idea. And that is also one of the reasons why climate scientists are not really happy discussing that, which I don't think is a good outcome. I think we should discuss that. I'm not saying we should do that, but I think we should discuss whether that would be a reasonable thing to do or not. Yeah. I think one issue is that there are a lot of very wrong ideas about the climate problem circulating in the population. So I found this interesting. There was a survey where people from Germany were asked what they think is the most effective thing to reduce their personal carbon dioxide emissions. And the thing that was named most was avoiding plastic bags, which, like, has a really small impact compared to one flight or whatever. This is basically nothing. And this was completely disconnected from the real impact. So people have completely wrong ideas about what matters and what doesn't matter. And this came to the conclusion that excessively overestimated were avoiding plastic bags and also regional food, and excessively underestimated was eating meat, which has a relatively high impact, but people don't like to hear that. Then I think something which I would like to frame as: efficiency is a lie. There's this naive idea that if we have technology that's more efficient, that's a really good thing, because then we will use less energy and have less emissions. But this is usually not what is happening. And I want to show you this. This is an old car from, I don't know, the 80s, 90s, a Volkswagen Golf, and a modern Volkswagen car. And the modern car is twice as heavy and it needs roughly the same amount of fuel. So I mean, that's really, really efficient. Isn't that great? So I mean, that's an amazing increase. We doubled the efficiency. But it's just not helping the climate if we have cars twice as large, and as you know, that is kind of becoming the new norm right now. And we have more cars as well. Here's another nice graph. This is from the Lufthansa sustainability report, where they say what nice things they are doing to become more green, and here they are saying they're decoupling the transport performance and fuel consumption, which I think is not really what the word decoupling means. What they are saying here is, yeah, look, we're transporting more people and, I mean, okay, our fuel consumption goes up, but it doesn't go up as fast.
This is kind of trivial. I mean, you assume that technology gets better. You don't assume technology gets worse. But still, this is not helping. Like, the emissions are still going up. And this is discussed under many different names. One is the Jevons paradox, where a researcher back in very early times figured out that even though the use of coal got more efficient, people were using more coal and not less. It's also named the rebound effect. And I think this should be core in basically every discussion about solving climate change with technology, but it's often just ignored. And the message here is, like, efficiency. I mean, I'm not saying efficiency is bad, but I think efficiency does not automatically reduce emissions and it may even increase them in some situations. And some people, I mean, if you follow the political debate in Germany, then there are some parties who are saying, yeah, we should solve the climate problem with innovation and technology. And like, I'm not against technology. No, I think we need technology, but I think it's simply not plausible to assume that by innovation and technology alone we will solve this problem or even have any meaningful impact. So the situation is not very optimistic. But as I said earlier, after my last talk, someone came and said, this sounds like there's nothing we can do and it's all fucked anyway. And I feel sometimes there's a thin line between saying how bad it is and then saying something which is not really supported by the facts. And there was this article, who has read this? Yeah, a few. So this was in the New York magazine, from a journalist called David Wallace-Wells, which was painting a very dire picture of what the effects of climate change will be. And he also wrote a book later on that. This article was based on something called RCP 8.5. And this is one of the climate scenarios from the last IPCC report. So this is the worst emission scenario in the last IPCC report. And sometimes this is referred to as the business as usual scenario, where the assumption is emissions just keep growing. And the scenario assumes that by 2100 we will use roughly six to seven times the amount of coal that we do today. How they come to these high numbers is not just the normal growth, but also they assume that at some point maybe oil will become more scarce and then we will start turning coal into oil. This is technically possible, but I mean, this scenario, it's not something that is impossible, but it looks rather unlikely. And I think if you talk about something like this, you should put it into perspective and not paint it as the likely outcome. And these discussions about RCP 8.5 recently led some scientists to do a calculation where they took the current predictions from the International Energy Agency, which, as I said earlier, are not exactly optimistic about renewable energy, and they ended up with the result that this would probably lead to something like three degrees by the end of the century, but with a big uncertainty range. So this could also be up to 4.4 degrees if the effects are more severe than we currently assume. So the thing is, I find this really tricky: how to say this is implausible, but I really don't want to downplay the risks. So the kind of takeaway here is there are some extreme scenarios that are rather unlikely, but even if you take a plausible scenario, that could lead to really, really bad outcomes.
And also one thing that is often making these things confusing is that most of these scenarios end at 2100, but of course the world does not end at 2100. And when we have these discussions about tipping points, we're usually talking about much more long term things. Then this made a bit of buzz on Twitter lately. So there was a report by the International Monetary Fund where it had this sentence: there's growing agreement between economists and scientists that the tail risks are material and the risk of catastrophic and irreversible disaster is rising. So far not surprising, but then it says, potentially infinite costs of unmitigated climate change, including, in the extreme, human extinction, which sounds like quite a severe claim. And some people were like, why is the media not reporting on this? But I looked into it and I was not super convinced. As you see, there's one source cited, which is from 2009, which is a bit old, and I looked into what kind of study that is. And this was a study published in an economics journal, and it was kind of modeling what it would mean when you try to find an economically optimal path and when you have a risk that's not very likely, but very extreme. But it was not really saying anything about the question of human extinction. So I felt this was not really justified to have that sentence there. And I really don't want to be dismissive about these extreme scenarios, because honestly, at three or four degrees Celsius it's really hard to predict what that means. And I would not say that we can confidently rule out that this will mean something like human extinction in the long run. But whenever I read something like this, I try to look up the sources and very often not much of substance comes up. Does anyone know this guy? Okay, so you're not around on the crank sides of the internet. So this guy is called Guy McPherson, and he thinks climate change will kill us all within the next decades at the latest, and there's nothing we can do about it. The good news is this is not supported by the science. There's also a web page called Arctic News, which is kind of in the same realm. This is a blog written by someone called Sam Carana, which is probably a pseudonym. And he has some theories here where he says, yeah, maybe we have a warming of 10 degrees by 2021. We will see, probably not. He also has a plan how to stop that. So he's more optimistic than the other guy. But yeah. One of the theories that a lot of this is based on is something we could call the methane clathrate bomb. So methane clathrates, also called methane hydrates, are frozen methane at the bottom of the ocean. And the idea there is that if this all suddenly melts and escapes into the atmosphere, this could cause a sudden climate change where we have very extreme effects in a very short term. There have been a few scientific papers about that, but it is generally believed that this is simply not possible from the physics. So to be clear, these methane clathrates, this is a real concern, but this is a long-term concern. This is one of the feedback loops I was talking about earlier. But it's not plausible that something like that will happen in a very short time frame. And so if you have these doomsday predictions, they are usually a mixture of using something very speculative and saying that this is what's going to happen, or massively overstating some effects, and also very often confusing long-term and short-term effects.
Now you could say, okay, this is ridiculous, there are a lot of cranks on the internet who say all kinds of weird things. But some of these things are going into the mainstream. So there was a paper called Deep Adaptation. Has anyone read that? A few. And this got a lot of media attention. There was this article on Vice, I think this made it kind of famous: the climate change paper so depressing it's sending people to therapy. It was rejected by a scientific publication due to its poor quality, and it was heavily citing the people I just showed you earlier who have these crank theories about very short-term climate effects. There was also this article, I have to hurry up a bit, in the New Yorker, where someone was arguing that we're at a point where there's nothing we can do anymore. Where he said, the consensus among scientists and policymakers is that we'll pass this point of no return if the global mean temperature rises by more than two degrees Celsius. This is not true. I know where this is coming from. There are scientists who are saying they are worried that with two degrees something like this might happen. But saying that you think something might happen is very different from saying there's a consensus among the scientists. And the thing is, for one, with these predictions I want to say things that are correct, just in the spirit of science. But also I think these predictions are problematic because you can come to this conclusion that it doesn't matter, it's too late to do anything about it. And that leads to this strange situation where, through very different arguments, you can come to the same conclusion: either there are people who are saying, yeah, climate change is a hoax, it's not happening, or it's not caused by humans, or it's a good thing; or on the other hand, you will have people that say there's nothing we can do about it, and you come to the same conclusion, no change is needed. And like, climate change is not a yes or no thing. It's not a binary thing. We cannot avoid climate change. It's already happening. But there's almost no imaginable scenario where reducing emissions now is not in the long run improving the situation. So there's this idea that with these tipping points and feedback loops, we will have this one point where it doesn't matter anymore. But this is not true. So if you remember, many of these feedback effects will happen over a very long time, particularly melting of the ice sheets. These are effects unfolding over thousands of years. And even if you imagine a scenario where humanity needs to evacuate large areas due to sea level rise and heat, which obviously would be terrible, and where I imagine that we would need to discuss geoengineering or negative emissions and deploy these technologies, it will almost certainly make a difference if we have several decades for this or several centuries. So in summary, yeah, the situation is really bad. But still, it absolutely matters what we do about it. So for the new year, I ask you to think about what you can do to stop this. Yeah, thanks. Great. Thank you very much. You actually landed right on time. So that also means, unfortunately, that we do not have time for questions and answers. Anyway…
|
Heat waves, wildfires and new movements like Fridays for Future and Extinction Rebellion have brought the climate crisis back into the news. If the world continues to emit greenhouse gases like today, the world will heat up by at least 3 degrees Celsius, potentially much more. Yet despite increased attention for the climate crisis no political action even remotely sufficient to tackle the problem is in sight. The talk will give a brief overview of the state of the science of climate change, the politics and the scale of the challenge. It'll also give an overview of some - more or less scientific - worst case predictions that sometimes lead to claims that it's just "too late to do something", which may become a new form of justification for not doing anything.
|
10.5446/53050 (DOI)
|
Virtual machines. This is a hardware talk and it's in English; unfortunately there is no German translation, we assume your English is good enough. Virtual machines are all the rage, for many reasons, for security and practicality. But the thing is, if you run a VM at a provider, the provider can still look into it. That's not such a good thing. Here is Janosch, waving his hand. Claudio is from Italy, but he has been in Germany for 10 years. They come from Chaos West, I think they are based in Stuttgart. They will talk about the challenges of virtualization, not just of VMs, but of protected VMs. So what are protected VMs? They already told me that we will first have a look at the basic definitions and then at how protected VMs are implemented on their platform, s390, which you may know better under another name. Claudio here is a KVM and QEMU developer for s390, and he gave another talk on this topic, which you can watch on YouTube, from KVM Forum 2019. And this is Janosch. He is the KVM s390 maintainer and also a KVM developer. He has also given other talks at that forum, but not on this topic. Okay, here we go. So, protected virtualization, or secure virtualization, means virtual machines that are not observable and not alterable. The hypervisor cannot look into the memory the guest lives in, and it cannot look into the guest's state. And that brings us many benefits. It means we can protect against malicious attacks when we host in a cloud environment next to others. We can protect against a malicious operator, and even if the operator is not malicious, the host might get compromised, and then the attacker would get at the VMs as well. And then there is the big thing, for banks, which is compliance. [unintelligible] The typical parts of a server are a CPU, memory, some form of boot image, persistent storage, other devices like network and IO, and for VMs a big part is also host memory management, swapping, migration and so on. So how do we protect all of that? For the memory and for the CPU we can use encryption. That is basically what AMD SEV does: they have different keys for memory encryption for the hypervisor and for the VM. When the hypervisor reads VM memory, the data is decrypted with the hypervisor key, not with the VM key, so it is basically garbage. The same applies when it writes to the VM. Then we have hardware-assisted access protection. That is one of the key things we have, because it also protects us against write accesses. Then the hypervisor can not only not read from the VM, it also cannot corrupt the VM.
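To make the per-owner encryption keys described above a bit more concrete, here is a small toy model in Python. It is not how AMD SEV or the s390 firmware actually implement this (real hardware does the encryption transparently in the memory path with proper ciphers); the keystream construction below is only an illustration of the effect that reading a page with the wrong owner's key yields garbage.

```python
import hashlib

def keystream(owner_key: bytes, page_addr: int, length: int) -> bytes:
    """Derive a per-owner, per-page keystream (toy construction, not real crypto)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(owner_key + page_addr.to_bytes(8, "big") +
                              counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def write_page(plaintext: bytes, owner_key: bytes, page_addr: int) -> bytes:
    """Data is stored encrypted under the key of whoever owns the page."""
    ks = keystream(owner_key, page_addr, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

def read_page(stored: bytes, reader_key: bytes, page_addr: int) -> bytes:
    """Reads are decrypted with the *reader's* key."""
    ks = keystream(reader_key, page_addr, len(stored))
    return bytes(c ^ k for c, k in zip(stored, ks))

vm_key, hypervisor_key = b"vm-secret-key", b"hypervisor-key"
stored = write_page(b"guest secrets live here", vm_key, page_addr=0x1000)

print(read_page(stored, vm_key, 0x1000))          # the guest sees its own data
print(read_page(stored, hypervisor_key, 0x1000))  # the hypervisor sees only garbage
```

Running it prints the guest's plaintext for the guest key and random-looking bytes for the hypervisor key.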
We also need mapping protection, so that the hypervisor cannot simply remap a page to a different address; otherwise, if the hypervisor knows the content of the pages, it could alter the execution in the VM. Then we can also have integrity protection of the data, which is not completely the same as access protection, because if we think about swapping, we also have to make the data readable at some point. But when it comes back in and the VM continues, integrity protection means that what the VM gets back is still the same as what was there before. All of these can and should basically be combined to get maximum protection. Now to the boot image. For the boot image and for booting in general, we have the TPM. We can have encryption, which helps us with secrets, SSH keys, LUKS passwords and so on, and we can have remote attestation, where a VM boots and is measured, so a hash is generated, the hash is sent to a verification server, it is checked there whether it is the expected boot image, and then the VM is allowed to continue. Attestation is really nice, but the ability to encrypt the boot image and to check its integrity with just a hash is good, because it reduces the complexity of the attestation stack. IO is a bit of a problem, because for IO you have to share VM memory with the hypervisor. If you look at how IO works, it all goes through VM memory. So everything the VM sends and receives has to be encrypted. You have to have full disk encryption, you have to use HTTPS, SSH, encrypted protocols, except for things that are really benign, boot messages or just error output over the standard console. The VM cannot and should not make assumptions about the behaviour of a device, but these days that is often a problem, because that is just normal behaviour today. In the future we may have dedicated hardware that provides a protected channel between a VM and the hardware; then the device can communicate with the system in an encrypted way. But I am not sure whether any hardware devices support something like that yet. Swap and live migration are a completely different beast for protected VMs: you cannot just read the memory, you have to bring the pages into an encrypted form for migration, and then you have to bring them back in again in a protected form. That is another challenge in terms of complexity. [unintelligible] On the mainframe, we have an instruction for that. So, let's have a look at what the Ultravisor is and what it does. The Ultravisor sits between the hypervisor and the other guests and handles the interaction between the guests and the hypervisor. It does a lot of work. Here is a little table that shows what we trust and what we do not trust. We do not trust the hypervisor, as we said. We do not trust the normal guests, because from our point of view they are the same as the hypervisor. And we do not trust the guests that are being protected either; they only trust themselves.
We do not know how they behave. That is a problem; we do not know that they are not trying to do something fishy. So we do not trust the guests either. The only thing we really trust is the Ultravisor, which is trusted system-wide. That is the only entity that is really trusted. Here is a diagram that shows what the interactions between the different parties look like. The normal guests are not trusted. The hypervisor can read the normal guests. The normal guests in theory cannot read the hypervisor, but they could: if there is a bug, or a malicious hypervisor, nothing stops that. On the other side, you can see that the protected guests only have their own memory, and they cannot access anything else. The Ultravisor can access everything; that is the only trusted entity, and it can do everything. And nobody can access the Ultravisor memory. The hypervisor cannot access the protected guests, but with an asterisk: the guest can share parts of its memory with the hypervisor when it needs to. [unintelligible] But never other guests, never other protected guests. The guest memory and the CPU state, so the blocks of memory that in general contain the guest or the guest CPU state, are only accessible by the Ultravisor. So nobody else can access them, read them or write them. The host memory is never accessible from protected guests. This means that, for example, a protected guest has a very hard time breaking out of the box, because a protected guest cannot in any way access the memory of the host. The guest-to-host mappings are protected against remapping. So if the hypervisor, because of some kind of bug or malicious behaviour, maps a guest page to a different guest address, that will not work. I put the guest-to-host mappings in parentheses here. There is also emulation data that is needed and that is checked: the bits that are needed for the emulation of some instructions are checked before they are used. The hypervisor cannot inject addresses or bits that are not allowed by the architecture. Not everything is perfectly checked, but that at least prevents the worst attacks. So what is left in the hypervisor? The hypervisor still has to do some kind of IO. If it is a disk access, the hypervisor has to fetch the data from the disk and provide the device model. That is probably virtio, but it does not have to be. Then there is of course scheduling. From this point of view it is just a VM; in KVM it is just a process that has to be scheduled. That means the host may not always schedule a specific VM. That is a denial of service, but that is unavoidable.
Because, as Janosch said, you can always just pull the plug. There is nothing we can do about that. Then there is housekeeping for some instructions. Some instructions need housekeeping by the hypervisor; they are handled by the firmware or the hardware, but the hypervisor still has to know about them. And finally, some instructions need to be executed by the Hypervisor. For example, IO-Instructions, not just those, but in particular IO-Instructions are handled by the Hypervisor. So, let's have a look at the life cycle of a secure guest and a secure host and see how this looks now in this complex scenario. So, first the guest boots, and it boots in normal standard non-protected mode. And then it loads an encrypted blob into memory, which is the actual protected boot image. Although the second step can be skipped in some cases, like if you load the kernel image with a QEMU command line parameter and it's already in memory, so you don't need to load it. Details. And then the guest performs a reboot into secure mode. Basically, the guest asks the Hypervisor to reboot, and as a boot device for the reboot, it specifies this blob. At this point, the Hypervisor will call the Ultravisor and say, hey, I need to create a secure VM, and I need to create this many secure CPUs, and then it will basically pass this blob from the guest directly to the Ultravisor. The blob possibly contains all the secrets of the protected guest. So, of course, the Hypervisor will not be able to do anything with it. So, this is just passed directly to the Ultravisor, which is the only entity that is able to actually make sense of the blob. The blob also contains some other configuration parameters, some other keys that are used for some other purposes. And at this point, each page of the image is then, we say, unpacked. Basically, the Hypervisor asks the Ultravisor to decrypt the page, so the page is first made secure, so it's made inaccessible, and then it's decrypted. So, the Hypervisor will never be able to see anything inside. Once that's done, we have a boot image in memory that is now ready, it's been decrypted, and the Hypervisor just continues execution of the guest. It simply needs to use a different format for the CPU block, because now it's not a normal CPU, now it's a secure CPU, so some things are different, necessarily. So, what do we normally have in the CPU block: CPU flags, a program counter, some registers, including all the special registers, some timer information, all the interception data, basically the reason why there was a VM exit and the instruction that caused it, some Hypervisor control flags that alter the behavior of the virtual machine. Of course, this is not something that you want to have in a secure, in a protected virtual machine. So, we do it differently when the vCPU block is for a protected CPU. So, first of all, when a protected CPU starts, the state is not taken from the block of memory that is in the Hypervisor storage, but is taken from one of these Ultravisor-reserved areas, because it needs to be protected. Except for a couple of bits, we have a couple of bits that are always copied back into the Hypervisor, because, for example, interrupt management, the Hypervisor needs to know when specific CPUs are enabled for interrupts, because otherwise it cannot inject an interrupt.
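As a small aside on the state handling just described, the sketch below is my own toy model in Python; the field names and structure are made up and are not the real s390 state description layout. It only illustrates the idea that a fixed, architecture-defined whitelist of bits is copied between the Ultravisor-held CPU state and the block the hypervisor can see, and that anything else the hypervisor writes is ignored.

```python
# Toy model (my own sketch, not the real s390 state description) of the idea that
# only a small, fixed set of fields is ever copied between the protected CPU state
# kept by the Ultravisor and the state block the hypervisor can see.

PROTECTED_STATE = {
    "program_counter": 0x2_0000,
    "registers": list(range(16)),
    "interrupts_enabled_classes": {"io", "external"},   # needed by the hypervisor
    "interception_code": "io_instruction",              # needed by the hypervisor
}

# Fields the architecture allows to flow out to the hypervisor.
SHARED_OUT = {"interrupts_enabled_classes", "interception_code"}

def sync_to_hypervisor(protected_state: dict) -> dict:
    """Copy only the whitelisted bits into the hypervisor-visible block."""
    return {k: protected_state[k] for k in SHARED_OUT}

def sync_from_hypervisor(protected_state: dict, hv_block: dict, allowed: set) -> None:
    """Accept hypervisor input only for expected fields, ignore everything else."""
    for key, value in hv_block.items():
        if key not in allowed:
            continue  # the Ultravisor simply ignores fields it did not ask for
        protected_state[key] = value

hv_view = sync_to_hypervisor(PROTECTED_STATE)
print(hv_view)  # registers and program counter never show up here

# A buggy or malicious hypervisor trying to patch the program counter gets ignored:
sync_from_hypervisor(PROTECTED_STATE, {"program_counter": 0xDEAD}, allowed={"interception_code"})
print(PROTECTED_STATE["program_counter"])  # unchanged
```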
In some cases, some more information is needed. So, let's say an IO instruction, there are some small memory blocks that contain some information, there are some registers that contain some important information that needs to be shared with the Hypervisor. So, the Ultravisor will actually, on a case-by-case basis, copy this information back to the Hypervisor, but only exactly those bits that are strictly needed by the Hypervisor. And if needed, some information is copied back from the Hypervisor into the guest, and these bits are also checked. Again, only for those instructions where the Hypervisor is expected to provide information to the guest, and those bits are also checked for architectural compliance to prevent the Hypervisor from playing any tricks on the guest. That's not all. So, the Instruction Text is not the actual text of the Instruction. It's a normalized version. So, every time the Hypervisor will see the instruction with the same registers. Of course, the values in the register will be in the right place. The Ultravisor will take care of putting things back afterwards. There's a new Interrupt Injection API, because normally we just read from memory and write back in memory to perform an Interrupt Injection, but that's not possible, so this has to be done through the Ultravisor. And of course, new Interception codes, new reasons to get out of the VM. For example, if some instructions need to be interpreted in a secure way, or if in some cases some instructions have been already executed, but still the Hypervisor needs to be notified about it because it needs to take care of some housekeeping. So, apart from these new Interception codes, there is interestingly less to check, because the Ultravisor will take care of most checks. And there is a Secure Instruction Data Area, which is also something that normally is not there. Some instructions have some small buffers that are usually just accessed by the Hypervisor. In this case, they cannot be accessed, but since they are small, it's also hard to use bounce buffers for them. So what happens is the Ultravisor will just copy those into a specific page, this Secure Instruction Data Area, and in case it's needed, it would be copied back into the guest. This is, for example, used for console data, serial console and stuff like that, or boot messages. So, as I said, Interrupt Injection needs to be done differently. This is done through the State Description. So, before running the VM, the Hypervisor sets some bits to tell the Ultravisor, please, when you start, inject these interrupts with these parameters. Of course, the interrupts are not always allowed. Only a few program interrupts are allowed to be injected. These are exceptions, and these are only allowed when they are expected. So you cannot just randomly inject an invalid instruction exception or a page fault. That's only allowed when the instruction allows for that to be injected. And of course, you can never inject interrupts into the VM if the VM has not enabled those classes of interrupts. And this is why the Ultravisor needs to give those few bits to the Hypervisor all the time, about which interrupts are enabled. Swapping is interesting, because this breaks everything. Now, the Hypervisor needs to read the memory and save it to the disk, and then put it back when needed. So, how we do this: we export the page, which means the page gets encrypted by the Ultravisor, and then unprotected, so made available. This is initiated by the Hypervisor.
The Hypervisor asks the Ultravisor to export the page. At this point, it's encrypted and readable. At this point, it can be written to the disk. But also a hash is saved somewhere in a protected memory area, so that we know, once we swap the page back, we can check if the content has changed, because we have to guarantee integrity. At this point, the pages can be swapped. So, the Hypervisor can swap the page to the disk and use that memory for something else. Once the page is needed again, the Hypervisor needs to swap it back. So, okay, reading from disk, you have the encrypted page in memory. At this point, the Hypervisor asks the Ultravisor to import the page. At this point, it's made secure and it's decrypted. And the integrity of the page is checked. If the check wasn't successful, the page is not imported. Otherwise, it is, and the guest can continue. If the check was not successful, the page is not imported, which means that basically the guest cannot continue execution, so that's it. But that counts as a denial-of-service attack. We said before, there's nothing we can do about it. So what does the guest have to do? Basically, it needs to start, boot, check if protected virtualization is available. If yes, load the blob and do this reboot. This is done basically at the bootloader stage. What the kernel has to do actually is quite simple. Check if it's running inside a protected guest. If so, set up the bounce buffers to be able to perform IO and then use the bounce buffers. The changes in the guest are really minimal. By the way, the changes to the guest have been upstreamed already. The rest is still a work in progress. So, let's have a comparison of how this behaves and what the characteristics are in comparison, for example with SEV from AMD. SEV is already there, so you can already use it, while protected virtualization is not there yet. Apart from that, you can read the state of the CPU, so you can read the registers, unless you have the ES extension, which I think the newest CPUs have. Whereas with protected virtualization that is never possible. You can always read the memory with SEV, but it's encrypted, so it's not an issue. Whereas with protected virtualization, it's only readable when it's shared. And same for writing, it's only writable when it's shared in protected virtualization, whereas on SEV it's always writable, unless the SNP extension is enabled, which I think has been presented last month for the first time. I don't think there are any CPUs with that available yet. On the other hand, swapping is not supported yet with SEV, it's supported with protected virtualization; migration is supported with SEV, and it's not supported yet with protected virtualization. Go! What can we get out of this? Protected virtual machines mean the hypervisor cannot access the state, cannot read or write the state. Ideally, the state is tightly protected, and the boot image is protected as well. This is the basic idea behind this. What still needs to be done is to make sure that the IO data is not in the protected state. So, in general, making memory accessible requires bounce buffers. If you don't use bounce buffers, then you need to make memory accessible and not accessible on demand, which is terrible for performance. Swapping pages can be hard. For example, AMD doesn't even bother to do any swapping, so migration also can be very hard. Go! If you have any questions, you can find us around. Don't hesitate. These are our DECT numbers.
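A small toy sketch in Python of the export/import flow described above (my own illustration, not the real Ultravisor interface): the hypervisor may hold the exported, encrypted page while it is swapped out, but the integrity hash kept in protected memory means it cannot change a single byte without the import being refused.

```python
import hashlib, os

# Toy sketch of export/import with an integrity check. Export: remember a hash of the
# encrypted page in protected storage and hand the page to the hypervisor. Import:
# refuse the page if its content changed while it was swapped out.

class ToyUltravisor:
    def __init__(self):
        self._integrity = {}                      # per-page hashes, kept in protected memory

    def export_page(self, addr: int, encrypted_page: bytes) -> bytes:
        # (the encryption itself would work like the per-owner-key sketch earlier)
        self._integrity[addr] = hashlib.sha256(encrypted_page).digest()
        return encrypted_page                     # hypervisor may now write this to disk

    def import_page(self, addr: int, encrypted_page: bytes) -> bytes:
        if hashlib.sha256(encrypted_page).digest() != self._integrity.pop(addr):
            raise PermissionError("integrity check failed, page not imported")
        return encrypted_page                     # would be decrypted and made secure again

uv = ToyUltravisor()
page = os.urandom(4096)                           # stands in for an already encrypted guest page
swapped_out = uv.export_page(0x3000, page)

assert uv.import_page(0x3000, swapped_out) == page     # unchanged page is accepted
uv.export_page(0x3000, page)
try:
    uv.import_page(0x3000, b"\x00" + swapped_out[1:])  # tampered page is rejected
except PermissionError as err:
    print(err)
```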
The current hypervisor patches are on the mailing list, on the KVM list and the Linux s390 list. They are in active discussions. And this year at the KVM Forum, we've seen that there has been a lot of discussion between the architectures. AMD came out with SEV as the first ones. Some others will follow, I guess. ARM came out, but they actually implemented their solution independently from the s390 version. So, get in touch with us. We always want to know your thoughts and the new ideas. We want to create a kind of community around there, because most platforms are working on protected VMs and they most often have the same problems. So, thank you. Okay. Now, as usual, we can field questions. There are two lit mics. Please. Okay, you have a question. On slide 31, discussing the provisioning with the protected blob. What is the encryption key used for that blob? And how does it prove to the customer who is launching this guest that it is actually being launched in the secure mode? Good question. So, it's a public key system. The blob is encrypted with an image key, but that image key is then encrypted with the public key of the machine, which you get when you ask your cloud provider that you want to deploy on that platform. So, on a specific machine. So, basically, it's a classic public key system. The private key is somewhere in the hardware, and the public key is public. You just encrypt your image, and if your VM is running, you know that it has been decrypted by a system that has the private key. So, one of these things. Okay. Okay, next question. Is there a question from the Internet? Give us a sign. No. Okay, go ahead. Hi, you make the distinction between migration and swapping. From a technical point of view, where is the difference? Well, I mean, for swapping, you only bring out and bring in pages from disk or network. But for migration, you need to transfer the memory and then also the CPU state. And the CPU state is actually a bigger problem because you need to have some kind of protocol that transfers it in a specific way, specific structures and all that. It needs to be integrity protected. And at the end, swapping only does integrity protection on one page. But if you transfer the whole state, you need to have integrity of the whole state, for all of the CPU states and of the whole memory. Also, you need to guarantee that the destination is also a proper system. Yeah, an authorized host that is able to run the VM. Okay. I see you. Ask your question, please. Are there any plans for nested virtualization? No. It's just not possible. I mean, we already have two levels of virtualization on the mainframe. Going deeper is, well, we implemented it, but we would need hardware assistance for that, and more than two levels is not in our plans as far as I know. Are any other architectures considering it? I don't think so. I haven't heard. Thank you. I mean, of course you can have unlimited levels of virtualization, but without protection. Which we don't want. Okay, any further questions? Okay, well, let's call this a day. Let's have a final big hand for Janosch and Claudio.
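The provisioning scheme described in that first answer is a classic hybrid-encryption pattern. Here is a small sketch of it in Python using the third-party cryptography package; the choice of RSA-OAEP and Fernet here, and all names, are my own illustration of the idea, not the actual s390 blob format.

```python
# Hybrid encryption sketch: the cloud provider publishes the machine's public key;
# only the hardware that holds the matching private key can recover the image key.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# 1. The machine's key pair: the private key never leaves the hardware.
machine_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
machine_public = machine_private.public_key()

# 2. Image owner: encrypt the boot image with a fresh image key,
#    then seal the image key to the machine's public key.
image_key = Fernet.generate_key()
encrypted_image = Fernet(image_key).encrypt(b"kernel+initrd+secrets of the protected guest")
sealed_image_key = machine_public.encrypt(
    image_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
blob = (sealed_image_key, encrypted_image)   # this is all the hypervisor ever sees

# 3. Firmware side: only the holder of the private key can unpack the blob.
recovered_key = machine_private.decrypt(
    blob[0],
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
print(Fernet(recovered_key).decrypt(blob[1]))
```

If the image runs at all, the owner knows it was unpacked by a system holding the private key, which is the guarantee the speaker describes.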
|
Firmware protection for Virtual Machines against buggy or malicious hypervisors is a rather new concept that is quickly gaining traction among the major CPU architectures; two years ago AMD introduced Secure Encrypted Virtualization (AMD SEV), and now IBM is introducing Protected Virtualization for the s390x architecture. This talk will present the motivations and the overall architecture of Protected Virtualization, the general challenges for Linux both as a guest and as a hypervisor with KVM and Qemu. The main challenges presented will be, among others: * secure VM startup * attestation * I/O * interrupts * Linux guest support * KVM and Qemu changes * swap and migration While the talk will have some technical content, it should be enjoyable for anyone who tinkers with KVM and virtualization. Knowledge of the s390x architecture is not required.
|
10.5446/53056 (DOI)
|
So welcome here on the Chaos West stage for the very first talk of the morning at 11. I am very happy to introduce to you Lars Roemheld. He was previously a data scientist at QuantCo and is now working for the health think tank hih of the German Ministry of Health, and he will give you a talk about Hacker's Guide to Healthcare and how to improve life with health data. Welcome Lars. Thank you very much. Good morning everybody. Thanks for making it out so early. It's my first congress and I'm a little overwhelmed with all the things that are happening, especially at night, so I really appreciate you all being here so early. My name is Lars. I used to be a data scientist. I'm a data scientist by training. Now I switched to bureaucracy, to policy, doing a policy stint. My life looks like this right now. I work in the Federal Ministry of Health in a think tank that is advising policy, sort of bringing in new ideas from the outside and informing the rest of the world about what's going on inside the ministry, and I lead the efforts on artificial intelligence. Today I'm here to talk to you about what to do with healthcare data, because I think there is a lot of talent missing in healthcare, technical people like the hackers here, and I think there is a lot of knowledge missing in the hacker community about what to do with this kind of data. I'm trying to address five points today and sort of get you acquainted with some of the stuff that is happening. To get you started, is anyone here a doctor? Can you raise your hand? Excellent. They are always the worst listeners. This is a pathologist. What pathologists do is not what you see in crime shows; what pathologists typically do is look for cancer. So whenever a doctor suspects cancer in a patient, at some point in their journey they will cut a piece of tissue from the patient, they will send it to a pathologist, the pathologist will harden it with special chemicals, will cut a thin slice of it and will look at it under a microscope like that one and they will basically look for cancer cells in that tissue. The whole process seems very scientific, it smells like chemicals, there are microscopes involved. It's fairly opaque, even the doctor sending in the original tissue doesn't normally quite understand what's happening, and in the end this pathologist will answer one question, is it cancer and if yes, how many? And then the doctor that sent the original tissue will work with that information. And as data people, we are pretty familiar with that kind of thing because it's a black box, right? Input data comes in, then black box processing happens, here in the form of this pathologist, and then some output comes back. And so I thought it's pretty natural as data people to ask, hey, what is the accuracy of this black box algorithm in the form of this pathologist? Turns out that's kind of a wild question in healthcare and I found the results, the answer to the question, fairly surprising and I want to share it with you. This is a somewhat extreme example, it's very illustrative but it's in no way out of the ordinary. It's a study done at a German university hospital in Hamburg where they compare the diagnosis that came in from the outside world, then they had an expert panel of several high ranking pathologists looking at the same tissue and comparing what did the experts say versus what did the original pathologist say.
So we're basically comparing, it's kind of a confusion matrix if you've heard that term before, where we're comparing the original prediction of a real pathologist with the closest we can get to ground truth, which is sort of a consensus opinion of experts. And the results I thought were relatively shocking. Most medical diagnoses fall on a spectrum from sort of oh good to oh god. With this example, it's about prostate cancer. If it's anything worse than oh good, so if it's sort of on the right of oh good, then your prostate will probably be removed to sort of keep the cancer in check. And in the event that you have a prostate, about half of us do, you kind of want to keep it. So having your prostate removed is not something that is a very pleasant experience. It's a very not nice surgery to have. It leaves a lot of damage in the body. So sort of having a wrong diagnosis here is important. The first thing you see is that about two thirds of all diagnoses, the green bars in the middle, are exactly correct. Which means that one third of diagnoses are actually not exactly correct. And you can say hey, you know, it doesn't really matter. But what does matter is the cases from oh good upwards, because that's where you either get surgery or you don't get surgery. And what this data shows is that from these 5,400 patients, out of those that actually by consensus opinion did not have cancer, did not have prostate cancer, or did not have an immediate need to treat, one third of those patients had their prostate removed without the need to do that. So it's a pretty drastic outcome for patients that you would really like to avoid. Now the question that I ask for myself is, if this were the result of an algorithm, of a machine learning algorithm or some other algorithm, would this be acceptable performance? Maybe not, but for humans it's the best we can do. But I think being more transparent about how humans are just humans, and maybe not particularly well suited to look at pictures all day, would be a good thing, to have more clarity in the healthcare system about what diagnoses actually mean. And so this is my starting example. In the medical community, this is called inter-observer variability. So different observers look at it and their diagnoses vary depending on who looks at it. Intra-observer variability, so you show the same picture to the same doctor at a different time of the day, also exists and it's not much better. The example I showed you is very illustrative, but as I said, this happens in a lot of medical fields, not only for prostate cancer, but for all kinds of cancer, for all kinds of illnesses. And again, it shouldn't be surprising, doctors are not bad people, doctors are just humans like us all, and so mistakes happen. And sort of, it's one example of these questions that I think can be answered with data and that are currently maybe not explored enough. Another example would be who gets treatment when. So who has access to doctors, how long do they have to wait for appointments, that kind of thing. How unhealthy are waiting rooms? So as people are waiting at their local doctor's office to be seen often for hours next to many sick people, how unhealthy is that really? Does specialization into more and more specialized doctor's practices actually improve quality of care, or does it maybe detract from it because people lose oversight? Can we predict the best course of treatment for a given individual to personalize medicine?
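(To make the kind of arithmetic behind these numbers concrete, here is a minimal sketch of reading an over-treatment rate out of such a confusion matrix. The counts below are invented for illustration, chosen only to roughly echo the magnitudes mentioned in the talk; they are not the Hamburg study's data.)

import numpy as np

# Rows: consensus ("ground truth") grade, columns: original pathologist's grade.
labels = ["no treatment needed", "intermediate", "treat"]
cm = np.array([
    [600, 200, 100],   # consensus: no treatment needed
    [150, 700, 250],   # consensus: intermediate
    [ 50, 150, 800],   # consensus: treat
])

exact_agreement = np.trace(cm) / cm.sum()
print(f"exactly correct diagnoses: {exact_agreement:.1%}")   # 70% with these invented counts

# Anything predicted worse than "no treatment needed" pushes the patient towards surgery.
no_treat_row = cm[0]
overtreated = no_treat_row[1:].sum() / no_treat_row.sum()
print(f"overtreated among 'no treatment needed' patients: {overtreated:.1%}")   # 33% with these invented counts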
Or does the probability that a given patient receives hip surgery, so they get a new hip, does that depend on the reimbursement the doctor gets, i.e. the price tag associated with that surgery? These are all questions that you would like to answer with data, and the problem in healthcare is what I call the great irony of healthcare data, which is roughly this: the more legitimate the use would be, the harder it is for those actors to get access to that data, and the converse holds. As a case in point, in Germany in the year 2020, most patients still do not have some official plan of their medication, what meds are they taking. This is important because you have these interactions and side-effects of different medications, and when they're hospitalized you want to know what they're taking, and when you prescribe a new piece of medication you want to compare that to what they're taking already. The really active, really engaged patients in Germany write these little papers by hand, where they write down what has been prescribed to them at some point, and just carry that piece of paper with them, and then if they go to a hospital and they get out of hospital again, they would have to update that piece of paper. Maybe they do, maybe they don't, it's just not a good system. So as a doctor who is working with patients, every time I see new patients, I have a really hard time finding out what they're actually taking already, and what I can prescribe to them. Then we have this phenomenon of consumer electronics companies increasingly going into the health data space in a very smart way, and I think there's a lot of good things happening here, but basically declaring their health care products as lifestyle products, thereby circumventing a lot of those pesky health data privacy problems, by saying this is not actually health data, this is lifestyle data, and getting more access, and you can sort of think why does Apple need to have health data. At the same time, I think a lot of good things are happening here. Patients are becoming more and more empowered to make their own decisions, they're learning more and more, and I think overall it's a good thing. The worrying part about this is that traditional health care providers might actually lose track of, or sort of lose the connection to, the consumer electronics competition. And then finally, as we heard yesterday, health data does get lost, and it's a terrible thing. This is probably the worst story I've ever heard about. Singapore lost a whole database of people with HIV, and was extorted over it, and you really don't want that to happen. So what are the learnings for the hacker community, and what are the things that I want to talk about here? First of all, I want to say very clearly, keep hacking. You know, you don't want to lose this data, and I think having people like the CCC community being white hat hackers, and making sure that people stay on their toes, is very, very important for the safety and security of our health care data, because I don't think we will go back to a world where this is all on paper. This will be digitalized, and we will need to make sure that it's safe. The second part I want to talk about is a little more subtle. In data privacy under GDPR, you have this idea of consent.
So patients can donate their data, patients can say you can use my data to do something, and this conversation currently, especially in Germany, but all over Europe, is very much driven by this idea between narrow consent and broad consent, where narrow consent is this idea that as a data subject, I can say very specifically what can and cannot be done with my data, and so I need to allow every single step of processing my data, whereas broad consent would say, you know, hey, I can say my data can be used for all kinds of cancer research. And the issue that I see is that the idea of narrow consent is beautiful in theory and is a great legal idea, but in reality leads to situations where, for example, a patient is being asked to sign a data usage form in the waiting room of a hospital by a study nurse. I don't think that's the idea of freedom we had when we first said we want narrow consent, because you have this power dynamic, the patient wants treatment, that is not a free choice what to do with their data. And I think there's a trade-off here between narrow consent and being very specific, what you can do with data, and the user experience part of it, of how you can give that consent. And I think being more clear about the chances we have with a broader idea of consent, but having better user experience is something that would be extremely helpful in health data and in healthcare. Sort of similar to this idea of the cookie banners you get everywhere, where having one switch per tracking cookie, at least for me, doesn't really help me much. I want to say cookies, yes, cookies, no, in a very simple one-click solution. And then, finally, I want to give you some ideas of legitimate uses of things that you can do with healthcare data, in maybe kind of an open data style project that you can work on to answer, for example, the questions that I started with. So, to get you started on something, I thought I would tell you about a few data sets that I think are cool and allow some hacking. The first one are these two. On the left, you see a famous data set for pathology. So, what we started with, this is what human tissue, when you sort of cut it and dry it and cut thin slices of it and look at it in a microscope and color it, looks like, these pictures on the left, you get them with a diagnosis and you can train computer vision algorithms to hopefully improve on the quality of diagnosis that we currently have. On the other side, just to show you another example, these are pictures of birthmarks and the question is, is there melanoma, so is it skin cancer or not? Both of these data sets are relatively well studied and great places to start if you're into computer vision. Then, this is a bit of a German-specific thing because healthcare is organized nationally and so different countries will have different data sets, but these are two data sets that I think are very interesting and probably underutilized. The one is the so-called DRG browser where you're actually intended to download software which then allows you to see certain aggregate statistics about what billing codes are used for procedures are used, so it tells you a lot about what healthcare is rendered and which numbers in Germany. So, the example here is actually birth and under the billing code for birth, what procedures were done, so what specific procedures were happening. You can download that software online. It turns out that all the data that software displays is just in CSV files and install directory. 
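(As an aside on the CSV point just mentioned: a minimal sketch, with pandas, of what getting your fingers dirty on such an export could look like. The file name, column names and the example code are hypothetical placeholders, not the actual DRG browser export format.)

import pandas as pd

# Hypothetical export: one row per (DRG code, procedure) with a case count.
df = pd.read_csv("drg_export.csv", sep=";", decimal=",", encoding="latin-1")

# Which procedures show up most often under one billing code?
one_drg = df[df["drg_code"] == "O60A"]                      # placeholder code
top = (one_drg.groupby("procedure_label")["case_count"]
              .sum()
              .sort_values(ascending=False)
              .head(10))
print(top)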
You don't need to install it, but it's very interesting data to just get your fingers dirty on. And the other thing is a so-called Qualitätsberichte is something that all hospitals have to publish and somehow I don't think this has gotten much attention in the open data community yet. It tells you a lot of data about what types of procedures, what types of diagnoses hospitals are seeing and then some. So, I don't think this is necessarily super helpful for their quality metrics that they're publishing there, but it tells you a lot about what kind of patients hospitals are seeing. And if you pool this with other data, like census data, I think there's a lot of interesting things you can do here. This is another last example to sort of get you thinking a little bit out of the ordinary. On the left, you can see that certain cities have published the availability of their emergency rooms in different hospitals. So, basically, for a given hospital in a row per time of day, you can see how much room they have in their ER for specific diagnoses. So, you can say hospital A is getting flooded with gynecology cases, but they're doing okay for bone fractures. And you can do all kinds of things like try and build predictive models on this. Maybe somebody wants to archive the data somewhere. I think there's a lot of interesting things that you could do with this type of data. Of course, originally, it's intended to be used for ambulances to see which hospital to drive to. And then finally, on the right, is a very well-studied example from the US. Probably one of the richest data sets that we currently have in healthcare. Mimic is a data set of patients in a hospital system on the east coast that contains almost everything that happens in a hospital. So, lab results, doctors' notes, medication, all of these things that in Germany we don't really have access to are in this data set for research in a de-identified manner. This data set is not public-public, but extremely easy to get access to. So, another interesting case to dig into and see what's possible. And there are some privacy concerns, but given that this data set is widely used in research already, there's a lot of papers about it. I think this is a relatively innocent example to start with. Now, with all healthcare data, there's a lot of issues, and I want to talk about them briefly. I see three main issues. The one thing is ground truth is often lacking. It's unclear what ground truth actually is. And if you remember the example I started with, you know, you have an original doctor's diagnosis, and now you have a consensus diagnosis. Is that consensus necessarily better? Why is it better if doctors agreed on it? Maybe they were wrong, and the original doctor was right. It's very hard to really find out what the right diagnosis would have been, and I think a lot of research is lacking in that regard. And this sort of carries over to a lot of different data points where it's extremely hard in healthcare to just trust the data that you have. Then second, semantics are surprisingly difficult in healthcare. What this means is, you know, somewhere in a hospital, they store lab results, for example, and there's just no standard whether they use milligrams per liter or grams per cubic centimeter, or there's all these different options, and this one example sounds trivial, but this is all over the place. How do you store a certain diagnosis? Is it the flu? Is it influenza? Is it influenza A? 
There's just many different ways to code this stuff, and having semantics, which is called sort of the mapping of what people use to what you can actually work with as a data person, is still, I would say, in its infancy. And then finally, and this leads me to my last two points, there's a lot of sampling bias and resulting issues of representativeness. And I think sort of deep down, this comes from the fact that healthcare data is personal data, and so, you know, you can't just go and get access to all healthcare data for all patients. So often you have these examples where some data set surfaces, like the Mimic data set, and then that's all you have, and you work with that. And that leads to a lot of representativeness problems in medical research, but I think in general in a lot of healthcare data projects as well. This is one example, a paper published in Nature this year, which is very impressive. They predicted some really cool things, but the only data they had was from Veterans Affairs in the US, so a separate hospital system for US veterans, and as you can predict, they're 94% male. So learning algorithms on data that is 94% male, sort of generalization error will be an issue, and you would like these algorithms, you would probably like these algorithms to work as well on women as on men. At the same time, what else are you going to do? This is the only data that's available. So you can't really blame them, but it's just an issue that is around in healthcare and that you need to take into account. There's one thing that this also implies, which is it's very hard to certify that a certain medical device works as intended, because the regulatory bodies also don't have fully representative test data. You're basically going to have to believe the manufacturer, the vendor of a medical device, such as an algorithm, that they used appropriate test data and that they used representative studies. Now traditionally, in pharmaceutical research, you run these randomized controlled trials, i.e. you have a status quo, you have a new drug, you sort of randomly give the new drug to 50% of patients, use the status quo on the remaining half, and then you compare the two groups. Still the gold standard in evidence, but extremely hard to do this in a representative way because you need to recruit patients to voluntarily take part in a trial. It's expensive and sometimes might not be possible, because for example, you might be interested in how cancer drugs work in children. We think that running medical trials in children is unethical, so how are we ever going to get to a point where we can learn about the efficacy in children? And I think a really cool opportunity for evidence that we have with algorithmic data and algorithmic medical devices here is that we can collect test data as a regulatory body. And so I think this is something that is currently being discussed internationally by the World Health Organization, on the European level with the new Commission, and that we are doing a lot of work on, which is this idea that governments and regulatory bodies should collect test data in a representative manner. So what would happen is, on the top row you sort of see the normal workflow here for deep learning, but it could really be anything. There's public data, there's private data, people train their algorithms on it, and in the end they come up with a model on the far right.
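(To make the "works as well on women as on men" question above concrete: a minimal sketch of a per-group evaluation on a held-out test set. Data, model and numbers are placeholder toys, and a real quantitative-fairness analysis, e.g. equalized odds, goes well beyond this simple accuracy gap.)

import numpy as np

def per_group_accuracy(y_true, y_pred, group):
    # Accuracy per subgroup plus the largest pairwise gap.
    accs = {g: float((y_true[group == g] == y_pred[group == g]).mean())
            for g in np.unique(group)}
    return accs, max(accs.values()) - min(accs.values())

# Toy test set, echoing the 94% male example above.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=2000)
sex = rng.choice(["f", "m"], size=2000, p=[0.06, 0.94])
# A deliberately skewed fake "model": extra errors on the under-represented group.
flip = (sex == "f") & (rng.random(2000) < 0.15)
y_pred = np.where(flip, 1 - y_true, y_true)

accs, gap = per_group_accuracy(y_true, y_pred, sex)
print(accs, f"gap = {gap:.1%}")   # is a 2-point gap acceptable? 4? That is the policy question.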
And then the regulatory bodies, say on the European level, would have a test data set that is actually secret, that is sort of not to be shared with anyone, that has a high quality standard. And this can be achieved, for example, by mandating that you collect this data from hospitals all over Europe, and you just say hospitals have to submit, say, 1% of the images that they have to this body. And now I have representative data that I'm not sharing, and so I can use it as test data. And I also can keep doing this over time so that I can account for population shift. So for example, in skin cancer, you could say that in Germany the average patient in the next 10 years might have a slightly darker skin tone than the average patient in the last 10 years because of migration patterns. And so if you're working with algorithms, you want to account for this population shift and you want to make sure that the algorithm that you certified looking back on test data from the last 10 years will still work on the new patients of the next 10 years. And so this will be a way to keep collecting this data and recertifying algorithms. Which finally leads me to my last point before we open to questions. I think what you have here is just accuracy in the end, but this doesn't really answer the question of what fairness means. And I mentioned earlier that you want an algorithm that was trained on 94% men to work as well on women. But what does as well actually mean? You know, is a 2 percentage point difference good enough? Is 4% good enough? Sort of not super clear from the outset what fairness means in these things. And so I think this is a broader issue that we should discuss as a community beyond healthcare, which is quantitative fairness, which is the idea that more and more decisions are made by algorithms instead of people. Traditionally, when you look at discrimination and fairness, you try to sort of empathize with the actor. So in the end, the judge would say, did he mean to discriminate or did he not mean to discriminate? With machines that doesn't really work anymore, and so we need new ideas to describe what fairness looks like and what discrimination looks like. And approaches to quantify the fairness of a decision exist. The literature is relatively mature. This is one overview. But somehow in Europe, we don't really have that debate yet. And I think it's long overdue that we start having this debate for healthcare and outside. So, my talk in a nutshell. First of all, keep hacking. Second of all, think about what informed consent really means for data. Third of all, do contribute legitimate uses. The data I showed you, the questions I showed you, are a starting point. Reach out if I can help with anything to connect you with data or ideas. Fourth, demand evidence for medical devices. And I think trying to seize the opportunity we have to get better evidence is a really big chance. And fifth, promote quantitative fairness. Thank you very much. Thank you, Lars, for your talk. So we actually do have a lot of time for questions. We have two microphones in the hall and the signal angel. So please line up at the microphones. And we start off with a question from the internet. So please, signal angel. Yeah, I'm familiar with the literature. It's not one of our main topics because we focus on digitalization. So the checklist idea is pretty convincing, actually.
The idea was that in aviation pilots were at some point required to fill out these checklists before takeoff and during flight and say, yes, I did check this, I did check this, I did check this and sign it, which pilots hate it because it makes you do things that you don't really want to do. They were forced to do it and it dramatically reduced incidences in aviation. And now, the same thing should hold for doctors because doctors also hate this idea that they're subjected to processes but checklists to reduce the number of incidents that happen. The idea is relatively old and I think has been taken up in medical guidelines a lot. But we don't specifically work on that because we focus on digital aspects. Okay, microphone, let's call it one. How would you legislate or decide what kind of consent would be useful for medical data? Currently, it is, I mean, there is legislation around it. It basically comes down all the way from GDPR down for German legislative bodies. Currently, health data is deemed a special interest private data and so you need very specific narrow consent. And one option that you have for health data in particular is that GDPR actually allows for exceptions to this idea of consent where you can say if an important social interest stands against your private interest and keeping your data personal, you can say this data is available for research use, so specifically for research. This is something that can be done, but I think because it's so sensitive, you want to give people the opportunity to maybe still opt in but in a different way, maybe in a broader way or opt out. And I think that's why I'm talking about broad consent because I think narrow consent currently is too narrow and there would be ways to go broader and that would significantly facilitate access to this data for healthcare. Okay, microphone two. So you mentioned at the beginning this patient that had cancer. So it was diagnosed with cancer but he didn't have cancer, was operated. But I don't think that was the fault of the doctor because this reduces to statistics. You have the precision record rate of like if you want only the patients that have true cancer, you have high precision, but you miss a lot of patients that have cancer. So or if you want our patient with cancer, you will have a lot of people that don't have cancer. So and the same problem you also have with machine learning. So you have the precision record rate of and this reduces to actually two problems. You need better measuring devices, so better devices that reduces this error, the variance and of course more data. So in self, it's not the problem of the doctors, so it's statistics. So first of all, I agree with you, it's not the problem with doctors. They're doing a fine job and doing the best job they can. But sort of as data people, if you look at this naively, it's kind of weird that you have humans looking at pictures all day and are expected to diagnose 100% correctly. You know, all day every day they just look at pictures. That's not what humans are good at, so it shouldn't be surprising that mistakes happen. But it's not their fault, it's just their human. The second part is, yeah, you're right, there's precision recall, but what I'm arguing is that humans have poor AUC in their predictive qualities and we would want to have better performance in this type of algorithm. And how can you say that? Okay, please, please, concise questions. No discussion. Okay. Is there a follow-up question to that? 
Yeah, but where do you then set the threshold? Do you want more precision or do you want more recall? You want more AUC? Okay. Is there a question from the internet, signal angel? Yes, Twitter wants to know, do you think that using algorithms on the existing health data can help prevent gender-biased diagnosis, or isn't there enough knowledge about how diseases show in women versus men? Sorry, could you repeat the second half of the question? Okay, I'll phrase it a little bit differently. Given that we already know how certain diseases show up in women differently than men, do you think that big data and algorithms can help discover more of this and change how diagnoses are provided or how accurate they are? Yes, absolutely. I think first and foremost, the medical practice often does not yet sort of live in the year 2020. So I think a lot of gender bias exists implicitly in how treatment still is delivered. And so first of all, I think using existing healthcare data is helpful to show these biases and demand them to be reduced. And then second of all, I mean, I can't predict the future, but I could imagine that you find all types of understudied groups of patients that maybe historically, because they didn't sign up to be part of RCTs, randomized controlled trials, were understudied, and you can find gaps in the care that these people receive. And women are one example, but I think ethnic minorities would be another, age groups would be another. I would expect a lot of these types of insights to surface. Okay, microphone one. Yeah, really short question. You mentioned this German quality assurance data set. What does it contain, besides admission rates, does it contain outcomes or risk adjustment, etc.? You mean the DRG data set? The DRG data set actually only contains aggregate data on the total, so the total episode. So what was the total episode billed as? So there's a billing code and then there are procedures linked to that. Other data sets exist, some of them private, some of them public access. But I think if you want to have more detail on all the admission data, you're moving into a world of currently private data or protected research data, because it's very hard to anonymize this data. And so there's always a risk of re-identifying it. Okay, so microphone one again. Thanks. Thanks for your talk. I like the idea quite a lot with building up representative data sets, obviously. But for me, the question stays in my mind, what you are achieving is a representative data set, but not necessarily also a high quality data set, especially if you force this data collection upon hospitals, then you could end up with a really messy data set. What are your thoughts about this? Very good point. I think it's not enough to just collect this data. I think you would also need to actually invest in the quality of this data. So you would basically need to hire an expert panel to improve the data that you're collecting, which means this is a very expensive overall process. You will not be able to do this for every type of diagnosis that you're interested in, but only for the bigger fields that are becoming more and more mature. But one example, I think, mammographies, so breast cancer screening, is relatively mature as a technology. There are several companies going on the market now, and I think this will be one field where starting to collect international test data will be very feasible and probably worth it. I'm looking at the signal angel. Is there a question from the internet? No? Okay.
Then microphone two please. So looking at this from the perspective of the individual whose data is supposed to be these data sets in the end, is there any work being done? Or do you have ideas on how to, so to say, soften the blow if your data becomes public? This could be either unintentional, for example, data leaks, or also if I just agree to have my data be part of a public data set, what kind of protections to have as an individual in that situation? Because I think that is obviously one big reason why you don't have access to sensitive data, because it is sensitive. And is this sensitive often for reasons that are maybe something you could address with policy? Yes and no. So first of all, I think what we're actually lacking is a German verwendungsverbote. So penalized protection is that even if you have certain data, you cannot use it. So, you know, if certain types of data fall into your lap, or if you happen to accidentally re-identify anonymized data, you're obliged to delete that data. Which does two things, it makes the data less valuable on the markets, so hacking for them becomes less interesting. And second of all, those actors that are sort of in good faith will help protect that this data doesn't move through the world. But I think they kind of fall short of really protecting you. I would say one part is people tend to be a little too scared, I personally think, about what can be done with their healthcare data. So I would be pretty careful with genetics data, because we know there's a little information in there, we don't know what data is in your genetics data. So yeah, maybe not. And there are certain stigmatized data points like sexual health, or psychological health that you maybe don't want out in the open. But besides that, I think, yes, there are certain issues. These issues we can address with legislation, maybe better than we are. So quantitative fairness can also go to health conditions, and can say, you know, you cannot discriminate against certain health conditions as an insurance company. That would take a lot of fears away that people currently have. And then finally, I think, my idea is I'm willing to share my healthcare data if you're also sharing your healthcare data, because I think in a world where everybody's data is open, a lot of the risks are already mitigated. So I think being conscious about how to do this, and maybe not starting with sexual health data, not starting with psychological health data, would be a way forward. Okay, are you curing for the microphone? Microphone one. Hi, my question is basically that, giving all the concerns that people have with disclosure of data being detrimental to their personal privacy and issues, are we already at that point where we really need that, considering health, if there's a toxic substance, a lot of health issues are caused by needing to analyze data on how toxic stuff would be in certain circumstances. In the public sphere, have we done all data disclosure that is not private, that is not linked to an individual, that is linked to male companies having a business model that generates health issues for the public, and are we already disclosing this data first before we harvest my data? Like how does this count in? 
Because in Germany, if I recall right, there was this issue of coffee makers and the coffee brewing, and they investigated that the cleaning in the morning of this, toxic substances, that is not toxic substances led in the first brews of the coffee, but they wouldn't disclose this data for saving the companies, not having a bad business model there. So the question is, is my private data the last resort here, or is there a headway and wiggle room for improving our health without harvesting my data first? Why does it have to be either or? I think asking for more aggregate data to be publicly released is a good idea. I showed you some available aggregate data sets. You can probably think about more. But in the end, make no mistakes, these aggregate data sets come from pooling individual people's data, so you need to collect it at some point to then be able to do your analysis on it. So I think even current ideas say, maybe you cannot actually, even as a researcher, get access to one individual patient's data, but you can run queries on a lot of patient's data, and since you can't predict the queries that are interesting, you need to collect individual data at some point. Okay, microphone two. Thank you for the talk, but you mentioned mammography as a good example for using computerized detection and any improvement. So we had two waves of automatic mammography in the last 30 years. The first one was a total mess, because the techniques were not safe enough, good enough, and they differed from side to side, what they used, analog techniques and things like that. And the next wave was about 10 years ago, where we used automated data sets and all the things that we had, and it ended in a total mess too, because in 2015 when we did the first revision of this idea, we saw that the people that used it, either in the US or in the European hospitals, had the problem that there were too much false positive ideas, so we had too much recalls, and I think we lost a lot of trust, and so how can we avoid this and the next time we use it? I think what you're saying is exactly why I'm suggesting that we need regulated test data to avoid making these mistakes on patients. I would personally think that technology currently is relatively mature, and there are several companies going on the market again with these types of products, and so precisely because in the past it didn't always work as intended, you might be interested in ascertaining as a regulator that you have good test data. And I think the re-improvement of the test data is a crucial point, as you mentioned. Okay, microphone one please. Some important health data we have in Germany are the cancer registers, but they are organized on a federal level, and we had a lot of problems getting them running in a good way, and now we have the new possibilities of the electronic patients file coming up, and also when we will have a change on our regulation with organ transplant stuff, we are going to have some sort of organ register, like am I willing to donate or not? Are you thinking about is this register going to be organized on a national level or on a federal level, and do you think that the future of the electronic patient, the APA patient file, will change the way the registers, like the cancer registers are organized? Do you think that there will be some new interaction in how we use this data? Personally, I really hope so. 
Maybe for those that are not familiar, Germany, I think like other countries, has these registers where we collect data on individual diagnoses to be used for research. So we say, hey, we have a lack of understanding of a certain cancer type or of organ transplants, and so we collect data specifically for this purpose under a regulated exception. Absolutely, I mean, in a world where we have a national electronic medical record, you would hope that this registry data can be included there, partially also because the electronic medical record could offer consent management, where patients could be polled to give their consent for certain uses, and having that on national infrastructure in a secure environment would actually be very desirable. The second part of your question, who knows what's going to happen with the national electronic medical records there. They're not really available yet, and we don't know how people use them, but in an ideal world, I think they could be used as a central information storage and sharing opportunity, not only with doctors, but also, if the patient wants that, with registries and with researchers. Did I answer your question? Yeah, thank you. Just not the organ thing, if you think this is going to be national or not. That's not my field of expertise, unfortunately, I can't tell you that. Okay, microphone one again. Thank you for the talk. I attended the presentation yesterday about the ePA, so the patient file, the case file, and what was explained there is that they don't want to make the effort to actually have the option of opening the file to certain doctors for only certain parts of your own file. It's more about all or nothing, which you have to choose as a patient first when they implement it. And considering that private health companies already implement systems for data collecting with automated pseudonymization for the data which they send to research centers, so to say, is there any effort by now from the German government or certain institutions to follow such an idea of pseudonymization, to collect the data and keep the individual data with the doctors? And if not, where is the best place to start for promoting such ideas? So the idea of pseudonymizing and anonymizing healthcare data, of course, is widely spread and is well understood in the government. I think for the German patient records, the national EMR, that doesn't really, well, it could have worked, but we decided to actually have central storage and guarantee privacy by encryption, which has certain advantages. In particular, I think, me personally, I'm much more comfortable with my data being in sort of national infrastructure than being in doctors' offices, seeing the IT security of the typical doctors' office in Germany. So I think there's a lot to be said about having that in a national infrastructure type place. The second part, I think the option, so what you currently can do is you can share all your data with one doctor. You cannot choose which parts of your data to share in the current specification. I think the criticism has been heard and it will be possible to share only selected parts of your data. But again, I think what doctors are saying on the other side of this discussion is, what good is a subset of a patient's data. The patient might not know what parts of their previous diagnosis I actually need for my work. So I think the idea of withholding data from doctors is very unpopular with doctors.
And I think it might be much more of an opt-out type situation, where I don't want my sexual health information shared unless it's explicitly needed, but everything else is okay, than an opt-in situation where I choose every single piece of data that I have in my patient record. Okay, may I ask a question? Yeah, okay, so it doesn't really answer for me the question about the pseudonymization, because I think there's a lot of interest in gathering the data pseudonymized to do research. So why couldn't this be a first step? Because with the new ePA coming, if you have the access or if you have signed up for this electronic case file... Let me interrupt you for a second. What you're saying about pseudonymization is totally right. In the law that went into effect I think two weeks ago, the DVG, the Digitale-Versorgung-Gesetz, we actually installed a center to collect research data from health insurance companies and we have actually written into law a mechanism by which this data is pseudonymized. So pseudonymization is in the law and it is being used. But from a data privacy standpoint that is just de-identifying, that is not anonymizing, because using external data you can still re-identify the data. And so it's a hygiene factor, but it doesn't solve all these privacy issues that we have. And so I think yes, we're doing it, but at the same time you need consent and you maybe need, in certain circumstances, effective anonymization techniques to really solve this bigger issue. Okay, so this concludes our Q&A and we thank Lars very much for the talk and for the extensive Q&A. And a round of applause for him. Thank you. Thank you.
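(On that last point, pseudonymization versus anonymization: a minimal sketch of why a keyed pseudonym is only a hygiene factor once quasi-identifiers remain in the released data. All names, keys and tables below are invented for illustration.)

import hashlib
import pandas as pd

patients = pd.DataFrame({
    "name":      ["Anna Schmidt", "Ben Meier"],
    "zip":       ["10115", "50667"],
    "birthdate": ["1980-02-01", "1975-07-23"],
    "diagnosis": ["F32.1", "E11.9"],
})

# Pseudonymization: drop the direct identifier, keep a keyed hash instead.
SECRET = b"research-centre-key"   # hypothetical key held by a trusted party
patients["pid"] = [hashlib.sha256(SECRET + n.encode()).hexdigest()[:12]
                   for n in patients["name"]]
released = patients.drop(columns=["name"])

# An attacker with an external data set (a voter roll, social media, a leak...)
# can still join on the remaining quasi-identifiers and re-identify records.
external = pd.DataFrame({"name": ["Anna Schmidt"],
                         "zip": ["10115"],
                         "birthdate": ["1980-02-01"]})
reidentified = released.merge(external, on=["zip", "birthdate"])
print(reidentified[["name", "diagnosis"]])   # the diagnosis is linked back to a name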
|
Health related personal data is highly sensitive -- and yet it promises an outright methodology shift for the surprisingly conservative healthcare system. This talk provides an overview of beneficial uses of health data, and formulates ways to get involved to make sure the benefits are reaped in a conscientious manner. Healthcare is rapidly becoming digital: security and data privacy call for active participation. But so do questions of quantified fairness and certification of digital medical devices. Hackers can play a crucial part to ensure this benefits patients and citizens, by championing data transparency and standards of evidence. My talk will outline ways to get creative with data beyond scrutinizing governments on information security. For the past year I worked for the German Ministry of Health's in-house think tank (hih) as an advisor on artificial intelligence. I will present my personal views, not those of the Federal Government.
|
10.5446/53064 (DOI)
|
So I'm very happy to announce, for the second talk of the day, Kirils Solovjovs. He's a lead researcher at Possible Security and he's a bug bounty hunter. He's an IT policy activist and a white hat hacker from Latvia, and he's talking today about nothing to hide, go out and fix your privacy. Some citizens complain about being under surveillance, but they are told that if they have nothing to hide, they have nothing to fear. Still, news media regularly cover cases where citizens with unusual behavior are put on suspicion lists even though they have broken no laws. Now this is a quote from a news article from the European Digital Rights newsletter, EDRi-gram number 300. Anyone here took math in college? Mathematical logic? Some? So you can tell me what this means. So basically what it says here on the screen is, for every person that belongs to the group of criminals, that person also belongs to the group of people who are hiding something. That funky sign in the middle means that from that statement follows that for every person that has something to hide, that person also belongs to the group of criminals. How many of you think this is correct? No one? Great. It's of course wrong, but it is a really common fallacy. I hear that a lot. Now what even is privacy though? Privacy first of all is the autonomous right to choose. Who will work with my data? So maybe I'm fine with company A working with my data but I'm not okay with company B working with my data. Also how the information is processed. So maybe I am okay for Amazon to process my home address to send me a package, but I do not consent to Amazon sending goons to my house to get money out of me. And of course what information is processed. Even though I'm okay with giving my address, maybe I'm not okay with giving my phone number. But it's not only that. It's also the right to decide who I interact with. That includes the right to be left alone. So in a sense privacy is also about consent in a lot of ways. Now, to illustrate this a bit better, to try to transfer my feeling to you, I created this concept of Schrödinger's video camera. You may have heard of Schrödinger's cat. Schrödinger's video camera goes something like that. Imagine you just bought a new apartment somewhere in Paris. Great view. Nice building. And then one day you notice that a security camera has shown up outside your window on the open side. It has one of these non-transparent domes and you cannot see inside it. You cannot see if it's looking at you or not. You can't even tell if there's a camera inside or if it's completely fake. But for me it doesn't matter. The feeling I get is terrible either way. Even if someone told me that there's no camera in there, even if someone showed me that there's no camera in there, I wouldn't feel at ease. So that's why for me privacy is in many ways the feeling of privacy. Now exactly one year ago at this stage I gave a talk, a talk on personal privacy in 2018. I just want to acknowledge some reactions that I've been getting since the talk. So some of the reactions are funny. Some of them are affirmative. Others are insightful. Of course others are just plain uninformed, because you do not have to actually use a phone, even though it's hard, but you don't have to. Even more so, you don't actually have to use Facebook. But these two remaining arguments, I want to talk a bit more closely about these two. So here's a short video. Some of you may have seen it. So it's a video of a lady trying to unlock a man's phone with Face ID. Now it's clear from this that it's fake.
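(For reference, the implication described on that slide, written out in standard notation; this is a reconstruction, not a verbatim copy of the slide. The slide's claim is that the first statement entails the second, which it does not.)

\forall x\,\bigl(\mathrm{Criminal}(x) \Rightarrow \mathrm{HasSomethingToHide}(x)\bigr)
\;\not\Longrightarrow\;
\forall x\,\bigl(\mathrm{HasSomethingToHide}(x) \Rightarrow \mathrm{Criminal}(x)\bigr)

Reading the implication backwards like this is exactly the "nothing to hide" fallacy: the set of people with something to hide is much larger than the set of criminals.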
At least it's a sketch. It's clearly visible. But what I want to talk about is the reactions. And I'm outraged by the reactions. The main topic of the video aside, all these people here assume that since he has something to hide, he did something wrong, which is unacceptable to me. So what if you really got nothing to hide? Maybe we have people in the audience here, even though I doubt that, who have nothing to hide. I'm sure we have some people like that watching the stream, not here at the congress. So what about data hoarding? What if someone, let's say a government or a large corporation, these days maybe that's more likely, collects enough data about you that allows them to more easily blackmail you, that allows them to impersonate you? Even if that's not the case, think about herd immunity. That's a concept used in vaccinations. You should get vaccinated because some people medically can't, and you will also protect them by providing a human shield, if we can say so. The same applies for privacy. There's herd immunity in privacy. So many people not hiding anything will make it really hard for the few who actually have something to hide. And it doesn't have to be anything criminal or nefarious. Many people have legitimate reasons to hide things. And by accepting that you're okay with not hiding anything, by giving up your right to privacy, you're also pushing those people that actually need privacy to give up their right as well. And of course, remember that today's authority might become totalitarian or inhumane. We see the transformation process starting in a couple of western countries right now and I don't know where it's going to lead us in the next years, but it may be the case. Imagine how much easier it would have been for Adolf to have committed his atrocities here in Germany if he had Facebook, if he had access to all the data. I hope that's not coming back, never. Now what I want to talk about is the state of privacy. I want to take a look at what's happening around the world. And what's happening around the world, of course, is there are these protests in Hong Kong going on. So this is an article from the middle of this year. And people in Hong Kong, they're really aware of tracking, that their transportation cards can be used to track where they move. So they try to avoid that. We are afraid of having our data tracked is what they say. So that's good. People are getting more aware. But it's also happening in the West. Los Angeles decided, passed a law basically, that all the scooter sharing companies have to share real-time scooter location data with the government. And Uber, even though I dislike their methods of trying to disobey the law wherever possible, I think they took the right stand. All the other companies actually are ready to give the data and are giving the data to the government. Uber is the only one who questioned this and tried to fight against that. The idea behind that is a good idea. So the government wants to make sure that the communities, the geographical communities where underrepresented or underprivileged people live, shall we say, that they also get the scooters. They don't target just the rich neighborhoods. So that's good. But do they really need real-time tracking for that? I doubt that. Now my favorite topic of CCTV, of course, this is the only slide from last year that I included here. These are actual posters, for those of you who haven't seen them, from the UK, how the government is telling you that CCTV is good. So what's new?
Well, this happened. Beginning of this year, so in May this year, a man was stopped by the police. All he did, he was walking by and someone warned him that there is facial recognition going on over there. So what he did, he pulled up his sweater, he pulled it over his face and walked by. Police stopped him, forced him to scan his face, found nothing wrong and fined him 90 pounds for trying to evade the facial recognition. So it's super disturbing and super creepy. But I mean, there are ways around that, right? We could use this cap, for example, and have the face of, I don't know, the general secretary of a Communist Party somewhere. So then you're not trackable, not attracting attention, right? There is this law in Hong Kong that doesn't allow you to use masks anymore in the protests. And it's problematic because they track everyone through facial recognition. So that's why the mask was allegedly created. The good news is, and it's relatively recent news, in November it was ruled, the political system in Hong Kong is complicated, but a Hong Kong court ruled that it's illegal to ban wearing masks. And the full hearing in the next level of court is still going to happen in January next year. But currently, the law is suspended. Currently you can actually wear masks. I mean, I don't know if the police are actually okay with that, but by law you can. Now, privacy advocates, me included, have always been complaining about not being able to use public Wi-Fi without a phone number. Finally we can. We have an option to take a selfie and upload the passport. It's just terrible. Oh my God. Let's get back to facial recognition for a second. So European countries are trying to copy and paste the idea of what China is doing. It's not just China, it's not just Russia. But the difference is they are asking for permission. So why are they asking for permission? Well. And now that they've got a negative answer, the question is, will they listen? Yes, they will. Because the European data protection board fined a school in Sweden for using facial recognition. I mean, thanks to whoever made it, but we have GDPR. So we do have some protection in Europe. Unfortunately, not all of us here and not all of us watching this stream are so lucky to have that kind of regulation. But it does actually work. Even though I've been hearing bad things about GDPR, yes, there are things to improve. But it does work. It does help us. Now, as you were told, I come from Latvia, and I want to share something from Latvia. This is a picture I took at a press event in Riga. So police were presenting their new vehicle, this one over here. And this vehicle, the idea is it will automatically fine people for not wearing seat belts, talking on the phone, not showing turn signals. So it has a bunch of 360-degree cameras in there and it's doing some fancy stuff. I mean, I'm all for traffic safety, but check this out. I mean, I like that they do have a sense of humor. I guess BB was taken, so I mean, they decided to go with GC, which is okay, which is the one that police use, but still, Orwell would be proud. So another slide from my previous presentation. So People's Daily China was telling how cool it is that in classrooms you can actually now use surveillance cameras to track the progress of students. Are they learning? Are they focusing? And so on. Well, what's new? What happened in 2019? Anyone knows? This happened. Brain scans. So let's take a look at this short video here.
Teachers at this primary school in China know exactly when someone isn't paying attention. These headbands measure each student's level of concentration. The information is then directly sent to the teacher's computer and to parents. China has big plans to become a global leader in artificial intelligence. It has enabled a cashless economy where people make purchases with their faces. A giant network of surveillance cameras with facial recognition helps police monitor citizens. Meanwhile, some schools offer glimpses of what the future of high-tech education in the country might look like. There are robots that analyze students' health and engagement levels. Students wear uniforms with chips that track their locations. There are even surveillance cameras that monitor how often students check their phones or yawn during classes. These gadgets have alarmed Chinese netizens. Now, that's screwed up beyond repair, if you ask me. But luckily, that's just happening in China, right? No. The US, America. So there's a great article, I invite you to read it in full, in the Guardian, published in October this year, about how they use digital surveillance for American kids, or against American kids, I shall say. I will read you some of the quotes from the article. I divided this article into multiple categories so it's easier for you to understand. First of all, the reason: why is anyone doing that? I mean, it's not China. It's not national communism. You're not supposed to spy on people, at least unless you're the government, in the U.S. So the reason is that lawsuits by parents of students who have committed suicide or parents of children who have been cyberbullied are a problem for the schools. So they see this as an easy solution. They track everything students do and then they are off scot-free. I mean, from the perspective of the school's lawyer, even if the kid does commit suicide, their assets are covered because they have this great system and they did everything they could. So it's kind of a lose-lose situation there. Now, what about reaction time? So as the article says, it's not, I've sent this email two days ago. It's, you've sent this email three minutes ago, come to my office, let's talk. That's the speed, that's the latency, that's the reaction time of the system. In Weld County, Colorado, a student emailed a teacher that she heard two boys were about to smoke weed in a bathroom. And the school is proud of this. Within four minutes of sending the email, troops were deployed to the bathroom. Scope. So I mean, it's getting worse and worse. So get ready for that, prepare for that. So it's not just about what they do at school. 24 hours a day, whether students are in their classrooms or their bedrooms, the monitoring is going on. Tech companies are also working with schools to monitor students' web searches and internet usage. And in some cases, track what they are writing in their private social media accounts. Gaggle, which is the name of one of the companies providing the service in the US, also automatically sends students a scolding email every time they use a profanity. How is that for a chilling effect? Now with all that, what's the justification? Some proponents of the school monitoring say that the technology is part of educating today's students in how to be good digital citizens. What does that mean? Well, allegedly, it helps train students for constant surveillance after they graduate.
That's the actual quote from the justification of this system. And here's another quote, from Bill McCullough, a Gaggle spokesperson: take an adult in the workforce; you can't type anything you want in your work email, it's being looked at. So their idea is, you know, let's do this to our kids in schools to prepare them for that. What are the effects? Of course, there are chilling effects. The ACLU said: schools don't post on a bulletin board, "here are the words we are going to be searching for." Of course it forces students to be careful and to self-censor; they might not write or talk about things even if those are not, in fact, being monitored. "The idea that everything students are searching for and everything that they're writing down is going to be monitored can really inhibit growth and self-discovery." That's a quote from Natasha Duarte, a policy analyst at the Center for Democracy and Technology. And finally, it's military technology: in the United Kingdom, school surveillance technology has already been tested for use in counter-terrorism efforts. Again, I don't want to get blown up by terrorists, but I don't like all these safety measures that we have either. Even here at Congress this year, we are starting to have signs saying "don't leave your bags unattended". I'm not sure how I feel about that; I thought it was a safe space here. The ACLU expert I referred to previously said: it's certainly fair to ask to what extent we feel comfortable with technologies first developed for use in war being used against our children. Now let's take a moment to talk about the company name, Gaggle. According to the Merriam-Webster dictionary, "gag" is a verb that means to prevent from exercising freedom of speech or expression. And the other definitions of the verb, to me, only emphasize the non-consensual nature of the interaction between a student, the kids, and the school. They don't have a say in it. They're being gagged, not only technologically but also psychologically. That's unacceptable. Okay, let's talk about something else: end-to-end encryption, or as US Attorney General William Barr called it, "warrant-proof encryption". GCHQ has suggested that tech firms' communication services should be able to surreptitiously add intelligence agents to conversations or group chats. This is still an ongoing discussion, but this is where it is going. So I've been looking at this problem and trying to predict how secure encrypted communications will look in the future, because governments really do have a strong incentive to try to access that kind of communication, because of terrorist content, because of child abuse material. I think as a community we managed to convince them that backdooring the crypto part is not going to work. I mean, technically it would work, but it's not the worst of their ideas. So this is where I think it's actually going: public clients, and by public clients I mean WhatsApp, Facebook and so on, are going to be able to add a third party to your encrypted communication channel without you knowing; it's going to happen in your client software. By the way, Jim Baker, the FBI's general counsel who had been working with William Barr on that proposal, had a change of heart. There's a cool article about it from October this year; you can take a look at it.
And the guy finally understood that what he was trying to do is not the right direction to go. Okay, let's talk about those client apps, taking WhatsApp as an example. If WhatsApp were doing something shady on your phone, you could spot it by rooting your phone, right? That would help, because then you can install background apps that monitor the traffic, monitor the file interactions, take a look inside WhatsApp. Amazing. But you cannot do that: they've had that rule for some time, and it also applies to iPhones, to jailbreaking iPhones. They tend to have these waves where they enforce the rule again; it's been there for years, but they re-enforce it once and again. Okay, so I can't root my phone. What about third-party apps? Nope. That one is a bit newer, it hasn't been there for that long, but if you install a third-party WhatsApp client, you're going to get banned. So the only question for us, the technological nerds here, is: is it going to stay legal to install our beloved secure apps? But I want you to think about the other people, the non-technical people. What are they going to do? How are you going to communicate with them? And, not less importantly, how will they communicate with each other? Let's take a look at another important aspect of everyone's everyday life: watching pornography online. The Australians want to use facial recognition to verify that the people watching porn online really are who, and how old, they say they are. How short-sighted do you have to be to not see how this can go wrong? Those fake emails everyone is getting, "we filmed you watching porn and we filmed your face", are going to turn real if this is actually enforced. Another thing, of course, is online dating. Facebook dating launched in the US this year and rolls out in the EU next year, and I actually have a couple of things to say about it. It does provide you more privacy, which is good: when you opt into Facebook dating, it not only keeps your dating profile somewhat anonymized and limits access to your actual data, it also tweaks the privacy settings of your actual Facebook profile so that you are a bit more private. But there's a catch. In order to opt in to Facebook dating, you have to enable location on your phone; you have to physically confirm your location. So Facebook isn't going to give us privacy for nothing; they want something in return. Not good again. Now, suicide prevention is an important topic, and Facebook is doing their share, and I feel quite okay about that part. That's good. And here is the algorithm from their official spec that's available publicly. Basically, they monitor everything. By the way, people with knowledge of the subject have informed me that even if you do not post the message or the comment, if you write that text and then delete it before hitting send, or rather submit, Facebook still gets that text, and they still run it through this process here. They use a classifier, some neural nets, to try to understand what's happening. And the step before the last step is that it's reviewed by a human, which is the part that I dislike about this idea. Given that "taking action" over here actually means popping up with this.
So the user basically gets this message here. I think it would be okay to have more false positives and not have it reviewed by a human reviewer; that would be better, even though some people are more creeped out by robots reading their stuff than by people reading their stuff. I'm actually one of those guys. Still on Facebook: data reuse. There's this article about a Facebook lawyer who was forced to testify in court. That happened several times this year; this one is from June 2019. And what they basically said is: you have no expectation of privacy. There is no privacy interest, because by sharing with 100 friends, you have published; you have shared with everybody. And then in the article they go on to compare it to a birthday party where you invite a couple of your close friends, like 20 friends, and you have no expectation of privacy because any of those 20 friends could go and tell your stuff to anyone else. I'm not okay with that. Remember, privacy is also about consent, and that's not fucking consent. Okay, let's talk about something more technical: web browsing, specifically JavaScript, the technology that fuels the modern web, from dynamic web pages to tracking. This here is an interesting message that I got when trying to search for some parts on Mouser. It says that JavaScript is disabled, so you can either enable JavaScript or log in. If that's not an admission of what the JavaScript is being used for, then I don't know what is. We also have this article here, and I'm not good with German, but it's funny: it kind of loads, but then it doesn't, so I don't know what the point of that is. They're just screwing with people like me. I used to have to browse with JavaScript disabled to have the web not work for me; now all I have to do is browse from Europe. One of the comics that I read: that is the actual comic, that part at the top. Everything else is trash on my screen. And it's not just that. Open any page on your mobile and your screen is full of garbage, not the actual text that you want to look at. This is interesting here: if you take a closer look at the Washington Post, this is what happens when you open it from Europe. You have this nice blah, blah, blah, and then you can click "agree and continue", and your only other option, if you don't agree to tracking, to giving up your privacy, according to the GDPR, is "back to all options". And if you click "back to all options", what you get is: you can pay to access the content. That may be legal; the Washington Post is a relatively large organization, so they probably know what they're doing here. But it's not ethical at all. Now I'd like to spend the next 10 minutes talking about why I do all of that, why I try to stay private in my everyday life. I tried to convince you at the beginning. Personally, for me, it's care for others. Even though I don't have that much to hide, I like to provide that shelter, that herd immunity, for the vulnerable people that really do. But me hiding stuff, me not disclosing as much as a normal person, does tend to create some curious situations. So, I have a bunch of certificates. This is not the crowd I should advertise that to; I was kind of forced to get them. Anyway, every time after I take an exam, I have to write them a message, because every time I take an exam, I show them my governmental ID, my secondary ID, and they still take my photo and store it.
So obviously, every time after the exam, I write them a polite letter saying: thank you for the exam, please delete my stuff. And they do. But one time they also said "we did", and then I asked: why the hell are my certificates gone, why can't I verify them? And this is what they said: sorry, we misunderstood what you meant, so we deleted your whole account and all your certifications. And the funny part: while we were trying to resolve that, they asked multiple times, "so tell us which certifications those were, because we deleted them all." Like I could just claim any of them. At one point they said, okay, we restored them all. I took a look at my list, took a look at their list, and one was missing. So I told them: nope, look more closely here. Another thing happened to me: I got an SMS. Anyone still get SMS here in the audience? Yeah, about half the people. So I got an SMS. It didn't come from a number; it came from a spoofed alphanumeric sender, "crediton.lv". And what it said, in Latvian, was basically: hello, unfortunately your credit request has been denied, crediton.lv. I have never applied for any kind of credit or credit card in my life, so I was confused. Obviously, what does a normal person do when they get such an SMS? They try to phone the fuckers to understand why the hell you are spamming me, because you cannot reply to that number. So I go to their web page, find their number, and phone them to ask what's up. And they don't pick up; the call doesn't connect, because my outgoing number doesn't exist. It's set to private, I use caller ID barring. So they don't connect, they don't want to talk to me. Okay, the only thing I can do is hop on my bike and ride over to their office. And everything is fine, everything is solved: I arrive there, I show them my phone, they ask for my phone number, I write my phone number on a piece of paper, they take it backstage to some IT guys, and the IT guys come back saying: nope, it's not our system, we didn't send it. And I'm okay with that. I mean, I could easily have sent that myself, right? Or you could have done it, huh? It's fair, someone can spoof that, it happens. I wave goodbye and I'm out. So that's that. A week later, same sender: win a 50-euro shopping voucher if you go into your profile and renew your information. So what do I do? Call them up: doesn't connect. Look at their web page again, take my bike, go there about one hour before closing time: it's not open. Apparently, the opening hours on the web page are the opening hours for the phone, which doesn't work, not for the actual office. So, 10 p.m.: they were there, people were there, I saw them, but they didn't open the door for me. On my way back, I stop at a petrol station, fill up the bike a bit for about one euro, and keep the receipt. Two days later I ride back again. They're open. I fill up the bike a bit, take the receipt, go there, and show them this again. And you know what they told me? "Yeah, we looked more closely and we found your number. Sorry, our bad." I told them: okay, well, it's a bit strange that you don't verify the number; don't you have requirements by law to verify these things? And they replied: yeah, we do, but only if we approve the credit. So you can apply with a bunch of random guys' numbers and get them spammed.
So I thought: okay, fine, that settles the number question, but I have these two euros that I need you to compensate, because, because of you, I rode here and there; you lied to me and I had to make these trips. And they told me to send an email. So I got the email address, went home, and sent them an email with the scans attached. And they replied: please provide your bank account number to transfer the money. I politely and honestly replied: I do not have a bank account number that I can provide to you at this time. And that was that; they never replied. A couple of weeks later, I look at my bank account, and there's the fucking money. Many people have asked me at this point why I haven't looked deeper into that. I'm too afraid of what I'm going to find out; I don't have the slightest clue how they got my account number. Now, from the privacy perspective, there's one more thing you can do. If you still use post, you can use a post office box. That means that if you give someone your address, they cannot abuse it. If I consent to them sending me stuff, they can send me stuff, but they cannot break down my door, because that's not my door, that's my PO box at the post office. I don't know the prices here in Germany; in Latvia they're cheap. I actually have two PO boxes this year, and one box costs 12 euros per year, so it's super cheap. Speaking of Congress, C3Post is also quite good. It's anonymous if you want it to be, and it's right behind the stage, so go send a postcard to someone after the talk. Let's talk briefly about mobile apps. This is Socratic, a mobile app that kids use to talk about homework, to learn. Apple, and also Google by now, have actually created a good system that allows you to granularly control what apps can and cannot do. In this case, the app asks for access to your contacts. And the thing Apple has done is let the developer put whatever text they want into that prompt; the developer cannot change the rest of the dialog, but that explanation string is theirs. So here it says: it's only for chatting, it's only for chatting about homework, don't worry. Naturally, you press "don't allow", and this happens: sorry, we kind of need that. So I hope the day comes when Apple forbids apps like that, apps that completely block your experience just because you haven't given them a permission that should actually be optional. Now, I used this taxi app called Taxify; then they changed the branding to Bolt, and this is how it looks now. I can still use it, but if I want to press the button which you can barely see on the screen, because it's super dim over there, where all my settings were, my name, my phone number, my previous rides, I cannot access any of it unless I have GPS enabled. Why do I need GPS to access my ride history? I have no idea, but that's how they do it now. Still, at least it was functional, until they deleted my account, because I wrote them a GDPR request asking them to explain what's going on and why I can't access my data. They basically said: since you have lost your trust in our service, we will terminate your account starting next month. But hey, I still got to use it for a while. So then I switched to this other taxi app, Yandex, which may or may not be run by Russian special services, I don't know. I was using that, and I took one ride; coincidentally, it was to the airport, to go to Ukraine. Coming back from Ukraine, I turned my phone connection back on, and this is what I got.
Your next ride can happen in 2023. Welcome. Speaking of Yandex, they have a privacy policy, of course; all companies doing business in the EU need to have one now. And it's quite okay: they list the types of data, like location, they explain how they use it, and they explain how you can change your data or withdraw your consent, and that's all fine. But if we scroll down further, we see these kinds of categories, the stated reason is "to improve app performance and quality", and we see that you cannot use the app without giving up that data. And what does that include? The amount of free disk space on Android, the list of installed apps on Android, device technical characteristics like model, manufacturer, operating system, sensor information. Anyone have a camera sensor on their phone? Geez. Okay, back to WhatsApp. I was forced to use WhatsApp because my friends use WhatsApp, and I have all the apps, I'm like that guy. But I don't want to be the guy that shares your phone book with the company, because then you betray your friends. So I never do that, I always click deny. I was using WhatsApp like that, and I want to show you how I found a way to create new conversations in WhatsApp without giving it access to your phone book. So here: I use my regular dialer, I dial the number I want to chat with in WhatsApp, I press the green button, I press the red button, I go to recents, I press the "i" over there, then I hold the message button, then I choose WhatsApp, then the call button, then I call in WhatsApp, then I press the red button, then I go to calls in WhatsApp, then I open up here, and then I press chat. And I'm in the chat. Yeah. So that's how you could create a chat in WhatsApp. And "could", not because they banned it, but because they fixed it: I still don't love WhatsApp, but they actually added a working button for those of us who do not share the address book. So that's something. It doesn't make them perfect, but now I don't have to do that dance, which is cool. I also use these things; do you have them here in Germany? Prepaid anonymous cards, you can just buy them in a shop. Yeah? Okay. I use these a lot, since I don't have any other kind of banking card. And if you notice, it doesn't have a name on it; it just says "term of use one year maximum". So I decided I have to be fair, and when I buy something online, I enter it as it is. So I tried to buy a bunch of stuff leaving the name blank, or with just the placeholder that's on the card. And you get nice errors: Porkbun, a DNS registrar, gives you a kind of technical error, so you can work from there. But with these cards, of course, you can write any name that you want and it just works. Now, if you take a look at this picture here, I took that in Vienna; it's everywhere. I don't like to waste time, so on longer rides I always try to work. I open my laptop, but I cannot work on a bus: they will see my passwords. I don't know what kind of resolution they have, whether it's a 30 fps camera or a 240 fps camera, who the hell knows. So you can't work there. And the private information on my screen, information about my customers, is also at risk. So basically the only place you can still work is an airplane, although I did see one airplane that already has a camera from the cockpit to the cabin, on the door, instead of the usual analog systems that they use.
Luckily, airlines want to save weight, so they are probably not going to install cameras on all seats, even though I've seen news about some low-budget carriers installing cameras in the entertainment systems on every seat. They have actually been confronted about that, and they told everybody that they want to "evaluate" it. They use different wording, but basically they want to track people like we are tracked online: they want to see how we interact with their product. Now, speaking of airports: airplanes are mostly fine, but airports have these things, the body scanners. In some airports it's actually considered a privilege. I was coming here from Riga through the fast track and they referred me to secondary screening, and the position of this thing in the Riga airport, and the fact that I go through there multiple times a month, suggests to me that they only use it for fast-track travelers. It's a feature, not a bug, right? Normally you get the metal detector and a pat-down; here you get this. But you can still opt out, of course, even in the UK. The UK tried to ditch the opt-out; European courts ruled that it's not allowed, so until Brexit happens we're good, including the UK. Around the world it differs. My main concern is not that I'm going to be seen naked; it's the artificial intelligence, the robots taking decisions about me in a non-transparent way. It's going to beep or whatever anyway, so just pat me down. And I'm not even talking about transgender people: with these machines, the first thing the operator has to do is select, I don't know if it's meant to be sex or gender, but they have to select a pictogram of who is going into the scanner. Boarding passes. Boarding passes are cool, except they're totally insecure. Except in the USA, where the barcode does have a signature field where they can sign it; but you know how "secure" they really are in practice. My problem with boarding passes, though, is shopping, especially in Germany. They changed something two years ago, and now you cannot buy a single thing in the airport without showing a boarding pass after you go through security. I hate that. So what I'm going to do is spend this Congress working on an app. I'm going to present it in March at Insomni'hack in Switzerland, where I'll have a talk called "Travel for hackers" about how to travel safely and what you can and cannot take to different countries, and I'm also going to have this app. But you have to promise to only use it for shopping, not for boarding or accessing airport lounges. Using this app, you'll be able to anonymize your boarding pass, point and click, and then you can go shop. Don't use it for bad stuff, or I might get thrown in jail; I don't want that, do we? Another thing: what is secure? Airports, boarding passes, these scanners. Fingerprints are secure, unless you use them as your password, which is dumb, but fingerprints as such are good. So, I think it was 10 years ago when Latvia started to enroll into the ICAO programme for biometric passports. Before that, they just told everybody "we don't need your fingerprints", because there were no biometric passports. Fine. Then that changed to "we only store fingerprints in your passport". And that's an acceptable compromise; I can see how that can improve travel safety, as opposed to the dumb limitations we have on the liquids we can carry. But I have told you the story of how I tried to carry ice through airport security in Belgium: they told me it was a liquid.
We argued with them for about 30 minutes, and then they won, because by then it had become liquid. True story. I gave up and just threw it away. They have their own physics. So, back to passports: "we only store fingerprints in your passport." Okay. But then I somehow learned that, you know, they might be saving them anyway; I talked to some friends at the Ministry of the Interior, and that's what they told me, or rather what someone hinted to me. And what happened then is that there was a rush to create a law on biometric data storage: they wanted to legalize storing a hash in a database. That's their current answer. If you ask them what they do with a fingerprint: "we store a hash of the fingerprint in the database." And a hash is safer. Why? Because if I do stuff that annoys my government, they cannot download my fingerprint from the database and plant it on a dead guy somewhere. A hash is a hash, right? So what I did is use our GDPR-like law; we had had a similar thing in Latvia since 2001, it was basically the same, only the fines were up to 1,000 euros, everything else was the same. I used it to request my data from the government, to take a look at that hash. So they sent me "the hash": WSQ. That's the FBI's Wavelet Scalar Quantization algorithm, an algorithm used to compress black-and-white images. I got two files, left and right finger. And I did manage to find the only resource on the Internet, on GitHub actually, that contains an implementation to open it up. And yeah, my fingerprint is in there. That's not my actual fingerprint on the slide, but you get the idea: the whole damn fingerprint is there, not a hash. That's the repo I used. Let's summarize. What's the status quo right now? I want to talk about multiple aspects. First of all, user demand; as you see, I marked it with a frowny face. Users, and I'm talking about people outside this conference, do not really care about privacy. They don't need private apps, they're okay with being filmed, they're okay with their fingerprints being taken. I think most people would be okay with their palm print being taken, maybe even a blood sample, if it only happens once every four years. So that's problematic. The cookie law, if anyone remembers that: we have it in the EU, we've had it for a couple of years now. Well, it did nothing. It basically meant that every site had to open one more banner and inform us that, hey, cookies are being used. And we did have the DNT header in HTTP. It was a great idea. I took a look at what happened to it, and it basically just quietly died in some discussion group. Maybe we should revive it, because it's perfect: your browser tells the website that you don't want to be tracked, and the website should be mandated not to show you the banner but to actually honor that header. A GDPR for big data? That's actually crap. Big companies currently have ways, both legal and technical, to get around the GDPR, and GDPR enforcement cannot reach them. If something happens that can be proven, the fines are big, and those fines are going to be paid, I hope, but the GDPR can still be improved there. Now, the GDPR in general: that's great. It actually allows us to stand up for our rights: go and ask them what data they are holding on you, go and ask them to delete your data, to change your data. They need to be on their toes and to know that we are looking at what they're doing. But the problem is, all these things are EU only.
The cookie law is one thing, but the GDPR being EU only means we have to make sure that other governments around the world adopt something similar. Surveillance technology is getting worse and worse; we have technology being advertised both as a military tool and as a tool to track your kids and your wife. That's not okay. And for encryption I use a neutral face here, because encryption is good, right? Anyone who knows how, say, AES or a similar algorithm works can take a look and see that it's great: it's unbreakable if you implement it correctly and use the right key sizes. There are two problems: it's not being implemented correctly in some places, and, most importantly, as a user you have no idea whether the app is actually using encryption for that part, or bypassing the encryption entirely, or using something else. That's problematic. Would any of you notice if your favorite chat app sent a copy of your message somewhere, even though the original copy is encrypted? So what do we do? How can we fix it? Let's not talk about the cookie law, that's not that interesting. For user demand and the GDPR, it's the same thing: we have to inform users that the GDPR exists, and first of all we have to tell them that privacy is good. This is how it will help you, this is how it will help your friends, this is how it will help other people online. GDPR for big data: most of the work needs to be done here, so if there are any lobbyists in the room, if you have contacts in the European Parliament, that's where you go in. We need to fix the GDPR so it works better for big data, because big data is "pseudonymized and you cannot do anything with it", right? It's safe. Except it's not. Now, surveillance technology and encryption need to be fixed by us here. We are the only ones with the technical expertise to actually try to fix these things. Privacy, and indeed human rights, are a relatively recent invention; they've been with us for a hundred, maybe two hundred years, which is why, at least in my eyes, it's even more deplorable to see corporations and governments alike hastily eating away at our right to privacy for their own benefit. Privacy shouldn't be a luxury that only the rich and powerful can afford. Privacy is for everyone. Privacy is a fundamental right. And like with all fundamental rights, any encroachment on them needs to be aggressively and decisively terminated. Thank you so much. Thank you, Kirils, for your great talk. Unfortunately we don't have time for questions, but I think there was a lot of content in there, and if you have any questions, contact him; all the details are here. So enjoy the interaction and enjoy the rest of the Congress. Thanks.
|
After the highly successful presentation "Toll of personal privacy in 2018" at Chaos-West 35C3, where I talked about my personal experiences with trying to protect my privacy, this year I return with a completely* different talk that tries to convince the audience that you should care about privacy too! This talk revisits the theme of personal privacy in the digital world, this time centring around the "I've got nothing to hide" argument. A beam of intensive light is shed upon the motivation behind caring about one's privacy. We go in depth into what we can do to stay private and whether we should even try to do it at all. We talk about where we as a global society were able to fix privacy and where we have failed. New topics previously not covered are discussed, such as herd immunity and certification programs. \* 97%+
|
10.5446/53060 (DOI)
|
Alright, welcome everybody. We're about to start the next talk, which is about a topic I personally know very little about, so I'm really excited and looking forward to learning a bit about window managers, which is definitely interesting. I'm very happy to introduce raichoo, who's going to talk about his self-written X11 window manager, his experience implementing it, and what he learned along the way. Please welcome raichoo. Wow, fancy. So hi, I'm raichoo, and this talk is basically terrifying me, because this presentation is given with the software that I've written, which is in an early alpha state, so yay, pretty terrifying situation. Anyway, I'm going to talk a little bit about my experience with X11 and Wayland and about implementing hikari, which is my window manager slash compositor. Another interesting thing is that this topic attracts a lot of people with a lot of opinions; it's kind of weird that people have such strong opinions on it, but maybe I can give you some interesting, maybe even informed, information about what's basically going on. So yeah, this talk is a little bit about hikari, and I'm going to talk a bit more about X11 and Wayland, but first of all I want to tell you why I started doing what I did over the last one and a half years. I wanted to build a window manager, for some reason, and later on a compositor, so I've spent the last one and a half years looking at X11 and Wayland, those two different protocols, roughly spending nine months working with each one. So I've written this whole window manager, which does things like moving your windows around, resizing them, displaying them and all that, and gives you the ability to manage your windows. I've written this thing from scratch, and I was largely inspired by things like cwm and herbstluftwm. I wanted something keyboard driven: I'm a vim user, I want shortcuts and fancy things for everything, and I don't want to use my mouse. So, fast navigation and stuff like that, and I want it to waste very little screen space; I'll show you what I mean by that. This is basically my aesthetic: this is a terminal, it has a one pixel border, and every pixel means something. The white border tells me this window has focus; that's basically all I want to see. And I don't have title bars, which consume a lot of space, except when I do have title bars: I built something so that when I press mod, it shows me the information that I need. Like, this is fish, my shell, it's on the first workspace, and it's in the group "shell". I'm going to talk a little bit about what these groups mean, and as you can see, it tells me: okay, this is the window that has focus.
This is inspired by cwm, which has this concept of groups: you can put windows inside groups and display them independently, and you have groups one to nine. I started using them as workspaces, which kind of defeats the purpose, but I wanted to be able to group windows together in an arbitrary way. So when you open another window, and another one, you can see that when I press mod these frames turn orange: they belong to a group, and I can cycle between the windows inside a group. And there's another thing: I've now started a root shell group, which is a different group, so I can also cycle between the groups. This is something I wanted to have for some reason, and it turned out to fit my workflow very well. Also, I'm not a big fan of tiling, except when I want to arrange my views, so I built something that works like herbstluftwm, where I can tile all these views and skim through them. This is configurable: you can write your layouts in something that kind of resembles JSON; it's UCL, the universal configuration language, which is used by FreeBSD, by the way. So this is basically what drove me to do this. I also wanted minimal dependencies, a very small set of libraries, and I wanted it to be energy efficient; I try to be as energy efficient as possible, because it gives you a lot of time with the computer, battery time increases drastically. And I also wanted to target FreeBSD, because that's the operating system that I'm using. At some point I will support Linux and other operating systems, but right now it's FreeBSD only. So, it has those two implementations: I wrote this thing basically twice, spent twice the time on one thing, why not. So what were these different approaches? We basically have X11, which most of you probably know, and then we have this new thing, the ten-year-old Wayland, and both of these are basically protocols, like TCP: a protocol that describes how your applications say "I want to look like this, I want to receive events in a certain way". And there are implementations, like the Xorg server, and there's also this Wayland thing that I'm going to talk about. But first let's talk a little bit about X, the X Window System, which is the implementation most of you are probably using, except when you're using a recent GNOME, which is Wayland by default. So, the thing looks like this: we have the kernel, we have the kernel mode setting stuff, we have evdev, which gives you keyboard and mouse events and so on, and in the middle we have this X server, which is responsible for rendering all the stuff, and then we have a bunch of clients up there. This could be your terminal, this could be your screen locker, and your window manager, because your window manager is basically just a client of the X server. And this step here is optional: this is what they added maybe 10 or 15 years ago, this compositing thing. So when you press a button, you generate an event, the X server figures out which client it goes to and sends it there, the client does things, figures out how it wants to look, sends this back to the X server, then the X server sends things to the compositor, and
the compositor munches everything together and brings that back to the X server, and the X server displays all of it. So we have a lot of things going on here: a lot of processes involved, and a lot of communication between all of them. So what does a window manager look like? This is the simplest implementation that I've found; it's tinywm, written by Nick Welch, and it fits on a slide. I don't know if you can read that, you shouldn't, but it should give you an impression of how easy it is to get started with a window manager. I basically had my first working implementation of hikari after a week, and I started using it on a daily basis after that, which I think is kind of impressive, because the platform gives you so much; there are so many mechanisms it provides that you basically have everything in the X server and get a lot of stuff for free. So now I want to be able to talk to this X server. I said it's a protocol, and there are different ways to speak to the X server. The old way, the way the old people did it, is called Xlib, and all these API calls are pretty synchronous: you write a request, then you wait, then you read the response, and you do that over and over again. So you can see you waste a lot of time waiting; a lot of applications really spend a long time waiting for the X server to respond, which is kind of annoying. So people came up with XCB, which is the foundation of a lot of things (Xlib is nowadays basically built on top of XCB), and with XCB you can write, write, write, write, then wait for the responses and just consume all of them, which is a lot faster. I went with XCB, and it gave me a much snappier window manager feeling compared to others that were using Xlib, so yeah, I went that way.
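To make the Xlib versus XCB difference concrete, here is a minimal sketch (not hikari source) that interns a few atoms both ways. The atom names are arbitrary and the build flags (roughly `-lX11 -lxcb`) are an assumption; the point is simply that Xlib blocks on every single reply, while XCB queues all the requests first and collects the replies afterwards.

```c
/* Sketch: the same job in the blocking Xlib style and the request-now,
 * reply-later XCB style. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <X11/Xlib.h>
#include <xcb/xcb.h>

static const char *names[] = { "WM_PROTOCOLS", "WM_DELETE_WINDOW", "_NET_WM_NAME" };
enum { N = sizeof(names) / sizeof(names[0]) };

int main(void) {
    /* Xlib: every XInternAtom call is a full round trip to the X server. */
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;
    for (int i = 0; i < N; i++) {
        Atom a = XInternAtom(dpy, names[i], False); /* blocks until the reply is back */
        printf("Xlib: %s = %lu\n", names[i], (unsigned long)a);
    }
    XCloseDisplay(dpy);

    /* XCB: write, write, write... then read all the replies. */
    xcb_connection_t *conn = xcb_connect(NULL, NULL);
    if (xcb_connection_has_error(conn))
        return 1;
    xcb_intern_atom_cookie_t cookies[N];
    for (int i = 0; i < N; i++)
        cookies[i] = xcb_intern_atom(conn, 0, strlen(names[i]), names[i]);
    for (int i = 0; i < N; i++) {
        xcb_intern_atom_reply_t *r = xcb_intern_atom_reply(conn, cookies[i], NULL);
        if (r) {
            printf("XCB:  %s = %u\n", names[i], r->atom);
            free(r);
        }
    }
    xcb_disconnect(conn);
    return 0;
}
```

With three atoms the difference is invisible, but a window manager that queries properties of dozens of windows at startup saves one round trip per request with the second style, which is where the snappier feeling comes from.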
So now I have to pack in a lot of stuff, hopefully I'm not going too fast here. I want to talk about some interesting things that I discovered when working with X. Let's think about how I order windows in a stacking window manager: I basically put them on top of each other and then the X server renders them, so you certainly need some sort of ordering, the order in which the X server renders them, and you want to cycle through them. Now, I have this concept of groups that I showed you, and the X server has no idea about them. It doesn't know what to do when I say "go to the next window in this group", because it doesn't know what groups are. So what I had to do, essentially, is reimplement all this functionality in my window manager and keep it synchronized: the X server now has an ordering of windows, and my window manager has an ordering of windows. And I'm not the only one doing that; basically every modern window manager implementation I looked at does the same thing. So I had to reinvent the wheel, which is a bit annoying. And here's another thing: think about what happens when I want to raise this window. The X server basically has just one giant buffer, and your client just sends some primitives to the X server, which draws them there, all into this one buffer. Maybe you've seen this: if you raise this window, then this portion of the screen needs to get redrawn, so what the X server does is send an expose event to the client of window 2, and that client then generates all these primitives again, draw a line, draw a circle, draw some text over there, and just redraws them. Sometimes you can really see this effect: when your computer is in power management mode, you can watch it redraw itself. I learned to accept that, and it gets better with compositing, but it's still not pretty; it's really something that annoyed me at some point. With modern toolkits they basically draw everything into a pixmap and hand this pixmap over to the X server, saying: here, draw this, but don't touch it, just draw it please. And you can imagine how much traffic you generate with this giant pixmap: at my screen resolution, one frame of the entire screen is basically 10 megabytes; so much for network transparency, that's an interesting thing to think about. So this was one of the annoying things about X that I uncovered. And another thing: this is code from awesome that I saw, I don't know if you can read the comment. Before, I said that the window manager is basically just a client to the X server, and it sometimes has to ask for the keyboard. For example, when I open a view and I want to change the group it's in, I want to be able to type the group name, so I have to grab all the keyboard input, because I want to get all the keyboard events and write them into a buffer. This is something I saw in awesome: awesome is basically begging the X server, "give me that keyboard", and it does that a thousand times, waiting a millisecond in between, so it tries for roughly a second to please get that resource, even though the window manager should be the thing in charge. It's basically begging for resources, and this felt wrong when I wrote the same thing, but it's also what most other window managers seem to do.
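Paraphrased, that retry pattern looks roughly like the following. This is a sketch of the idea, not the actual awesome source: the XCB calls themselves are real, while the specific policy of 1000 attempts with a millisecond pause is the part being illustrated.

```c
/* Sketch: a window manager "begging" the X server for the keyboard. */
#include <stdbool.h>
#include <stdlib.h>
#include <unistd.h>
#include <xcb/xcb.h>

static bool beg_for_keyboard(xcb_connection_t *conn, xcb_window_t root) {
    for (int attempt = 0; attempt < 1000; attempt++) {
        xcb_grab_keyboard_cookie_t cookie = xcb_grab_keyboard(
            conn, 1 /* owner_events */, root, XCB_CURRENT_TIME,
            XCB_GRAB_MODE_ASYNC, XCB_GRAB_MODE_ASYNC);
        xcb_grab_keyboard_reply_t *reply = xcb_grab_keyboard_reply(conn, cookie, NULL);
        if (reply) {
            bool ok = (reply->status == XCB_GRAB_STATUS_SUCCESS);
            free(reply);
            if (ok)
                return true;   /* the server finally handed the keyboard over */
        }
        usleep(1000);          /* wait a millisecond and ask again */
    }
    return false;              /* about a second of begging, still nothing */
}

int main(void) {
    xcb_connection_t *conn = xcb_connect(NULL, NULL);
    if (xcb_connection_has_error(conn))
        return 1;
    xcb_screen_t *screen = xcb_setup_roots_iterator(xcb_get_setup(conn)).data;
    if (beg_for_keyboard(conn, screen->root))
        xcb_ungrab_keyboard(conn, XCB_CURRENT_TIME);
    xcb_disconnect(conn);
    return 0;
}
```

The point is the shape of the code: the client that is supposed to be in charge of keyboard handling still has to ask, and can still be told no.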
That's the problem when you have a middleman. So, as a conclusion of my X implementation: I basically said, okay, wow, it's really easy to come up with a window manager, it takes like a week to get something roughly working. But all these graphical user interfaces have evolved; they basically all do off-screen rendering and then just shove pixmaps around, so maybe there could be something better. You have a gazillion extensions, which is also fun, because at some point you will discover a client that uses an extension you have never heard of before and does something weird, and then you have to look at how all the other window managers deal with it, and it's not pretty. And X is a global namespace: every client, at every point in time, can become a keylogger or a screen recorder and just send your stuff over the wire; a lot of fun with that, believe me. So from a security standpoint, that's not good. And the window manager is a client that has to beg for things like the mouse or keyboard, you duplicate a lot of functionality, and you have ugly screen artifacts. So I was a bit fed up with this and thought: well, there's this new thing called Wayland, why not look at that as well and see how it works. This is the architecture of Wayland: we basically take the entire compositing detour out of the X picture, and now we just have clients. All these clients do their own off-screen rendering into buffers, and the client just says "hey, use this buffer"; we all use shared memory here, so it's not going over the wire or anything, you just write into this buffer and then tell the compositor "please display this". And the compositor takes care of all the input events, so I don't have to beg for my keyboard anymore; the compositor controls it. That also makes things a lot more interesting from a security standpoint, because now you can say: I will deliver this keyboard event only to this client, the others don't see it, and each client thinks it is the only thing that exists in the universe. This is real UI isolation: you cannot build something that records the contents of other clients' windows, which is pretty awesome. And every frame is perfect. This is something that struck me pretty early: Wayland really revolves around the notion of a frame. The compositor decides when to redraw things; it's not "draw a line, draw a circle, draw some text", where in between you could get a half-drawn frame, flickering and screen tearing. That just doesn't happen with Wayland, everything is super smooth; you don't want to go back once you've seen it, it's pretty impressive. And there's also stuff like damage tracking; if you want to read more about how Wayland and Wayland compositors do things, I really encourage you to read that blog post, by a person whose name I'd probably butcher, sorry for that, but it's really interesting stuff. I have to hurry a little bit here. So, how did I write this thing? Obviously I need to be able to speak this Wayland protocol stuff, so I just use wlroots, which is the foundation for Sway, which is basically i3 for Wayland. It's like the 50,000 lines of code you need to write anyway, so I thought, yeah, I don't want to write those, thank you, and I used that instead; it's basically now the foundation for a lot of compositors.
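For a feel of what building on wlroots looks like, here is a structural sketch (again, not hikari source): a compositor on top of it is mostly a collection of wl_listener callbacks hung onto wl_signals such as "new output" or "new surface". The wlroots-specific calls are only hinted at in comments, because their exact signatures change between wlroots versions; the parts spelled out below are plain libwayland-server and should build with `-lwayland-server` (assumed).

```c
/* Sketch: the signal/listener pattern a wlroots compositor is built around. */
#include <stdio.h>
#include <wayland-server-core.h>

struct my_compositor {
    struct wl_display *display;
    struct wl_listener new_output;   /* fires when a monitor shows up */
};

static void handle_new_output(struct wl_listener *listener, void *data) {
    /* In a real compositor `data` would be a struct wlr_output pointer. */
    struct my_compositor *comp = wl_container_of(listener, comp, new_output);
    (void)comp;
    (void)data;
    printf("new output: would pick a mode and start rendering frames\n");
}

int main(void) {
    struct my_compositor comp = { 0 };
    comp.display = wl_display_create();
    comp.new_output.notify = handle_new_output;

    /* With wlroots you would create a backend here (wlr_backend_autocreate,
     * exact arguments depend on the version), hook the listener in with
     *     wl_signal_add(&backend->events.new_output, &comp.new_output);
     * and finally hand control over with wl_display_run(comp.display). */

    /* Stand-in signal so this sketch runs on its own: */
    struct wl_signal demo;
    wl_signal_init(&demo);
    wl_signal_add(&demo, &comp.new_output);
    wl_signal_emit(&demo, NULL);

    wl_display_destroy(comp.display);
    return 0;
}
```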
wlroots is very well written stuff, so you should check it out; it was released after I started working on the X implementation, so no harm done there. And if you want to look at the simplest Wayland compositor you can look at, there's tinywm, which ships with wlroots; it's around a thousand lines of code, which sounds like a lot, but keep in mind that's a compositor, a display server and a window manager, three things in one. Cage is also something interesting to look at if you want another learning resource; it's like a kiosk thing for Wayland, and it also uses wlroots. And basically all of the toolkits that you see here support Wayland out of the box: if your client is written in GTK or Qt or Clutter, whatever that is, I don't know, or SDL, they all have Wayland backends now, so you can transparently switch all these things over without even noticing, which I found pretty neat. Firefox works, Thunderbird works, you just set this environment variable; it's a bit flaky at times, so I'm kind of glad this thing didn't crash, it doesn't crash that often, but it could happen at the worst of times. There's mpv, a video player, and wl-clipboard makes my Neovim happy. And if you want to run X applications, you can do that with Xwayland. So yeah, I had to hurry here a little bit, but basically: it's a lot less complexity and it looks way better; there's a lot of cool stuff going on there. I have roughly the same amount of code here, which I think is pretty neat, because it does so much more: you have more responsibilities, things like screen lockers you have to implement yourself. So now you wonder what kind of programming language I used to implement this, and this probably divides the room into "yay" and "oh", but I did this for good reasons. Like I said before, that's 50,000 lines of C code, and there were other people trying to do that in Rust, and they basically said: okay, it's too hard, we can't do that. This is from the way-cooler compositor, which is basically AwesomeWM for Wayland, and they said: okay, we can't do this, it's too much work. And I don't want to rewrite 50,000 lines of code in Rust; I basically don't have the time for that, even though it would probably be interesting to do. So yeah, I did it in C. And sanitizers: these are a very cool thing to check whether you have issues like double frees or use-after-frees; ASan is pretty cool. I also used a lot of DTrace, I can show you the script later on; it basically keeps track of all the memory allocations that I make, so that I'm not leaking memory. So yeah, that's basically it. If you want to get hold of me: I'm on Mastodon, I'm on Matrix, you can write me an email, or just join our hikari chat room, or get in contact with me at the ECMAP assembly. Thank you. Right on time.
|
In this talk I will outline my journey implementing my X11 window manager `hikari` and the corresponding Wayland compositor shortly after. `hikari` is a stacking window manager/compositor with some tiling capabilities. It is still more or less work in progress and currently targets FreeBSD only but will be ported to Linux and other operating systems supporting Wayland once it has reached some degree of stability and feature completeness. This talk covers: * a brief explanation regarding differences between X and Wayland * some of `hikari`'s design goals and motivation * choice of programming language * an overview of libraries that were used * tools for ensuring code quality and robustness * obstacles * resources that helped me to implement the whole thing
|
10.5446/53069 (DOI)
|
And now we come to the next talk, and you're not here to listen to me, you're here to listen to Naoto Hieda. His talk has the nice title "Algorithm, Diversion", and when I read the abstract I thought, okay, this sounds interesting: it's something about neurodiversity and digital art. But I won't try to explain what it's about, because that's his job. So please welcome him with a big round of applause. Naoto, the stage is yours. Good evening. My name is Naoto Hieda, and my talk title is "Algorithm, Diversion". And this is the outline... no, I usually don't start with an outline. In fact, my slides are made with my own program in Processing, and this is an actual part of the code snippet, so it might crash or behave weirdly, because I haven't tried it at this resolution, but that is part of the performance, or presentation. And I hope that this is a safe space. Of course it is, but I keep saying it, because if I say it, then it's safe now. I'm sure nobody is going to attack anyone else based on race or gender or whatever, but I say it anyway, because this talk is about myself, and it always takes a bit of courage to talk about myself. This is actually part of my code; it's called a Lorenz attractor. It's stuck over there, but basically it's particle behavior based on a few equations (the equations are sketched just below). Although they are fixed equations, because of how the computer calculates, it won't converge, and it makes this amazing shape, or sometimes it gets stuck at the top. I'm starting my talk by talking about autism, because I'm on the autism spectrum, and how I work with art is heavily related to my neurodiversity. This is a picture I took from Wikipedia, and the caption is "a young boy with autism who has arranged his toys in a row", which I really love. This text is amazing to me, and how he arranged the toys is amazing, and he's sleeping; everything is amazing in this photo, I think. I don't know what you know about the autism spectrum. You might think of people who are not good at communication but good at math, though sometimes not; that's probably the common idea. I think it's partly true, but it's not always true, and I found it interesting to work on being autistic, or high-functioning autistic or whatever, and to work with art, because sometimes people think that we don't have emotions. Because art is supposedly about emotions, about triggering others' emotions, autistic people supposedly cannot make good art, which sounds a bit logical, but it's not logical at all: first of all, art is not really about emotions, sometimes it's just shapes, sometimes it's really irritating. Also, we do have emotions; it's just that we lack cognitive empathy, so we have difficulty expressing them. Right now, I don't know if I'm happy to be here, or sad to be here, or nervous, or relaxed. You can just say it; I can just say I'm happy to be here, to be invited, but at the same time I start thinking that I kind of regret accepting this invitation, I'm so nervous; but also I like to be on stage to perform, so that makes it a bit relaxing. This weird thing, I think, really describes how autism is. I realized this when I was taking a dance workshop by the choreographer Maria Hassabi and the gallerist Jan Mot. They had this week-long dance workshop, and we were doing really weird stuff, like 20 of us walking down a street super slowly, and then writing about how we felt. That was the key.
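As an aside on the Lorenz attractor mentioned above: the system is three coupled differential equations, dx/dt = sigma(y - x), dy/dt = x(rho - z) - y, dz/dt = xy - beta*z, and integrating them numerically in finite floating-point steps is exactly the "how the computer calculates" part that keeps the particle circling instead of settling down. Below is a minimal sketch, in C rather than the speaker's Processing, with the classic parameters sigma = 10, rho = 28, beta = 8/3 assumed.

```c
/* Sketch: Euler integration of the Lorenz system; pipe the output to a plotter. */
#include <stdio.h>

int main(void) {
    const double sigma = 10.0, rho = 28.0, beta = 8.0 / 3.0, dt = 0.01;
    double x = 1.0, y = 1.0, z = 1.0;

    for (int i = 0; i < 5000; i++) {
        double dx = sigma * (y - x);
        double dy = x * (rho - z) - y;
        double dz = x * y - beta * z;
        x += dx * dt;
        y += dy * dt;
        z += dz * dt;
        if (i % 100 == 0)
            printf("%8.3f %8.3f %8.3f\n", x, y, z);   /* points along the butterfly */
    }
    return 0;
}
```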
We always had to have a notebook, and we had to write, from our own perspective, how we felt about this experience, which was really hard for me, because I always observe shapes, numbers, or patterns, but it's hard for me to say whether something was comfortable or uncomfortable, or what I learned; even that is so hard. I can say that at 2:50 I started walking and at 3:45 I stopped walking; this exact thing with numbers is really easy for me. In fact, in my diary I have all these numbers, the times when I arrived at specific places, and this has been my habit for the last seven years. So, I was talking about autism, and then, yes, the notebook. It was really hard for me to write from my own perspective, and I looked at someone else's notebook, and he was really good at writing; it wasn't even his own perspective, but a persona he had made up, because he's an actor and he's really good at thinking about this first-person perspective and how he felt through this persona, which was totally impossible for me to do. And then I started to think that something was wrong with my writing or my perception. Interestingly, after that workshop I started working at a neuroscience institution, and people there told me about autism, and I googled it more and more, and I found that I'm on the spectrum. But I don't have a formal diagnosis, so I cannot actually, or shouldn't actually, say it publicly. But then, what's the benefit of having a diagnosis that doesn't really benefit me? I don't get health insurance or support from the government out of it. There is support for low-functioning autistic people, but there are quite a lot of high-functioning autistic people, and for us it's a bit strange: because we can talk like other people, because we can live by ourselves, they think that we are normal. Anyway, I have my problem with my brain, and this is actually a photo of my brain, which is half true and half false, because you cannot really take a photo of your brain unless you open my skull. Well, actually, I took an MRI of my brain and 3D-printed it, I was really fortunate to get the data, and this is a photo of my 3D-printed brain. And I started working on a tapestry piece which is actually based on the pattern of my brain. I want to talk about tapestry, or textile, because it has quite a lot to do with computation if you look at the history: the punch cards of programming came from the Jacquard loom, and those punch cards and patterns later became Fortran, or whatever programming language, on punch cards. This history is really interesting, but for me it's more interesting to look at how we can translate one kind of knowledge into another, beyond the surface. I don't mean that the punch card is only a surface-level connection between these two things, but there must be something deeper between computation and, for example, tapestry. So this is a program I wrote to generate tapestry patterns based on computer vision, some kind of analysis of the shape of my brain, and it automatically generates these patterns, and I did the weaving by hand; it took about a month. And this is not really finished yet, I mean this one piece is finished, but I want to continue working on it and think about what's possible, how I can translate knowledge from computer programming, for example, to weaving, or
vice versa, the other thing, well there are quite a lot of things around programming I'm interested in merging together, and one thing is movement, I like to be on a stage like now, and I like to dance, and if you look at the history there's also something with dance, and let's say notation, not so much programming, but notation that builds, for example, the algorithm, this is Laban notation, made in like the 1920s, and basically what he did is, I always find this figure really funny, but basically the idea is he assigned different symbols to each part of the body, by the way this is from Wikipedia, and so each part of the body has different symbols, and based on the height or directions, you could add some colors or like different filling, and then you can describe the position or the direction of the movement, and with this, like it's like a musical notation, like in a time domain you can have more notations and describe a dance movement or dance piece, but the idea is not to describe like a ballet movement, for example, it's just one way of looking at the movement, and also people, I haven't learned this notation, but people who learned this notation can look at this score and do the movement, which is really interesting because it's not really about archiving the dance movement, but transforming the dance movement to something else, and you can potentially do some kind of, like run some kind of algorithm to morph it and play it again, or like this gives like so many possibilities, but I think so many things have been done by John Cage and these amazing artists, so I wouldn't really propose something new here. The other relationship between movement and computation or geometry is, for example, Oskar Schlemmer. I think artists here, they are really tired of this Bauhaus thing this year, but yeah, I had a project supported, in the frame of the Bauhaus year, by the Goethe-Institut and Bloomberg to work with choreographer Raphael Hillebrand and dancers from the Hong Kong Academy for Performing Arts to create a 21st century version of Oskar Schlemmer, and I have a video. So basically there's a video camera, which actually shows, it's actually shown here, is tracking, well, capturing the dancers, and we're using an algorithm, or the library, called OpenPose to track 10 dancers and visualize it on the screen, and it's funny that like Schlemmer did the same thing without technology, or with analog technology, sticking these sticks on the body, but we did really complex computation to track bodies and show it here, which is not even like augmented, which is like on the flat screen and you can see slightly delayed bodies moving in the screen with the lines. Okay, so those were like my interests, the first one is tapestry, the second one is movement, but also like I'm really interested in just these lines, like Schlemmer was interested back then on a stage, but I'm interested in the pixels or screens and how to show or choreograph these dots and lines. But this itself is not actually a new thing, like in the 21st century we can download these amazing tools for computer graphics and make these things happen, but back in 1982 there was this program called 10 PRINT, which is just one line of code in BASIC. 10 is the line number, and it prints the character at code 205.5 plus a random number, which becomes either a forward slash or a back slash, and then GOTO 10, which is like a classical GOTO loop.
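For readers who want to try the pattern themselves, a recreation along the lines of what the speaker describes next might look roughly like the following p5.js sketch. This is an illustrative approximation, not the speaker's actual code: each grid cell draws one of the two diagonals at random, which is what printing a "/" or "\" character did on the Commodore 64.

// A rough p5.js approximation of the 10 PRINT maze pattern (illustrative only).
const cell = 20; // cell size in pixels, an arbitrary choice

function setup() {
  createCanvas(400, 400);
  background(255);
  stroke(0);
  strokeWeight(2);
  for (let y = 0; y < height; y += cell) {
    for (let x = 0; x < width; x += cell) {
      if (random() < 0.5) {
        line(x, y, x + cell, y + cell); // "\" diagonal
      } else {
        line(x, y + cell, x + cell, y); // "/" diagonal
      }
    }
  }
  noLoop(); // the pattern is drawn once
}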
And if you run it, it shows forward slashes and back slashes just in a random sequence and after a while you can see interesting patterns show up. Well, this is a recreated version with Processing. This is Processing. I use Processing to create these slides and I use this to make videos, or the video that I showed with the 21st century stick dance, I used Processing to make it. Processing was made by Ben Fry and Casey Reas in 2001 and the idea of Processing is to make coding easy. For example, if you want to show a triangle on a screen with OpenGL, like in the C language, you have to write all these lines and this is just a part of the code just to show a triangle. But it makes more sense to have a draw function and have a triangle function. So this was like a revolutionary thing with Processing, that you don't have to think about these crazy things about OpenGL and you can just have triangle, ellipse, circle, rectangle to draw shapes, which actually shows a triangle with the same code. And I started drawing things like grids or particles to make a star field. I'm really obsessed with sine waves and more sine waves. And also I'm interested in using the code with other modalities like I showed with the movement. But for example, like text and oh, sorry, this was a different slide. No. Okay. I was doing this experiment to have a little bit of meditation every day and think about the shape in myself. And I was drawing shapes with handwriting or Processing and at the same time I was doing some movement exercises, which is kind of funny to look at now because I was just recording whatever movements back then. And it was not really made for showing to other people. So it's really funny to see some of these movements. But these movements were meant to describe the same shape as I drew in Processing or with the handwriting and drawing. And then I was also interested in text or poems. And how I see a poem is not always based on text, but often it's based on, like, it evokes shapes or patterns, colors. So I started to draw shapes that were evoked. And the reason I started this is because there was an interesting workshop about machine learning where we were asked to categorize different poems. And it was really hard. Like, you can just categorize them based on the style or whatever emotion it triggers. But I decided to first convert them to a shape and then based on these shapes I can easily categorize them. Is it a circle or is it more of a square or a triangle? And for example, this shape is more like a rectangle or more like a grid. Other ones are more like 3D shapes. I also collaborated with writer and dancer Jenny Harrington to make these kinds of interactive videos with text. Okay, so it was like going in different directions. But now I want to go back a bit to the diversity part about programming. This is P5JS, which is a spin-off of Processing. And it was started by Lauren McCarthy in 2012. And the idea is to make a Processing-like creative coding environment but on the web. Because Processing, you still have to, it's really easy to use, but you still have to download the Processing IDE and then you have to use that one. And you cannot easily share your program unless you take a screenshot or a screen recording of the video to put it on Instagram, for example.
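To make the triangle contrast from above concrete, the whole "show a triangle" example shrinks to a few lines in p5.js. This is an illustrative sketch, not the code from the slides; the same style of calls exists in Processing itself.

// Instead of OpenGL buffer and shader boilerplate, one call draws the shape.
function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(220);
  fill(100, 150, 255);
  // triangle(x1, y1, x2, y2, x3, y3) takes the three corner points directly
  triangle(200, 80, 100, 300, 300, 300);
}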
But if it's on the web, you can just send a link and then everyone can see it or you can either use platforms like open processing where you can just start writing your code with P5JS and it's going to show up in a gallery and people like each other's sketches and even people fork your sketch to make something new based on your sketch. What is interesting about P5JS is its community. I'm going to read it. It's a bit long, but I mean, it's not long text, but this is the longest text in my slide. P5JS community statement. P5JS is a community interested in exploring the creation of art and design with technology. We are community of and in solidarity with people from every gender identity and expression, sexual orientation, race, ethnicity, language, neurotype, size, ability, class, religion, culture, subculture, political opinion, age, skill level, occupation and background. We acknowledge that not everyone has time, financial means or capacity to actively participate, but we recognize and encourage involvement of all kinds. We facilitate and foster access and empowerment. We are all learners. We like these hashtags. No code snobs because we value community over efficiency. New kid love because we all started somewhere. Unassumed core because we don't assume knowledge and black lives matter because of course. This is a bit different direction from processing because processing was still lowering the barrier to start coding for everyone, but still it was more like a white man dominated based on this media art, old media art culture. P5JS is explicitly breaking this thing and assuming that everyone is a learner. For example, if you ask questions on GitHub, this doesn't work. Usually most of the projects, they will just close your issue and they say, just look at the document. But P5JS has a different approach. For example, if people ask something on the issue, they think that that means the documentation is not good enough or they have to point them to the proper documentation and don't just kick them out. That's the inclusion which I like about P5JS. Quite different from the past examples with lines and dots. I really like this example from Processing Community Day, made with P5JS, which I have this... Well, if you go to their website, you can see the real-time version. It's just a particle system. It's the same thing as doing dots and lines, but it shows the emojis of different people. This really makes... I really think that the examples in Processing and P5JS is quite different in this aspect. I said Processing Community Day Basel, but what is Processing Community Day? It started in 2017. Processing has quite long history as a programming language, but actually they only started an official meetup in 2017. That was the first Processing Community Day in Boston. This year, they started a worldwide movement so that people in different communities host their own chapter of Processing Community Day. The nice thing is written here in the PCD, Organizers Kit, is that they are about to celebrate art, code, and diversity around the world. It's not only about celebrating art and code, but it's about art, code, and diversity. I was really interested in doing PCD, bringing that in Tokyo because I participated in Boston and then I really thought that we had to do this in Japan because I'm from Japan. At that time, I was not living in Japan, but I felt that I have to go there and I have to organize this thing. Last February, it happened. 
There were three organizers, Yastel and Ayumu and myself, and we had nearly 150 people coming to the event. We had keynotes. We had workshops. I particularly liked this one, because it's hard to see it on the screen, but it was an introductory workshop in P5JS to draw flower patterns, which is really colorful. Also, we had interesting lightning talks. For example, this one was about teaching grade four kids to write Processing. It is difficult not just because of learning programming, but they don't know how to type to begin with. He explained to us different ways for them to type. Actually, he doesn't really introduce how to code interactive programs, but just starts with these line commands or triangle commands so that you can draw something like a static image on the canvas, and then you can go to more complex examples. We'll have a PCD Tokyo 2020 next February first. If you happen to be in Japan, please check this out. This cover image, we just made it with the amazing artist Reilana. She did this wave texture thing in the square. Takao, he did the tiling. Chinuri, she did the post editing of this thing with the logo. It's really a collaborative thing. It's not like there's a guru in Processing, and everyone follows that person, but it's more about making a community and working together and finding something new. Also, I don't have a slide for this, but I organize a weekly meetup in Cologne right now. I have a Creative Code Cologne, but the acronym is not good, so we call it Creative Code Köln, so CCK, every Thursday. If you are in the area, please check that out. In this PCD 2020, we will have a new session for live coding. Live coding can be interpreted in different ways, but for us, it's performance. To be on the stage and write a program and perform at the same time. For example, this is a photo of Codie. I think it's in New York City, and they make visuals and sound together at the same time, manipulating the code on the stage. This live coding is also a really interesting community, because it's still a really small community, and it's really widespread across the world, and there's no hub right now. Which means all the artists really know each other, they respect each other, and that makes a really diverse community, which I really like. And I think I still have quite a lot of time, and I want to introduce you to my website. This is an archive of sketches I'm making, and recently I started to write about Hydra. It's an online book; I just started it as an article. Hydra is a live coding environment, web-based, and you can make visuals really quickly. And if you open the Hydra interface, you can start with the example, it shows up with an example, and then also you can just start from scratch, a simple oscillator, and you can add modulation, and you can chain them, and it goes crazier and crazier. And this is really nice, because I'm just now live coding and performing, and it's really nice to do it on the fly, and it goes crazier and crazier, but because I'm interested in algorithms, I started to think about how to make this more systematic, not just randomly finding crazy patterns, but how can I make a more theoretical understanding of this language, because this is really interesting. And by the way, Hydra is made by Olivia Jack, she's a really amazing artist from Colombia, sometimes she's in Berlin, so you might happen to meet her.
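For readers who have not seen Hydra, a chain of the kind described here might look something like the following. This is an illustrative sketch, not code taken from the talk or the online book: you start from a plain oscillator and keep piling transforms onto it until it gets, as the speaker says, crazier and crazier.

// Hydra: start with a simple oscillator, then add modulation and keep chaining.
osc(10, 0.1, 0.8)            // base oscillator: frequency, sync, color offset
  .rotate(0.4)               // rotate the stripes
  .modulate(noise(3), 0.2)   // warp them with a noise field
  .kaleid(4)                 // mirror into a four-way kaleidoscope
  .out()                     // send the result to the output buffer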
So the idea of this book is to start with basic textures and discuss different filters, and how to make it systematic way, or understanding what is actually happening in the code, or how is it these code interpreted, and why sometimes things work, why, this is my interest. My interest is often not about making pretty visuals, which is maybe not, which might be the reason why I'm not the great artist, that I'm not really focusing on the appearance, but I'm more interested in what is happening behind. But it's interesting to think about, for example, if you start with square, oops, yeah, it's okay, you can repeat the pattern, and you can layer them with a bit of offset. I actually don't know. You can make these patterns. And then what I proposed in my book is extending this idea to make RGB pixel pattern, and also chaining this with another oscillator to make it really look like low resolution RGB LCD. So I just paste it and run it. It works. I can add some animations, adding here. So it's also like a cookbook that you can just take this snippets and modify it. I try to explain what's behind Hydra, but if you're interested, please look at this online book. And I also have some sketches that are maybe not this one. This is made with P5JS, and I took a code from P code, and it's made by Akiyo Kubota and Yosuke Hayashi. And basically this little line of code is evaluated into musical notes, which is kind of like really like an old synthesizer. But I added P5JS reactive visual, and you can do live code with this as well. It sounds really broken, but it's actually working properly. This is also a fun part of live code, because it's not always about success. You get some weird results, and that might lead to a new discovery. But in this case of Hydra, I was writing this online book to compensate this try-on error part by analyzing it. And I think both has to happen at the same time. Sometimes you need to really analyze it, but sometimes you just have to give it a try and see how it goes. Okay, so next I want to go to another version of Hydra, which is also made by Olivia. It's called Pixeljam. It's really amazing. I hope it runs here because it needs online connection. It's starting. Basically Pixeljam is a collaborative live coding environment, online live coding environment. You can add more things. Basically it's like a chat. You type your name. And I asked people to join around this time, but no one's here. I should say hello from... So basically the idea is someone modifies the buffer here and then someone else edits here. I don't know what would be interesting. I can add colors or... Oh, there's someone. What? I don't know these guys, but they just came at the right time. That's great. I don't know how to continue this talk because I was thinking about just starting to live code. I think it's really interesting to live code and talk about it. Oh, there's someone. That's great. Well, it's quite a lot of people. So maybe I can just let them play and I can talk through. This is an amazing idea because live coding on the stage is already amazing because you see people coding and it's more intimate because you get the process and you just see what's happening behind. This is so amazing because you don't have to go through the live coding performance. You just have to sit on a couch at home and then you can code with others and it becomes really amazing performance. I like this idea. I don't know. Probably some of you are doing this, but you can just modify the code and you can play with it. 
Just add layer. And again, shout out to Olivia, Jack, because she made this and she's maintaining all these things. She's just amazing. But this is not really crazy yet. Oh, no. But they're not like new people, right? They know how to do it. If you don't know the syntax, it's so hard to write. Okay, now I'm really... This is actually... I think this is really amazing performance or talk because now I don't know what's happening, but it's just amazing. But that's something I wrote before. I think someone took from the history and wrote it. Yeah, I don't know. Is it possible to do the Q&A questions simultaneously or is it so distracting? Or I just like to be freestyle, but I think this place, I feel there's a bit of a structure that doesn't allow me to freestyle too much. Or I can just leave it like this and move on to the Q&A. I think that works. Yeah, so big round of applause. Thank you so much. That's when you do live demo sings, which people can join in. This will happen at Congress. People will join in. They will find your code, don't copy, they'll play around with it. So you still have to learn. Yeah, there's a lot of errors. So now you have the chance to ask a question. We still have a bit of time. If you want to ask a question, please line up at the microphones. We have three of them, so you have the choice. We have somebody at mic one. Please start. Well, thank you so much for your talk. It's really, really inspiring. A very small question, and that is, how does that whole thing relate to the demo scene? Is that like the 2019 version of the 80s demo scene, or is that something completely new? Oh, like this live coding thing? The live coding as well as kind of the combination of visual coding and sound and all these things. I mean, this is nothing new. People are doing this quite, like, this has really long history, and I think I really have to learn it. But I think it's just, oh, it's really blank. We just have to keep reinventing because as soon as it, there's a saturation. Like, this is, right now, it's really nice, but maybe after 10 years, like, only me and Olivia is just doing this or something like that. And then we become kind of grew, and it became really hard for other people to join this community, for example. And I think it's, we have to keep rebuilding and also noticing who's in the community, and yeah, like, it has to, like, this metabolism or, yes, it has to keep the community really healthy, which is, I think, really interesting and important. I don't know if that answers, but that's what I think about this P5JS or processing community day or this live coding scene that is really interesting about, and it's not only about the aesthetics or the audio-visual thing. Okay, question from mic number two. Yes, did you ever try to work with translation from dance notations like the Laban Square notations and put them back into code, into these live coding environments and then somehow create a feedback loop between this and maybe actual physical movement? Not specifically with Laban Notations, but I think that's really interesting idea. But I always, like, I'm always a bit careful about doing this feedback loop because as a concept, it's really interesting, like, adding different things, but as soon as it gets really, like, so many components and you don't know what's affecting what, it's just become chaotic. 
And sometimes I just, then I was doing, like, machine learning to understand movement and how to relate it to visuals, and these things I've tried a bit, but then I started to think that my brain is more interesting than machine learning. So why not just take one movement or take a shape and relate it to something else. And also I have a new project, an upcoming web residency, so I'm going to reside on the web to create an online archive. So it's a continuation of my archive, but making it a bit more, thinking about curating and doing an exhibition of my work online and then maybe do another exhibition of the exhibition, or an exhibition of the exhibition of the exhibition. And breaking the current politics around galleries, museums and how people sell artworks, like, in a physical way. Sorry, I think it went somewhere else. That's fine. We have a question from the internet. Please. Is this working? What do you think is the easiest way to get started and to get ideas for programming something graphical with Processing? I think that's a really interesting question. Yes, for me, I think I'm not a good example because I studied engineering and I already knew how to code. But I think it depends on the level. If you already know programming, then it's easy to just look at examples in Processing. If you download Processing, it comes with examples, like a bunch of nice examples, and you can mix them, play with them and understand what you can do. But if you're new to programming, there's a really amazing YouTube channel called The Coding Train by Daniel Shiffman, and I recommend you to watch this YouTube channel. Microphone number one. Thanks for the cool talk and the cool tools that you introduced. I was playing around with Hydra when you told us about it, so I missed the rest of the talk, but I have quite a stupid question. When you 3D printed your brain, what color did you choose? That's super important, actually, because I didn't really choose the color, because I was working at the university and I just sent the G-code to whatever 3D printer was available, and it was light blue. And it turned out that recently I do my nails and that's light blue, and I like this color now, and I also like my brain, and my printed brain. We have one more question from the internet. Did you manage to find other autistic people with similar interests through your art? Sorry, can you say it again? Did you manage to find other people on the autism spectrum with similar interests through your art? Yes, I think this was really important for me because at first when I noticed that I'm autistic, I was really discouraged. Maybe I shouldn't do art because I'm good at programming, but maybe this art thing is not my thing. But I met an artist who is working with Butoh movements and autistic Butoh movements, which I don't really understand, but she's really, like, positively analyzing, understanding the neurodiversity. I still don't know how she's managing to relate these arts and neurodiversity, but I learned that it's possible. At first I was really having a hard time, like how to relate it. At first I was interested in EEG, like brain waves, and somehow relating this because it's brain signals, and I can do media art controlled with brain waves, and blah, blah, blah. And then at one point I thought, okay, I should just do meditation and think about shapes and movements, and this is already an interesting way of connecting this neurodiversity and art or digital art.
So yes, there are not many artists, but for example Erin Manning, she's teaching at Concordia University in Montreal, her books describe art and autism, and I highly recommend her books. Okay, now I have a question myself. That's a nice thing, I also have a chance to get a question out. Have you thought about doing something like this as an art installation at Congress, so that, with a short introduction or something, people can play around with it during Congress? I mean we have a lot of art installations which are programmable in a way, and people love them. Yes, sorry, it was not refreshing, so I restarted it. Yes, I'd love to do it, and please invite me. Yes, right now my portfolio is really scattered, and I honestly don't know what I'm doing, but this is what I made before, so it's not this person's creation, this is my code. But this is actually the important part, like remixing someone else's code and doing something else. Sorry, going back to the question, yes, I would love to do something. This year, it's the first time here, and actually it was really short notice. Like yesterday I was asked to give a talk, so yeah, I was not really prepared, or maybe like tomorrow, maybe I can show something somewhere. Yeah, like maybe it's possible, everything is possible. And we have another question at microphone number two. Yeah, well, during your talk, I saw that you have a lot of interest in dance movement and in creating these really weird visuals. I came up with the thought of how would it be to combine both those skills with a green screen, which weathermen use, you know, how would that be? If you could block yourself out, or if someone else could, well, if you wear green, you could do interactive programming and just green appears on the screen, you're away, or I mean you're blocked out, or you're on screen. Just an idea, my brain came up with that. Yes, thank you for that. I think I've done something similar, but it was not, we didn't have enough time to explore it, but I'd love to work on it because I think I want to work more on the physicality of, and oh, what? Okay, okay, sorry. Yeah, I want to work with physicality and also with digital things. I think this is what you said. It's not myself, but I was working with a dancer and he was wearing something green and hiding inside the digital world. Yeah, I think there are so many ways. I would say the next thing I really want to do is with my nails. I really want to have generative pattern nails, but I don't know how, what technology I need, I really don't know. Technology, okay, I just need a small display or something, but that's not the point and I have to think about it. Thank you. Thanks for the suggestion. We have another question at microphone number one. Thank you, and thank you for your talk. I was just interested, I mean, you mentioned Butoh and you talked about live coding and I would really appreciate if you could think about how you would draw a connection between Butoh as a form of dance and live coding. There's actually another artist, Joana Chicau. She's working exactly on Butoh, well, like she studied Butoh and she performs with her body and also live coding. And this one also I really don't know much about the concept, but there are people working on it and maybe one day I'm also interested in working on this topic.
Butoh itself is like, I'm afraid to touch this topic because it seems like I have to study a lot about Butoh, like about the history and who's doing what and how I interpret Butoh, because I think there are too many artists who don't really know much about Butoh, but they pretend that they know Butoh, and I think the real Butoh dancers are kind of tired of this. But I'm really interested in the idea of Butoh, for example, imagining a fish inside the body and moving it, moving with it. Which is actually kind of related to the meditation idea that I had, to think about the shape and visualize it, but not always with my movement. Okay, thank you very much. We are running out of time, so we have to end this here. But thanks for your talk. Thanks for these amazing impressions. I hope we'll see more from your art at Congress, perhaps this time or next time. There's always a next time. And also thanks for patiently answering all the questions. And please, another big round of applause for Naoto. Thank you. Thank you.
|
Before media art emerged, traditional art and dance were already applying algorithms to make sophisticated patterns in their textures or movements. Hieda researches the use of algorithms through the creation of media installations and dialogue with artists, dancers, choreographers and musicians. He also presents his current interest in machine learning and art, which potentially excludes (or is already excluding) some populations due to the dataset and modality.
|
10.5446/53072 (DOI)
|
The next talk is by David Graeber and he's an author, activist and anthropologist. He will be speaking about his talk, From Managerial Feudalism to the Revolt of the Caring Class. Please give him a great round of applause and welcome him to the stage. Hello. Hi. That's great to be here. I've been in a very bad mood this last week owing to the results of the election in the UK. I've been thinking very hard about what happened and how to maintain hope. Ah, there we go. Good, good. I don't usually use visual aids, but I actually assembled them. And the thing I want to talk about a little bit is what seems to be happening in the world politically, that we have results like what just happened in the UK, and why there is nonetheless reason for hope, which I really think there is. In a way, this is very much a blip. But there's a strategic lesson to be learned, I think, speaking as someone who's been involved in attempts to transform the world, at least for the last 20 years since I was involved in the global justice movement. I think that there is a real lack of strategic understanding that there are vast shifts happening in the world in terms of central class dynamics, shifts that the populist right is taking advantage of and the left is really being caught flat-footed on. So I want to make a case about what seems to be going wrong and what we could do about it. First of all, in terms of despairing, I was very much at the point of despairing. So many people that I know put so much work into trying to turn the situation around. There seemed to be a genuine possibility of a broad social transformation in England, and when we got the results, there was a kind of sense of shock. But actually, if you look at the breakdown of the vote, for example, it doesn't look too great for the right in the long run. Basically the younger you are, the more determined you are to kick the Tories out. The core, actually I've never seen numbers quite like this. The electoral base of the right wing is almost exclusively old. And the older you are, the more likely you are to vote Conservative, which is really kind of amazing because it means that the electoral base of the right is literally dying off, a process which they're actually expediting by defunding healthcare in every way possible. And normally you'd say, oh, yes, so what? As people get older, they become more conservative. But there's every reason to think that that's not actually happening this time around. Especially because traditionally people who either had been apathetic or had voted for the left who eventually end up voting for the right do so at the point when they get a mortgage or when they get a secure job with room for promotion and therefore feel they have a stake in the system. Well that's precisely what's not happening to this new generation. So if that's the case, the right wing is actually, in the long run, in real trouble. And to show you just how remarkable the situation is, someone put together an electoral map of the UK showing what it would look like if only people over 65 voted and what it would look like if only people under 25 voted. Here's the first one, blue is Tory. If only people over 65 voted, I believe there would be four or five Labour MPs but otherwise entirely Conservative. Now here's the map if only people under 25 voted. There would be no Tory MPs at all. There might be a few Liberal Dems and Welsh candidates and Scottish ones. And in fact this is a relatively recent phenomenon.
If you look at the divergence, it really is just the last few years it started to look like that. So something has happened that almost all young people coming in are voting not just for the left but for the radical left. I mean Corbyn ran on a platform that just two or three years before would have been considered completely insane and it's falling off the political spectrum altogether. Yet the vast majority of young people voted for it. The problem is that in a situation like this, the swing voters are the sort of middle aged people and for some reason middle aged people broke right. The question is why did that happen? And I've been trying to figure that out. Now in order to do so I think we need to really think hard about what has been happening to social class relations. And the conclusion that I came to is that essentially the left is applying an outdated paradigm. They're still thinking in terms of bosses and workers in a kind of old fashioned industrial sense where what's really going on is that for most people the key class opposition is caregivers versus managers. And essentially leftist parties are trying to represent both sides at the same time but they're really dominated by the latter. Now I'm going to go through some basic political economy stuff in way of background. This is a key statistic which is the kind of thing we were looking at when we first started talking about the 99% and the 1% at the beginning of Occupy Wall Street. Basically until the mid-70s there was a sort of understanding between 1945 and 1975 say. There was an understanding that as productivity increases wages will go up too. And they largely went up together. This only takes it from 1960 but it goes back to the 40s. More productivity goes up. A cut of that went to the workers. Around 1975 or so it really splits. And since then if you see what's going on here productivity keeps going up and up and up and up whereas wages remain flat. So the question is what happens to all that money from the increased productivity? Basically it goes to 1% of the population and that's what we were talking about when we talked about the 1%. The other point which was key to the notion of 99% and 1% when we developed that was that the 1% are also the people who make all the political campaign contributions. These statistics are from America which has an unusually corrupt system but pretty much all of them and bribery is basically legal in America. But essentially it's the same people who are making all the campaign contributions who have collected all of the profits from increased productivity, all the increased wealth. And essentially they're the people who manage to turn their wealth into power and their power back into wealth. So who are these people and how does this relate to changes in the workforce? Well the interesting thing that I discovered when I started looking into this is that the rhetoric we use to describe the changes in class structure since the 70s is really deceptive because really since the 80s everybody's been talking about the service economy. We're shifting from an industrial to a service economy. And the image that people have is that we've all gone from being factory workers to serving each other lattes and pressing each other's trousers and so forth. But actually if you look at the actual numbers of people in retail, people who are actually serving food, I don't have a detailed breakdown here, but they remain pretty much constant. 
And in fact I've seen figures going back 150 years which show that it's pretty much 15% of the population that does that sort of thing. It has been for over a century. It doesn't really change. Goes up and down a little bit. But basically the amount of people who are actually providing services, haircuts, things like that is pretty much the same as it's always been. What's actually happened is that you've had a growth of two areas. One is providing what I would call caregiving work. And I would include education and health, but basically taking care of other people in one way or another. The statistics you have to look at education and health because they don't really have a category of caregiving in economic statistics. On the other hand you have administration. And the number of people who are doing clerical administrative and supervisory work has gone up enormously. To some degree, according to some accounts, it's gone up from maybe 20% of the population in say UK or America in 1900 to 40, 50, 60%. I mean even a majority of workers. Now the interesting thing about that is that huge numbers of those people seem to be convinced they really aren't doing anything. And that essentially if their jobs didn't exist it would make no difference at all. It's almost as if they were just making up jobs and offices to keep people busy. And this was the theme of a book I wrote on bullshit jobs. And just to describe the genesis of that book, essentially I don't actually myself come from a professional background. So as a professor I constantly meet people, sort of spouses of my colleagues, the sort of people you meet when you're socializing with people with professional backgrounds. I keep running into people at parties and saying, well who work in offices and say, well I'm an anthropologist, right? I keep asking, well what do you actually do? I mean what is a person who is a management consultant actually do all day? And very often they will say, well not much. Or you ask people to say, I'm an anthropologist, what do you do? And they'll say, well nothing really. And you think they're just being modest, you know? So you kind of interrogate them, a few drinks later, they admit that actually they meant that literally. They actually do nothing all day. They sit around and they adjust their Facebook profiles, they play computer games. Sometimes they'll take a couple calls a day. Sometimes they'll take a couple calls a week. Sometimes they're just there in case something goes wrong. Sometimes they just don't do anything at all. And you ask, well does your supervisor know this? And they say, yeah I often wonder. I think they do. So I began to wonder how many people are there like this? Is this something, some weird coincidence that I just happen to run into people like this all the time? What section of the workforce is actually doing nothing all day? So I wrote a little article. I had a friend who was starting a radical magazine said, can you write something provocative, you know, something you'd never be able to get published elsewhere. So I wrote a little piece called On the Phenomenon of Bullshit Jobs. I suggested that back in the 30s, Keynes wrote this famous essay predicting that by around now we would all be working 15 hour weeks because automation would get rid of most manual labor. And if you look at the jobs that existed in the 30s, that's true. So I said, well maybe what's happened is the reason we're not working 15 hour weeks is they just made up bullshit jobs just to keep us all working. 
And I wrote this piece and this kind of a joke, right? Within a week this thing had been translated into 15 different languages. It was circulating around the world because the server kept crashing, it was getting millions and millions of hits. I was like, oh my God, you mean it's true? And eventually someone did a survey, you gov, I think, and they discovered that of people in the UK, 37% agreed that if their job didn't exist, either would make no difference whatsoever or the world might be a slightly better place. I thought about that, like what must that do to the human soul? Can you imagine that? Waking up every morning and going to work thinking that you're doing absolutely nothing. And wonder people are angry and depressed. And I thought about it and it explains a lot of social phenomena that if people were just pretending to work all day. And it actually really touched me and it's strange because I come from a working class background myself so you'd think that, oh great, so lots of people are paid to do nothing all day and get good salaries like my heart bleeds. But actually if you think about it, it's actually a horrible situation because as someone who has had a real job knows, the very, very worst part of any real job is when you finish the job but you have to keep working because your boss will get mad, you know, you have to pretend to work because it's somebody else's time, it's a very strange metaphysical notion we have in our society that someone else can own your time. You know, so since you're on the clock you have to keep working or pretend to be, make up something to look busy, well apparently at least a third of people in our society, that's all they do. Their entire job consists of just looking busy to make somebody else happy. That must be horrible. So it made a lot of political sense. Why is it that people seem to resent teachers or auto workers? The 2008 crash, the people who really had to take a hit were teachers and auto workers and there was a lot of people saying, well these guys are making $25 an hour, you know? Well yeah, they're providing useful service, they're making cars, you're American, you're supposed to like cars. Cars is what makes you what you are if you're American. How would they resent auto workers? And I realize that it only makes sense if there's huge proportions of the population who aren't doing anything and who are totally miserable and are basically saying like, yeah, but you get to teach kids, you get to make stuff, you get to make cars, and then you want vacations too? That's not fair, you know? It's almost as if the suffering that you experience doing nothing all day is itself a sort of validation of, it's like this kind of hair shirt that makes you, justifies your salary, whereas people, and I actually hear people saying this logic all the time, that well, teachers, you know, I mean they get to teach kids, you don't want people paying them too much, you don't want people who are just interested in money taking care of our kids, do we? Which is odd, because you never hear people say, you never want greedy people, people who are just interested in money taking care of our money, so therefore you shouldn't pay bankers so much, though you'd think that would be a more serious problem, right? Yeah, so there's this idea that if you're doing something that actually serves a purpose, somehow that should be enough, you shouldn't get a lot of money for it. 
Alright, so as a result of this, there is actually an inverse relationship, and I don't have actual numbers for this, but there's actually an inverse relationship, and I have seen economic confirmation of this, between how socially beneficial your work is, how obviously your work benefits other people, and how much you get paid. I mean there's a few exceptions, like doctors, which everybody talks about, but generally speaking the more useful your work, the less they'll pay you for it. Now this is obviously a big problem already, but there's every reason to believe that the problem is actually getting worse, and one of the fascinating things I discovered when I started looking at the economic statistics is that if you look at jobs that actually are useful, and let's again look at caregiving, remember the big growth in jobs over the last 30 years has been in two areas, which are sort of collapsed in the term service, but are really actually totally different. One is the sort of administrative clerical and supervisory work, and the other is the actual caregiving labor, the work where you're actually helping people in some way. So education and health are the two areas which show up on the statistics. Okay, if you look at the statistics you discover that productivity in manufacturing, as we all know, is going way up. Productivity in certain other areas, wholesale business services are going up. Whatever productivity in education, health, and what's this other services, basically caregiving in general, insofar as it shows up on the charts, productivity is actually going down. Why is that? That's really interesting. I mean, we'll talk in a moment about what productivity actually even means in this context, but here's a suggestion as to why. This is the growth of physicians on the bottom versus the growth of actual medical administrators in the United States since 1970. That's fairly impressive looking graph there. Basically what that sort of giant mountain there is what called a bullshit sector. There's absolutely no reason why you'd actually need that many people to administer doctors. And actually the real effect of having all those people is to make the doctors and the nurses less efficient rather than more because I know this perfectly well from education because I'm a professor, that's what I do for a living. The amount of actual administrative paperwork you have to do actually increases with a number of administrators. Over the last 30 or 40 years, something similar has happened. It isn't quite as bad as this, but something very similar has happened in America, in universities, that the number of professors has doubled, but the number of actual administrators has gone up by 240, 300%. Suddenly you have twice as many administrators per professors as you had before. You would think that would mean that professors have to do less administration because you have more administrators. Exactly the opposite is the case. More and more of your time is taken up by administration. Why is that? The major reason is because the way it works is if you're hired as executive vice provost or assistant dean or something like that, some big shot administrative position at a British or American university, well, you want to feel like an executive. And they give these guys these giant six-figure salaries, they treat them like they're an executive. So if you're an executive, of course, you have to have a minor army of flunkies of assistance to make yourself feel important. 
The problem is they give these guys five or six assistants, but then they have to figure out what those five or six assistants are actually going to do, which usually turns out to be making up work for me, right, the professor. So suddenly I have to do a time allocation study. I have to do, you know, learning outcome assessments where I describe what the difference between the undergraduate and the graduate section of the same course is going to be. Basically completely pointless stuff that nobody had to do 30 years ago and that makes no difference at all, just to justify the existence of this kind of mountain of administrators and give them something to do all day. Now the interesting result of that is that, and this is where this sort of stuff comes in, it's actually, the numbers are there, but it's very, very difficult to interpret. So I had to actually get an economist friend to sort of go through all this with me and confirm that what I thought was happening was actually happening. Essentially what's going on is, just as in manufacturing, digitization is being employed to make it much more efficient. Productivity goes up. The number of workers goes down. The wages are actually going way up in manufacturing, but it doesn't really make a dent in profits because there are so few workers. So okay, that we kind of all know about. On the other hand, in the caring sector the exact opposite is happening. Digitization is being used as an excuse to lower productivity so as to justify the existence of this army of administrators. And if you think about it, you know, basically, in order to translate a qualitative outcome into a form that a computer can even understand, that requires a large amount of human labor. That's why I have to do the learning outcome studies and the time allocation stuff, right? But really ultimately that's to justify the existence of this giant army of administrators. Now as a result of that, you need to have actually more people working in those sectors to produce the same outcome. These are becoming less and less productive. More and more of your time has to be spent. Oh yes, this is what the average company now looks like. More and more of your time ends up being spent sort of making the administrators happy and giving them an excuse for their existence. This is a breakdown I saw in a report about American office workers where they compared 2015 and 2016 and said, you know, in 2015 only 46% of their time was spent actually doing their job. That declined by 7% in one year to 39%. That's got to be some kind of statistical anomaly, because if that were actually true, in about a decade and a half nobody would be doing any work at all. But it gives you an idea of what's happening. So if productivity is going down, these people are just sort of working all the time to satisfy the administrators. So the creation of bullshit jobs essentially creates the bullshitization of real jobs. There's both a squeeze on profits and wages. More and more money is going to pay the administrators. So you need to hire more and more people. So what do you get? Well, if you look around the world, where is labor action happening? Basically you have teacher strikes all over America. You have professor strikes in the UK. You have care home workers, I believe in France; they had nursing home workers on strike for the first time ever. Nurses' strikes all over the world. Basically caregivers are at the sort of cutting edge of industrial action.
The problem, of course, and this is the problem for the left, is that the administrators are the basic class enemy of the nurses, and I believe in New Zealand the nurses actually wrote a very clear manifesto stating this. They said, you know, the problem we have is that there are all of these hospital administrators, these guys, not only are they taking all the money so we haven't got a raise in 20 years, they give us so much paperwork we can't take care of our patients. So that is the sort of class enemy of what I call the caring classes. The problem for the left is that often those guys are in the same union and they're certainly in the same political party. Thomas Frank wrote a book called Listen, Liberal where he documented what a lot of us had kind of sensed intuitively for some time, that what used to be left-wing parties, essentially the Clintonite Democrats, the Blairite Labour Party, you can talk about people like Macron, Trudeau, all of these guys are essentially at the head of parties that used to be parties based in labour unions and in the working classes, and by extension the caring classes as I call them, but that have shifted to essentially be the parties of the professional, I mean the professional managerial classes. So essentially they are the representatives of that giant mountain of administrators, that is their core base. I even caught a quote from Obama where he pretty much admitted it, where he said, well, people ask me why we don't have a single-payer health plan in America, wouldn't that be simpler, wouldn't that be more efficient? And he said, you know, well, yeah, I guess it would, but that's kind of the problem. We have at the moment, what is it, two, three million people working for Kaiser, Blue Cross, Blue Shield, all these insurance companies, what are we going to do with those guys if we have an efficient system? So essentially he admitted that it is intentional policy to maintain the marketization of health in America because it's less efficient and allows them to maintain a bunch of paper pushers in offices doing completely unnecessary work who are essentially the core base of the Democratic Party, I mean those guys. They don't really care if they shut down auto plants, do they? In fact, they seem to take a certain glee in it, they say, well, you know, the economy is changing, you just got to deal with it. But the moment those guys in the offices who are doing nothing are threatened, the political parties leap into action and get all excited. All right, so if you look at what happened in England, while it's pretty clear that the Conservatives won because they maneuvered the left into identifying itself with the professional managerial classes, there is a split between the sort of labor union base, which is increasingly unions representing very militant carers of one kind or another, and the professionals, managerials, and the administrators, both of whom are supposedly represented by the same party. Now, Brexit was a perfect issue to sort of make the bureaucrats and the administrators and the professionals into the class enemy. Now it's very ironic because of course in the long run, the people who are really going to benefit from Brexit are precisely lawyers, right, because they get to rewrite everything in England. However, this is not how it was represented. There was an appeal to racism, obviously, but there was also an appeal to the idea that your enemies are these distant bureaucrats who know nothing of your lives.
The key moment, in terms of where essentially the Tories managed to outmaneuver Labour and guarantee their victory, was precisely by forcing Labour into an alliance with all the people like the Liberal Democrats and the other Remainers, who then used these incredibly complicated constitutional means to try to block Brexit from happening. 20 minutes, okay, that's easy. It was fun to watch at the time on TV. We were all transfixed: all these guys in wigs and strange people called Black Rod in odd costumes appealing to all sorts of arcane rules from the 16th century. It was great drama. It was like costume drama come to life on television. And it seemed like Boris Johnson was just being constantly humiliated. Everything he did didn't work. His plans collapsed. He lost every vote he tried. But in fact, what it ended up doing was it forced what was actually a radical party which represented sort of angry youth in the UK into an alliance with the professional managerials who live by rules and whose entire idea of democracy is a set of rules. This is very clear in America. And again, you could see this in the battle of Trump versus Hillary Clinton. Clinton was essentially accused of being corrupt because she would do things like get hundreds of thousands of dollars for speeches from investment firms like Goldman Sachs, who obviously aren't paying politicians that kind of money unless they expect to get some kind of influence out of it. And constantly Clinton's defenders would say, yes, but that was perfectly legal. Everything she did was legal. Why are people getting so upset? She didn't break the law. And I think that if you want to understand class dynamics in a country like England or America today, that phrase almost kind of gives the game away, because people of the professional managerial classes are probably the only people alive who think that if you make bribery legal, that makes it OK. It's all about form versus content. Democracy isn't the popular will. Democracy is a set of rules and regulations. And if you follow the rules and regulations, well, you know, that's fine. And these guys, that kind of mountain of administrators, are the people who think that way. And they've become the base of the party. They are the electoral base of people like Clinton, people like Macron, people like Tony Blair had been, people like Obama. And Corbyn was not at all like that. He's this person who had been a complete rebel against his own party for his entire life. But what they did was they maneuvered him into a position where there had been a Brexit vote, which represented substance, the popular will. And he was forced into a situation where he had to, like, ally with the people who were trying to block it through legalistic regulation, essentially, by appeal to endless arcane laws. Thus identifying him with the professional managerials. And a lot of my friends were actually out on doorsteps, and, you know, people actually seemed to think of Boris Johnson as a regular guy. I mean, this guy, his actual name is Alexander Boris de Pfeffel Johnson. He is an aristocrat going back like 500 years. But they seemed to think he was a regular guy. And Corbyn, who hadn't even been to college, was sort of a member of the elite, based almost entirely on that. And if you look at people like Trump and people like Johnson, how did they manage to pull off being populist in any sense? They're born to every conceivable type of privilege.
Basically they do it by acting like the exact opposite of the annoying bureaucratic administrator who is your kind of enemy at work. That's the game of images they're playing. You know, Johnson is clearly totally fake. He fakes disorganization. He's actually a very organized person, according to people who actually know him. But he's developed this persona of this guy who's all about content over form and is just sort of chaotic and disorganized. So they basically play the role of being anti-bureaucrats. And they maneuver the other side into being identified with administration, rules and regulations, and those guys who basically drive you crazy. The question for the left, then, is how to break with that. So I have, what is it, 15 minutes in order to propose how we can break with that? It strikes me that we need to kind of rip up the game and start over. We're in another world economically than we used to be. And perhaps the best way to do it is to think about, well, when people say their jobs are bullshit, you know, when 37% of people say, if my job didn't exist, probably the world would be better off, I'm not actually doing anything, what do they actually mean by that? In almost every case, what they say is, well, it doesn't really benefit anyone. There is a principle that ultimately work is meaningful if it helps people and improves other people's lives. Thus, you know, caring labor in a sense has become the paradigm for all forms of labor. And this is very, very interesting because I think that to a large degree, the left is really stuck in a notion of production rather than caring. And the reason we have been outmaneuvered in the past has been precisely because of that. I could talk about how this happened. I think really a lot of economics is really theological. It's a transposition of old religious ideas about creation where human beings are sort of forced to, if you look at the story of Prometheus, the story of the Bible, you know, the human condition, our fallen state is one where God is a creator, we tried to usurp his position, so God punishes us by saying, okay, you can create your own lives, but it's going to be miserable and painful. So work is both productive, it's creative, but at the same time, it's also supposed to be suffering. So we have an idea of work as productivity. So I was actually looking at these charts, they're talking about the different productivity of different types of work. Now I can see where productivity of construction comes in, but according to this, you can even measure the productivity of real estate, productivity of agriculture, okay, productivity of, I mean, everything is production. What's productivity of real estate? It doesn't make any sense. You're not producing anything, it's land, it just sits there. Our paradigm for value is production, but if you think about it, most work is not productive. Most work is actually about maintaining things, it's about care. Whenever I talk to a Marxist theorist, and they try to explain value, which is what they always like to do, they always take the example of a teacup. Usually they're sitting there with a glass and a bottle or a cup, so they'll say, well, look at this bottle, it takes a certain amount of socially necessary labor time to produce this, say it takes this much time, this much resources. There's always some production of stuff. But a teacup, a bottle, well, you produce a cup once, you wash it like 10,000 times. 
Most work isn't actually about producing new things, it's about maintaining things. We have a warped notion, which really, it's very gendered, right? Real work is like mail craftsmen banging away or some factory worker making a car or something like that. It's almost a paradigm for childbirth, right? Because labor is supposed to be, the word labor is very interesting, right? Because in the Bible, they curse Adam to work and they curse Eve to have pain in childbirth, but that's called labor. So there's this idea that there's this, factories are like these black boxes where you're kind of pushing stuff out like babies through a painful process that we don't really understand. And that's what work mainly consists of. But actually, that's not what work mainly consists of. Most work actually consists of taking care of other people. So I think that what we need to do is we need to start over. We need to, first of all, think about the working classes, not as producers, but as carers. The working classes are basically people who take care of other people and always have been. Actually, psychological studies show this really well. That the poorer you are, the better you are at reading other people's emotions and understanding what they're feeling. Just because it's actually the job of people to take care of others. Rich people just don't have to think about what other people are thinking or care. They don't care, literally. And so I think we need to, A, redefine the working classes as caring classes. But second of all, we need to move away from a paradigm of production and consumption as being what an economy is about. If we're going to save the planet, we really need to move away from productivism. So I would propose that we just rip up the discipline of economics as it exists and start over. This is my proposal in this regard. I think that we should take the ideas of production and consumption, throw them away, and substitute for them the idea of care and freedom. Think about it. Thank you. I mean, even if you're making a bridge, as feminists constantly point out, you're making a bridge because you care that people can get across the river. You make a car because you care that people can get around. So even like production, is it one subordinate type of care? What we do as human beings is we take care of each other. But care is actually, and this is, I think, something that we don't often recognize closely related to the notion of freedom because normally care is defined as answering to other people's needs. And certainly that is an important element in it. But it's not just that. If you're in a prison, they take care of the needs of the prisoners, usually, at least, to the point of giving them basic food, clothing, and medical care. But you can't really think of a prison as caring for prisoners. Care is more than that. So why isn't a prison a caregiving institution, whereas something else might be? Well, if you think about care, what is the paradigm for caring relations of a mother and a child? A mother takes care of a child, or a parent takes care of a child, so that that child can grow and be healthy and flourish. That's true. But in an immediate level, you take care of a child so the child can go and play. That's what children actually do when you're taking care of them. What is play? Play is like action done for its own sake. It's in a way the very paradigm of freedom because action done for its own sake is what freedom really consists of. Play and freedom are ultimately the same thing. 
So a production-consumption paradigm for what an economy is, is a guarantee for ultimately destroying the planet and each other. I mean, even when you talk about degrowth, if you're working within that paradigm, you are essentially doomed; we need to break away from that paradigm entirely. Care and freedom, on the other hand, are things you can increase as much as you like without damaging anything. So we need to think: what are the ways we need to care for each other that will make each other more free? And who are the people who are providing that care? And how can they be compensated themselves with greater freedom? And to do that, we need to actually scrap almost all of the discipline of economics as it currently exists. We're actually just starting to think about this. I mean, because economics as it currently exists is based on assumptions about human nature that we now know to be wrong. There have been actual empirical tests of the basic fundamental assumptions of the maximizing individual that economic theory is based on. It turns out they're not true. It tells you something about the role of economics that this has had almost no effect on economic teaching whatsoever. They don't really care that it's not true. But one of the things that we have discovered, which is quite interesting, is that human beings actually have a psychological need to be cared for, but they have an even greater psychological need to care for others or to care for something. If you don't have that, you basically fall apart. That's why old people get dogs. We don't just care for each other because we need to maintain each other's lives and freedoms, but our very psychological happiness is based on being able to care for something or someone. So what would happen to microeconomics if we started from that? We're actually doing a workshop tomorrow on the Museum of Care, which we're going to imagine in Rojava, which is in northeastern Syria, where there is a women's revolution going on, as you might have heard. But it's in places like that where they're trying to completely reimagine economics, the relation of freedom, aesthetics, and value, because at the moment the system of value that we have is set up in such a way that it produces this kind of trap that I've described and the gradual bullshitization of employment, where production work has become a value unto itself in such a way that we're literally destroying the planet. In order to actually reimagine a type of economics that wouldn't destroy the planet, we have to start all over again. So I'm going to end on that note. David, thank you so much. I think it's very interesting to also have some political views now that we mix in all sorts of technology, and it fits very well with the theme of Congress. Please, if anyone has any questions, line up by the microphones and we'll go for that. Unfortunately in the beginning I forgot to mention that you can ask questions over the internet through IRC, Mastodon, or Twitter, and remember to use the channel Borg, and we'll make sure that they get answered. So please, microphone number one. When you observe the productivity in healthcare going down, do you have an explanation, according to neoliberal thinking, why hospitals with more administrators and ones with fewer administrators don't have a competition outcome where the hospital with fewer administrators wins? 
Yeah, well, one of the fascinating things about the whole phenomenon of bullshitization and bullshit jobs is that it's exactly what's not supposed to happen under a competitive system, but it's happened across the board, equally in the private sector and the public sector. That's a long story, but one reason seems to be, and this is why actually I had managerial feudalism in the title, that the system we have is essentially not capitalism as it is ordinarily described. The idea that you have a series of small competing firms is basically a fantasy; it's true of restaurants or something like that, but it's not true of these large institutions, and it's not clear that it really could be true of those large institutions. They just don't operate on that basis. Essentially, increasingly, profits aren't coming from either manufacturing or from commerce, but rather from redistribution of resources and rent extraction. When you have a rent extraction system, it much more resembles feudalism than capitalism as normally described. If you're taking a large amount of money and redistributing it, well, you want to soak up as much of that as possible in the course of doing so. That seems to be the way the economy increasingly works. If you look at anything from Hollywood to the healthcare industry, what you've seen over the last 30 years is the creation of endless intermediary roles which grab a piece of the pie that's being distributed downwards. I mean, I could go into the whole mechanisms, but essentially the political and the economic have become so intertwined that you can no longer make a distinction between the two. This is where you go back to the whole thing about the 1%. You're using wealth and political power to accumulate more wealth, using your wealth to create more political power. You have an engine of extraction whereby the spoils are increasingly distributed within these very, very large bureaucratic organizations. That's essentially how our economy works. Great. Thank you, Simon. I mean, I could talk for an hour about the dynamics, but that's basically it. You could call it capitalism if you like, but it doesn't in any way resemble capitalism in the way that people like to imagine capitalism would work. Great. Awesome. Questions from the internet, please. How to best address this caregiver class? When the context of the proletariat is no longer given to awake their class consciousness? How to address the caregiver when the proletariat is no longer what? Please repeat the question. How to best address the caregiver class? When the context of the proletariat is no longer given to awake their class consciousness? Given to awake? I'm not sure what you asked about. Yeah. I mean, the question is how do you create a class consciousness for that class? Yeah. Yeah. Well, that is the question. First of all, you need to actually think about who your actual class enemy is. I mean, I don't mean to be too blunt about it, but the problem we have, and why it is people are suspicious of the left, people like Michael Albert and I were pointing this out years ago: one reason that actual proletarians were very suspicious of traditional socialists in many cases is because their immediate enemy isn't actually the capitalist, whom they rarely meet, but the annoying administrator upstairs. To a large extent, traditional socialism means giving that guy more power rather than less. I think we need to actually look at what's really going on in a hospital, in a school. 
I use hospitals and schools as examples, but they're actually very important ones because in people have shown that in most cities in America now, hospitals and schools are the two largest employers, universities and hospitals. Essentially work has been reorganized around working on the bodies and minds of other people rather than producing objects. The class relations in those institutions are not, you can't use traditional Marxist analysis. You need to actually reimagine what it would mean. Are we talking about the production of people? If so, what are the class dynamics involved in that? Is production the term at all? Probably not. That's why I say we need to reconstitute the language in which we're using to describe this because we're essentially using 19th century terminology to discuss 21st century problems. And both sides are doing that. The right wing is using like neoclassical economics, which is basically Victorian. It's trying to solve problems that no longer exist. But the left is using a 19th century Marxist critique of that, which also doesn't apply. We just need new terms. Thank you. I hope that answered the question from the internet. Microphone number two, please. Yeah, so, okay, I guess, so the question is basically to what extent can technology help? And the subtext here is there's actually a lot of, really lots of projects now whose function at some level is to automate management. And to the extent to which that can be molded into kind of removing this class that you're talking about or somehow making it too painful for them to exist. And some of these projects are companies, but some of them are very independent things that have very self-marked ideas, but with tens of millions of funding. So, yeah, well, that's the interesting thing that people talk about it all the time. And there's this, but this is where power comes in, right? I mean, why is it that automation means that if I'm working for UPS, the delivery guy gets like tailorized and downsized and super efficient and to the point where our life becomes a living hell, basically. But somehow the profits that come from that end up hiring dozens of flunkies who sit around in offices doing nothing all day. It's not, I've actually, one of the guys who I, when I started gathering testimonies, I gathered several hundred testimonies of people with bullshit jobs or people thought of themselves as having bullshit jobs. And one of the most telling was a guy who was an efficiency expert in a bank. And he estimated 80% of people who work in banks are unnecessary. Either they do nothing or they could easily be automated away. And what he said was that, I mean, it was his job to figure that out. But then he gradually realized that he had a bullshit job because every single time he proposed a plan to get rid of them, they'd be shot down. He'd never got a single one through. And the reason why is because if you're an executive in a large corporation, your prestige and power is directly proportional to how many people you have working under you. So there's no way are they going to get rid of flunkies. I mean, that's just going to mean the better they are at it, the less important they'll become in the operation. So somebody always blocked it. So I mean, this is a basic power question. You can come up with great technological ideas for eliminating people. People do all the time. But who actually gets eliminated and who doesn't has everything to do with power. Great. Thank you. And last question, please, from microphone number five. 
Can we maybe have one question from a non-male person? Yeah, that'd be nice. Non-male person. Sorry. No, no, the other person's just left. Do you want to? Sorry. We're not choosing questions based on that, we're kind of choosing all around the hall. Please, microphone number five. Yeah, thank you for the opportunity to speak. I really like your description of a paradigm, of how people are stuck on production and consumption, and that you would like to change the paradigm to a paradigm towards more care and freedom and so on, etc. And for me, it kind of sounds a little vague, and that's why I myself think of basic income as a human right, as the actual means to break with the current hegemonic macroeconomic paradigm, so to speak. And I was interested in your point of view on that, basic income. Yeah, well, I actually totally support that. I think that one of the major objections that people have to universal basic income is essentially that people don't trust people to come up with useful things to do with themselves. Either they think they'll be lazy and won't do anything, or they think if they do do something, it'll be stupid. So we're going to have millions of people who are trying to create perpetual motion devices or becoming annoying street mimes or bad musicians or bad poets and so forth and so on. And I think it actually masks an incredibly condescending elitism that a lot of people have, which is really the mindset of the professional managerial classes who think that they should be controlling people. Because, okay, if you think about the fact that huge percentages, perhaps a third of people, already think that they're doing nothing all day and they're really miserable about it, I think that demonstrates quite clearly why that isn't true. First of all, the idea that people, if given a basic income, won't work: actually there are lots of people who are paid basically to sit there all day and do nothing, and they're really unhappy. They'd much rather be working. Second of all, if 30 to 40 percent of people already think that their jobs are completely pointless and useless, I mean, how bad could it be? Even if everybody goes off and becomes bad poets, well, at least they'll be a lot happier than they are now. And second of all, one or two of them might really be good poets. If just 0.001 percent of all the people on basic income who decide to become poets or musicians or invent crazy devices actually do become Miles Davis or Shakespeare or actually do invent a perpetual motion device, well, you've got your money back right there. Great. Thank you so much. Unfortunately, that was all the questions that we had time for. If you have any more questions, please, I'm sure that David will just take a few minutes to answer them if you come up here. Thank you so much, David Graeber, for your talk. Please give him a great round of applause.
|
One apparent paradox of the digitisation of work is that while productivity in manufacturing is skyrocketing, productivity in caring professions (health, education) is actually declining - sparking a global wave of labour struggle. Existing economic paradigms blind us to understanding how economies have come to be organised. We need an entirely new discipline, based on a different set of values.
|
10.5446/53075 (DOI)
|
Our next speaker, Regine Debati, will help you and explain you the internet of rubbish and things and bodies and basically everything around it e-waste. Thank you very much and welcome. Can I get less light in my face? Hello, good evening everyone. First of all, I want to say thank you to Nora, to Gregor and to everybody at the Chaos Communication Congress for welcoming me again this year. I've been tasked with the mission in 2018 to present to you some of the most interesting and exciting works and technology of this past year. Just like last year, I kind of went on my own way and went on a tangent and started adopting a tunnel vision and for some reason I realized I was obsessed with e-waste. So you're going to hear a lot about e-waste and nuclear waste in the coming hour, but still I promise it's still going to be reasonably interesting, hopefully, and most of the projects are anyway from 2019. So why did I get interested and why did I think it would be a good idea to talk to you about e-waste? First of all, there was the theme of this year's Congress, resource exhaustion. I just decided to put a more, let's say, ecological twist on it. And then the second reason why I wanted to talk about e-waste is that a couple of months ago, I went to see an exhibition of a Swiss photographer who has spent four years, something like that, travelling around the world and trying to understand why transhumanists wanted to change their engines, so-called augment and improve their body. And so he documented everything he found and how humans nowadays are changing their bodies, either to go from disabled body to able body or from able body to super able body. So one way you can augment your body is, you know, it's with RFID chip that you can implant and that allows you to get access to offices, open your car door, or even pay for public transport. And I've been told recently that RFID chips are the new tattoo. And then you can also get magnet implanted underneath your finger that allows you to sense electromagnetic fields. And actually the first time I heard about this was at least ten years ago. It was one of the first Chaos Communication Congress I attended in Goodold, Berlin. And there was this journalist called Queen Norton and she came to explain that she had just had a magnet implanted, explained a new experience and how she felt magnetic fields. Anyway, I could multiply the example, but of course the people who are really at the cutting edge of body improvement and augmentation are the transhumanists. So this is one of them, Igor Trapeznikov is part of the Russian transhumanist community. He had a number of implants, you know, the usual RFID chips, but also a device that turns sites into sound, which is usually useful for people who have vision problems or who are blind. And then of course there are additions to the body or corrections of the body that are there for therapeutic reasons, so that's how some find themselves with bits of titanium in the knee or in the shoulder. And they use this kind of screw, I think that's one of the images in the series that I found the most impressive, the idea that when I get old I might get this kind of screws inside my body. A pacemaker of course, and what makes pacemaker interesting is that it was one of the first electronic devices that found its way inside our body, so it became kind of emblematic of the coming mechanization of the human body. 
And then of course there are the electronics that you insert in the body and that communicate with phones and computers, so that's why some are talking about the Internet of Bodies, you know, after the Internet of Things, the Internet of Bodies. So this is a bioartificial pancreas for people who suffer from type 1 diabetes. Okay, so there are so many smart devices that can be added on your body or inside your body. I've heard about smart contact lenses and also smart prosthetics. And after seeing this exhibition I started looking at people around me with a different eye and wondering who else had bits of metal and electricity and electronics inside their body. So that's where my obsession with e-waste came from, because my immediate question after this is: what happens after you die? What happens to all that? So I don't know if you're interested, but I had to do some research. So if you are buried the traditional way, you're buried with all your gadgets and your gizmos and anything orthopedic. If you're cremated, you're not cremated with your pacemaker, because it contains batteries that could explode. And then I learned about a new, very interesting service. I mean, to me it sounded quite interesting. It's that when you are cremated, of course, the titanium or any metal, they are not burnt. So there's this company who retrieves all the metal that is found among the ashes. There's a couple of companies around the world who do that. And they just recover all the metals, they divide them according to the types of metal and then they melt them down into ingots, which they sell to the medical industry but also to the car industry and the aeronautical industry. So that means that parts of these bodies are going to become part of a car or a plane one day. And I also started imagining that probably in the future, archaeologists will find skeletons with e-waste. We will be buried simply with our e-waste. They will find bits of metal in the shape of bones and then rusty electronics. So that's really what set me on the path of starting to see e-waste absolutely everywhere. Yes, I never miss an opportunity to show my dog, because I realized that he has an RFID chip and apparently all pets are buried with them. Anyway, splendid animal. So since I'm talking about e-waste, it's difficult not to mention one of the icons of e-waste: Agbogbloshie. I'm sure you've seen the pictures. I mean, I'm going to show images but I'm not going to tell you the whole story because I'm sure you've seen all these images. It's located in Accra, the capital of Ghana, and that's where full containers of electronic trash end up. And the press really likes to talk about it and say, oh, this is where your data and your devices are going to die. It's a huge place and you've seen the images of these young people who spend the whole day dissecting your devices and trying to separate different types of precious metals such as aluminum, silver, copper, etc. Of course, they work in terrible conditions. It's very toxic to work there. They all suffer from terrible headaches, difficulty sleeping, respiratory problems, untreated wounds, etc. But the way the press usually depicts what's happening at Agbogbloshie is really not the full story. There are bits and pieces missing in the narratives. 
First of all, I think we are not conscious enough of the role that these really important role these people are doing for us because they retrieve metals and we have the feeling that metals are everywhere, but on the surface and underneath the earth, actually some of them are going to be more and more difficult to retrieve. This year UNESCO declared the year of the periodic table. You can see on this periodic table that the dots in red and in orange are the types of elements that are going to be more and more difficult to mine and to retrieve. I like to show this graph. It's a bit old. It's not very precise, but it shows the trend. If ever you have a child or a brother who was born in 2010, by the time your child or your brother or your sister is 20, there will be no antimony, no lead left to mine, very little indium, very little zinc, very little silver, almost no gold. You see, these elements will still be there, but it's just going to cost more and more energy and more and more money to retrieve them. That's why you have this project of mining very deep into the ocean or even mining asteroids for the same kind of metals that we need. I think also that metals, we can have a blind spot for metals. We forget. We take them for granted. When we think about the challenges of the environment, we tend to focus on energy. Energy is important. When you think about renewable energy, we have the feeling that it's zero emission, but it still has a big impact on the environment. I like to talk about this solar thermal plant. It's a famous one. It's a very, very big one. It's in Southern California. As you can see, it's vast. When it was being built, conservation biologists were very concerned because it was actually built on really environmentally intact desert habitat for a number of animals which were endangered, such as turtles that had to be moved by hands and put on a truck and relocated elsewhere. Some of them died out of stress. You might have heard that also birds and bats by the thousands every year they die because of the heat and the radiation. You see the surface of the earth that this is occupying. It's really huge. I think it's not a coincidence if the opening sequence of the sequel of Blade Runner 2049 starts with this seemingly endless landscape of solar panels that are necessary to power modern life. As I said, I think we don't perceive the importance of materials and how many components extracted from the earth are inside any object that surrounds us. I'm going to show a couple of examples of design work and artworks that try to make visible the elements that are inside our electronics, our electric objects. The first one is by these designers called Studio Drift. They have a series where they compose all kinds of objects. I'll show a few of them. They can be bikes, they can be mobile phones, in this case it's an iPhone from a model from 2010. It can be hovers. I'll show a few examples. They dissect them and they analyze the elements that compose them and then they restitute these elements in the form of this geometrical shape. I'm going to show a few of them. That was the iPhone. That's Gudol Nokia from 1999. When you see them in exhibition, they are really tiny and I think also some elements are simply not represented. If you think about rare earth, some of them are presented in smartphone in really tiny quantities, like less than half a gram. I couldn't find another image but this is very big. This is the representation of Volkswagen Beetle from 1980. That took a lot of space. 
You see elements that appear that I was really not expecting, such as horse hair and also cork. This is an electric cable, so a bit of copper and a lot of plastic for one meter. That's my favorite, the Kalashnikov. You see the bullet in the front. When I went to see the first time I saw this exhibition, if I didn't look at the label, I had no idea what was what. I could not recognize anything. It shows the level of ignorance of at least people like me. That I could recognize, the pencil that I could manage. That's the last one. That's the light bulb. That's one way to visualize the elements that surround us. It's very design-y, it's quite elegant and charming. I also like artists who adopt a more brutal strategy. Danny Puger also wanted to show the inside, that was one of his objectives at least, to show the inside of electronic devices. The first part of his project started back in 2017. He infiltrated Philips and got hired. You see this guy at these consumer good fairs that try to sell you gadgets and say, well, this is going to change your life and he's what this can do. He was hired to sell a highly sophisticated shaver for men. The reason why he wanted to be there is that he wanted to understand exactly what were the dynamics and the logic behind the constant rapid innovation behind our electronics. That was the first part. He got his hands on confidential material that really explained how people at Philips, they have these teams, I say Philips but I'm sure it's the same elsewhere, of course, they really analyzed the different types of consumers that we are and find the best way to trick us and lure us and seduce us into buying new products. And then the second part of his project was this year where he bought this machine. It's a machine that is called an industrial stress testing machine. So there are several of these machines. You can buy them, you bought this one on Alibaba in China and they're usually used by makers of what shivers, mobile phones, whatever, and they simulate all kinds of accidents that can happen to your device, like walking on it or dropping it. I'm going to show a video of this one. So that's our Porsche ever inside. I like the machine because it has a bit of a vintage feel inside the object that it tells are quite sophisticated and very much 21st century, but when you see the control panel, it seems to be stuck in the mid-20th century. It has this kind of old design. Apparently it was a nightmare to get it shipped to Europe. It costs more to get it shipped than the machine itself. And if you're like me and you could watch this video for hours of the arm rotating and breaking things, and if you live in Leipzig, the machine is going to come to the Museum of Fine Art in March as part of an exhibition called Zero Waste. Anyway, so the reason why he did this obscene display of destruction of luxury goods was that he had two reasons for doing that. The first reason is that usually when we think about consumer culture and how we consume too much and throw away too much and waste, the onus usually is on the consumer. The consumer is wrong and we are wrong, we buy too much. But then the wrong starts actually much further upstream. It starts in every way in the line of production, especially in the design and research and development team who engineer constantly new ways to seduce us and to convince us that we have to update. 
And then another reason why he did this project is that he wanted to make a comment on maker spaces, which, you know, are very interesting but are still feeding on this vocabulary of entrepreneurship, innovation, progress. They never really truly and deeply challenge consumerism in a systemic way. So he saw this laboratory of electronic aging as a kind of maker space for unmaking things. And also you get to see what's inside this piece of electronics. But I would like to go back to Agbogbloshie again, because if there are people who know very well what's inside our electronics and what they are made of, it's the workers in Agbogbloshie. And sometimes when we think about this kind of place, I take the example of Agbogbloshie, but as I'm sure you know, there are e-waste sites elsewhere in Africa, also in Asia and Bangladesh, etc. We have the feeling that this is a big junkyard and it's a kind of huge mass and it's dark and it's dirty, but this is a map done by architects, designers and activists who spent a lot of time in Agbogbloshie. And you probably cannot read it, well, you probably all have better eyes than me, but it's still written very small. They map the activities at Agbogbloshie and if you read the map you see that there is a maker space, there are spaces dedicated to disassembling, there are places to eat, to pray, to disassemble, to repair, to have fun. And also we have the feeling that, well, they are just people discarding and dissecting our electronics. But actually the way they work is very sophisticated and very fine. So first of all they get these huge containers that contain pretty much anything we don't want that's electric or electronic, including entire cars, and they divide them into streams. So this is a heap of photocopy machines. They can detect when something cannot be repaired, but they can also detect which part of the machine or the computer is still good, which component can be saved and reused to repair something else. So that also goes on the pile. I think they are much better than us at differentiating different types of plastics. They have places dedicated to weighing the kinds of metal that they retrieve. And I would say they are quite like you, certainly not like me, so when I have a computer and it doesn't work, for me it's a black box, I'm never going to open it, but these people, their job, their business is to open our old computers. And better than that, most of them, not most of them, but many of them can repair them and give them a second life, a really new lease of life. Which, you know, is important because we think these obsolete objects are dead, but actually they give them a second life. And then they repair them, they sell them to communities around Agbogbloshie or in Ghana or neighboring countries, to people who would otherwise not be able to afford a new printer, a new TV or a computer. And then, as you can see, they have places to have fun, and there too, they're quite fans of the Premier League. Yeah, okay. So the map I was showing just before with all the activities in Agbogbloshie has been made by the architects and designers DK Osseo-Asare and Yasmin Abbas. They spent a lot of time trying to understand how the community was working and they designed this kiosk with anything they could find on the scrapyard. And it's a meeting place where they invite the community of Agbogbloshie to meet people such as graduates or young students in science, technology, engineering, math. 
And so the students and graduates who have very theoretical vision on technology, and then they get to meet these people who have an extremely sophisticated, very deep know-how. And together they exchange ideas and they've developed ways, for example, traditionally to recover the copper from the cable. They would burn the cable, the plastic, which is very toxic, and also damages the copper. So they find a way to recover the copper without burning the plastic, which means that it's less toxic, it's safer, and they can sell the copper for more money. So it's kind of win-win. They also developed together new processes, new ways of working, new tools. They also have fun. So with bits and pieces they found the scrapyard a couple of years ago, they made these drones that flew over the junkyard. So it's a project I like a lot because it pays homage to a kind of work that doesn't get a lot of recognition, but that's really important. You know, when you think of, as I said, the metals we are using to make our electronics and also even the solar panels and the wind turbine, et cetera, all our renewable energy, they also rely on the same metals that are disappearing, at least are becoming less available. So that's one of the reasons I really like this project. And also it reminded me of this performance from already 40 years ago by Mirla Laderman, who came in. She kind of decided she would be an artist in residency at the New York Department of Sanitation. So that's the people who collect the trash. So she spent a year with them. Every day she was there at 6 in the morning and she followed them. And she had this gesture of she wanted to meet every single one of them. There were 8,500 employees. So she wanted to shake them with them and say thank you for keeping New York City alive, which I think is a lovely gesture because suddenly this anonymous mass of bin men, you realize that they are individuals and yeah, she documented and talked with them. And yeah, it's a way to bring attention to a profession that we don't value a lot. And also she wanted by spending so much time among them to make people realize that the work of these people should be valued at least as much as her own as an artist because of the role they play for the collective society. Yeah, okay, I see still like Bob Lusci, but if I had time I would talk about other projects in Africa. This one is a startup, African Bond 3D that makes 3D printers that at least a third of the components come. I found in second hand, on second hand, in discarded electronics, let's say. And I think what they show is a kind of mindset and an attitude that should inspire us. I'm just going to show you a few examples of this attitude I think is really important is trying to make the most of what you have instead of jumping on the next gadget and gizmo, but really trying to be as creative as innovative as possible. So in India they have this movement called Jougade where in the West we translated as Frogal innovation. So this is a guy who needed, he had only a motorcycle, he needed a plow of a machine for his land, so he adapted his motorcycle and he turned it into a kind of... I can take a travelling mic, yeah? That battery. So I should keep... No, it's not, it's my hair. Yeah, yeah. In the meantime, the question, you know it... Oh, no, I think the meantime is over. No, still. You know the question and you know the case, 6 to 1, someone, now, 6. So 6 hours of sleep a day, 2 meals, no, oh, boring. 
Anyways, to remember it, 6 hours of sleep, 2 meals a day and one shower, please remember and to make some minor announcements, don't forget your trash. Do we have audio back? I think so. Perfect, then, please. I can't believe you've told people to watch. Yeah, Jougade is answering precise needs instead of creating demand and sometimes, you know, it responds to a specific problem. Sometimes it can be scaled up, so this Indian inventor made a fridge that doesn't use electricity, it uses evaporation to keep the food cool, simply because there are people who live in rural areas who don't necessarily can afford a fridge, who don't have continuous access to electricity. And of course the best example and the most famous example of Jougade is the Indian space mission. Since 2014, India has a probe orbiting Mars and the Indian engineers and pretty much everybody in India jokes about the fact that it costs less money for them to do this space mission, costs less money than your average Hollywood blockbuster about space. And, you know, it's not about making things cheap, it's about reinventing the process of designing, of designing, of manufacturing and of distributing. Okay, I don't have that much time, so I'm going to skip a few things. Yes, so I've been to India, Africa, let's get back to Asia because I really like this project. It's a project by an artist who also saw images from Agbo Blushie and similar, Jean Kjart. And so he was interested in e-waste as well. He found himself in Taiwan and he found this really cutting-edge company that deals with e-waste no one wants. So usually when you find a company that deals with recycling metals, usually they focus on the nice metals, the ones that are valuable, like copper, for example. And then you have pieces of electronic waste that no one wants, such as, I don't know, the one that contains lead or CRT monitors or glass fiber from printed electronic boards. This kind of thing that really no one knows what to do with it. Well, they found a way. So the company is called Super Dragon Technology. And the trick they found is that they just take all these bits and pieces of waste that no one wants and they crush them into a powder, they mix them with epoxy, and they make these planted objects out of them. And because I think they're quite crafty, they can imitate very well the appearance of bronze, of marble, of porcelain, you know, very noble material. And also they play with the proportion of epoxy and this toxic powder to make it feel like it's very heavy if it's supposed to imitate bronze, et cetera. And so they are decorative objects with the particularity that they contain a lot of lead, so you're not supposed to manipulate them too much. It might be dangerous. They also have in common that they have a very peculiar aesthetic, as you can see. This is trophy. And then they have this wonderful little thing where they actually make fake rocks, which looks absolutely eccentric and crazy, but the story is that these are not just any type of rocks. They are supposed to be scholar stones. So during the 10th, 11th and 12th century in China, there was this passion for the so-called scholar stones where an intellectual from China would go in the countryside and admire the landscape, and suddenly they would find these rocks that seemed to be sculpted by the elements that had really strange and interesting shapes. And they saw that it was the manifestation of the creativity of nature, and so they would bring them back and use them to contemplate and meditate. 
And nowadays you can still buy some of these scholar stones in auction. They are quite valuable. But what I really, really like is that this super dragon technology is the way they think. So they can figure out that there's a difference between applied art that makes useful things, like plates and furniture, and then fine art that makes basically useless things. And since the way no one wants is useless, well, that kind of fits with the requirement to make fine art. So that's why all these objects have been showing to you. They call them green art, and they have a green art gallery. And apparently it's quite successful. Taylor Coburn bought some of these stones and he actually accompanies them with a text or an audio narrative where you have the story of one single grain of sand that tells its many stories, or it started as a sand, and then became a piece of electronics and then a piece of trash, and then ended its life as art. Found it very charming. I don't have time for this. Yes, very quickly. I mean, I'm famous for giving very depressing talks, so now I'm trying to insert a little bit of happiness in my talks. Things are changing a bit. That's why you have now repair cafes that are quite successful. There are also shops you can find where the people working there are absolutely not affiliated with Apple, Samsung, and Nokia, but they found a way to give a second life to your objects and repair them. So usually they are much nicer than the guys at the Genius Bar, if you've ever had to deal with them. There are more and more websites that sell secondhand goods that even come with a one-year warranty. You've heard about the Fairphone, of course. And then brings me quite abruptly to the other type of waste I wanted to talk about, which is nuclear waste. I thought I needed to talk about nuclear waste because it's waste, because it's energy, and so it's very technological and extremely problematic. So what do you do when you have your country and you have a lot of nuclear waste? I mean, the most toxic can be toxic for 100,000 years up to a million years. It's going to be very, very dangerous. You have two choices. Either you keep it at hand, buried but not too deep, so that you hope that in the near future someone will have a brilliant idea and be able to know what to do with it and handle it safely. Or else you do like they are doing in Finland and in other countries, and France is thinking about that. You bury them very, very deep under the ground in a deep geological repository. But then you have a problem because it's going to be dangerous for thousands of years. So how do you inform people like future generation that it's dangerous and they shouldn't go nearby? Either you use text but then language is changed, language is disappeared. I mean, we might think that everybody speaks English, but everybody used to speak a bit of Latin, and now if you're confronted with the text of Latin, you might not find it easy to understand. Even if your mother language is English, dealing with an English text from the Middle Ages is going to be difficult. I speak French, dealing with the French text from the Middle Ages is difficult. So how about having some nice icons such as the skull and crossbones? You would think that everybody associated with toxicity. Absolutely not. It's quite new, it's only since the 1800s that the skull and crossbones has been associated with toxicity in the Middle Ages, Christians, when they saw skull and bones, they thought renewal and resurrection. 
I guess if you show the children the skull and crossbones, they are not going to say, oh, toxicity, they are going to say, oh, cool pirates, treasure island. You see, that's tricky, aren't the symbols? So I'm going to show a couple of ideas that some mutations and artists and philosophers and thinkers have thought about to deal with how to signal to very, very distant generation that there is something special there and that you shouldn't go near it. Oh, yeah. Okay, I forgot this one. Even if you found a suitable message, let's say you found a suitable message, you also have to make sure that people will take it seriously. All, like, the thing, the coastline of Japan, there are these so-called tsunami stones that basically say, please don't build beyond this point, because it's very dangerous. There's been tsunami, they will come back when they, no one paid attention. Anyway, let's go back to my artist and semi-autisticians idea. So in the early 80s, a semi-autician had the idea of, you know, to preserve the memory of the presence of very toxic, very dangerous nuclear waste. You have to make it part of society, to include it in the fabric of society. So he suggested the creation of an atomic priesthood, so that would be a religious order that would have as its mission to keep the memory of the danger alive using rituals and myths and folklore. Some artists thought, well, we should plant a forest, like a forest of genetically modified trees, on top of one of those repositories, so that in the autumn, the leaves would bloom and become blue and then they would fall and then they would be a nice blue carpet. So people, like in thousands of years, will interpret it as, oh, this is a secret place, we should leave it in peace and respect it. I really love this idea, I find it a bit naive, because, you know, human beings don't have a great track record of respecting beautiful pieces of landscape, so, yeah, I'm not very optimistic. Yeah, and then there's this kind of cynical idea of modifying a cat, so that the cat would start glowing or changing the color of its fur in the presence of high radio, oh, I'm 20 minutes again, he keeps adding time, but I'm almost done anyway. So that would be a cat that changes colors or glow in the presence of radioactivity. This is not one of those cats, it's an artistic interpretation by Marcel Rickley. So, yeah, I think it's not very nice, but to be honest with you, I like all those ideas of trying to transmit and connect with very, very distant time and with people who live in moments in times we cannot even comprehend and imagine, but I'm not totally convinced. I mean, they are charming, but I don't think they would stand the test of deep time, mostly because it's just, I mean, the periods we're talking about, like even 100,000 years, it goes beyond human experience, how do you relate to it? It's just so far away that it becomes abstract and almost unreal. So I've been thinking, you know, about this project, I've been interested in nuclear waste for quite a few years now, so I've been thinking other projects, artistic orders, that can relate to, like, physically or emotionally with very, very distant past, but in the distant times, but in the past, because we don't know the future, we don't know what is going to be there, but the past, we kind of have an idea. And I couldn't find any project that really, that I was convinced could give a good experience, a good idea of relating to a distant time in the past. 
Until last month, there was this big art fair in Turin, and I found this butt plug. By a Swedish artist called Thomas Amen, and the particularity of this butt plug is that it's made of coprolite, so it's basically dinosaur poo from around 140 million years ago. And that's where I discovered that there's actually a big market for this incredible, for fossilized dinosaur excrements. You can buy it, I checked, you can buy it on Amazon, I'm not sure it's going to be authentic, but it's available on Amazon. So I thought, yeah, that's it. I mean, if you were to buy it and use it, you would actually be in close connection with a living creature in intimate contact, I could say, with a creature that lived at a very, very distant time. Okay, so we're almost at the end. I have three lines of conclusion that I'm going to read, but then I want to show you a film. And my conclusion would be that all of this is very fanciful, especially the end, but it might still be useful to keep in mind that something like renewable energy that keeps us so passionate is very often viewed as a long-term solution to the climate emergency. But unfortunately, renewable energy relies on physical resources that are neither infinite nor renewable. And right now, it looks like we are busy implementing a future that was dreamt and conceived a few decades ago. At a time when society wasn't worried about exploding climate crises, the erasure of wide life and growing materials scarcity, accompanied by unmanageable heaps of waste. Anyway, I've just been talking about the future, and I would like to end with a short movie set in the future by Alexandre Loupachko, who's a young Russian filmmaker. And I really wanted to show it to you for several reasons. One of them is that when I knew I would come back, my first thought was, yes, I'm going to be able to show 2050 because I really love it. And the second reason is that I really didn't want to finish my presentation with a butt plug. menjadi entsche focal point The last news. Yesterday in the main office of the company Robot X was a solemn presentation of robots of the fourth generation. In the coming year of 2050 the trade center Robot Man worked in the other. And then Yoshek said the robot. Hello. Hello. Hello. Welcome to the company of ARIFLAME. We are in the company of the new production line. We are glad that you are interested. All my loving, I'm welcome to you All my loving, darling, I love you. Close your eyes and I'll be sure No, no It's a key! Key! Guys, here, keys, keys! Take the gun, I'm not a robot, I'm a human! Okay Take off the safety glasses Which den is this boy trying to kill? All my love is you, all my love is you, all my love I will send to you I forgot to say it had nothing to do with waste but it was just irresistible I had to show it to you Okay, that's it for me unless you have questions. Thank you very much, Regine Debati. As usual, if you have questions, we have microphones over there and microphones over there, please stand in line and of course we have questions from the internet. In the meantime, if you want to read more about Regine, she writes also on the website, it's called wemakemoneynotart.com, which is at least as I'm concerned a brilliant title. And you write about what? I've been writing, I think it's for almost 16 years about the way designers, artists and hackers are using science and technology in a creative but more importantly socially engaged way. So it's quite political, I like to think. Which suits perfectly to this event. 
So, questions, I see, not so far, not so far. In this case we can, number four is this, in this case, I heard an um, no question. In this case we can chat a little bit more about your website. So, if you want, if you want. Oh yeah, it's not going to interest people. I'm pretty sure it does. Because I had a quick look and it was pretty impressive. You write about AI as far as I know. You know, that's the thing. When I knew I was coming to talk about the best of 2019, I was like, this year I've had to deal so much with AI even outside of my blog. I said no more AI except maybe for the film at the end. That kind of, you know, it's a bit AI but otherwise. No AI. Yeah, on my blog there's been a lot of AI. I'm a bit sick of it. Actually one of the last I wrote about was by Julienne Oliver who spoke just before me now. Yeah. Cool. In this case, no more questions. Thank you very much. Please a warm applause. Thank you.
|
Once you start looking at electronic trash you see it everywhere: in laptops of course but also increasingly in cars, fridges, even inside the bodies of humans and other animals. The talk will look at how artists have been exploring the e-junk invasion. Régine Debatty is a curator, critic and founder of http://we-make-money-not-art.com/, a blog which has received numerous distinctions over the years, including two Webby awards and an honorary mention at the STARTS Prize, a competition launched by the European Commission to acknowledge "innovative projects at the interface of science, technology and art". Régine writes and lectures internationally about the way artists, hackers, and designers use science and technology as a medium for critical discussion. She also created A.I.L. (Artists in Laboratories), a weekly radio program about the connections between art and science for Resonance104.4fm in London (2012–14), is the co-author of the “sprint book” New Art/Science Affinities, published by Carnegie Mellon University (2011) and is currently co-writing a book about culture and artificial intelligence.
|
10.5446/53086 (DOI)
|
Our next speaker works as a security consultant at Payatu Software Labs and he loves finding security flaws in the Microsoft Edge browser, and incidentally this is the topic for this next talk. So please give a big round of applause to Nikhil Mittal. So welcome to the talk, Breaking Microsoft Edge Extension Security Policies. My name is Nikhil, I work at Payatu Labs and I'm into web and browser vulnerability research. So to start with this presentation, I would like to know how many of you use browser extensions in general? Like, oh nice, so many hands. Okay, so a browser extension is something that extends the functionality of a web browser. We have typical examples like Adblock Plus, which I think most people use to block the ads on certain sites like YouTube, and Grammarly, and some sort of password managers as well. So these extensions are capable of managing most of your data because they can handle the cookies, bookmarks, storage, passwords, history and what not. So that being said, we all have to agree on a point that these extensions are powerful because they can deal with your cookies, bookmarks and other sensitive information in the browser. So here's how a simple Adblock Plus extension looks on Microsoft Edge, where it is pretty much doing its job. Now have you ever tried to figure out what this extension is capable of doing in your browser? So if you look at the settings, here we have a couple of permissions which I have listed down on the next slide. So a simple Adblock Plus extension can read and change content on websites that you visit. It can read and change your favorites. It can see the websites you visit. It can read and change anything you send or receive. And it can also store personal browsing data on your browser. And it can also display notifications as well. So there are so many things a simple Adblock Plus extension is able to do in your browser. So you might ask how browsers recognize these permissions. Like, the extension is able to do so many things in my browser, but how does the browser recognize where these permissions are coming from? So here's the permission model in browser extensions. Under the source of every extension, we have a file called manifest.json. And inside a manifest.json file, we have a permissions array. So here's a quick example of a permissions array where we have some permissions. The first one is https://www.google.com, which we'll see right after this slide. The next permissions we have are bookmarks and cookies, history, storage, and tabs. So let's suppose an extension has permissions with bookmarks and cookies. That means that extension can handle your bookmarks. It can manipulate them. It can edit them. It can remove them and whatnot. The same goes for cookies and history as well, and there are other important permissions available for the browsers. So apart from these permissions, the most interesting permission that I was looking for is the host access permission. A host access permission is something that defines on which particular domains your browser extension should be able to run. So in this case, let's suppose we have assigned the permission https://www.google.com. That means this extension should be able to run on google.com only, not even on the subdomains, that is developer.google.com or mail.google.com. This you can also verify by the tiny box that says this is allowed to read and change content on some sites, www.google.com.
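For orientation, a minimal manifest.json along the lines the speaker describes might look as follows. This is only a sketch, not the file from his slides: the name and version are made up, and the host entry is written as a match pattern, which is why it carries a trailing /* to cover every path on www.google.com while still excluding subdomains such as mail.google.com.

```json
{
  "name": "example-extension",
  "version": "1.0",
  "manifest_version": 2,
  "browser_action": { "default_popup": "popup.html" },
  "permissions": [
    "https://www.google.com/*",
    "bookmarks",
    "cookies",
    "history",
    "storage",
    "tabs"
  ]
}
```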
Now the second permission we could have here is https://*.google.com. So basically this also covers the subdomains as well. And the third possible permission we can have is *://*.google.com. So basically this says it will now work not only on google.com over one scheme, but on all the protocols as well, which is http, https, maybe FTP, that belong to the particular domain. So apart from these three permissions, we have another permission in the row, which is all_urls. This permission is so special because once a browser extension is assigned the all_urls permission, it can execute JavaScript code on every domain that you visit. So let's suppose you are on google.com or maybe you are on bing.com or anything else, it will work on pretty much every domain. But there are a few restrictions with the all_urls permission. That is, it cannot run on privileged pages. A privileged page in a browser is something that contains some sort of sensitive settings and your browser data. You might have heard of chrome://settings, which contains the password manager for Chrome, and you can also find the credit card and debit card information on chrome://settings as well. So you can imagine a situation: once an extension is able to run JavaScript code on the Chrome settings page, then it can probably read or steal all of your passwords and credit and debit card information as well. On Edge, we have a similar page, which is about:flags. So here you can see, once an extension with the all_urls permission is assigned, it can read and change content on websites you visit, as per Edge. So here's a quick snap of about:flags in Edge. If you look at the first part, you will figure out there are a few important settings, like you can enable Adobe Flash Player, you can also enable developer features, and also you can enable and disable allowing unrestricted memory consumption for the web pages as well. And it also has some standard preview features, like you can enable and disable some experimental JavaScript features as well. So now you can imagine the sensitivity this page contains. So let's quickly build an extension that will break most of the things in Edge. So as I said, every extension has a manifest.json file, which has all the permissions and other configuration. The second file that we will be needing is popup.html. popup.html is nothing but an interface for the browser extension. Basically you might have noticed that as soon as you click on any browser extension, a popup appears in your window that contains some sort of functions. That is nothing but just a popup.html file. And then again, we have a popup.js, which has all the JavaScript code that executes according to the actions chosen via the popup.html. So this is how our extension looks on Edge. We have a tiny Microsoft logo, and as soon as you click on it, a popup will appear, which says I'm the evil extension. And I have two options. The first one is open, the second one is execute. As soon as you click on the open button, what it does is it will load google.com in the browser. And as soon as you click on the execute button, it will just alert 1 for you. So basically the interface is written in popup.html, and again, as soon as you click on execute, the work is done by popup.js. So let's quickly look at the source code for the manifest.json file.
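As a hedged sketch of how those two buttons could be wired up (the speaker's actual source is only shown on slides, and the element ids open and execute are assumptions), popup.js might look like this:

```javascript
// popup.js -- minimal sketch of the "evil extension" popup logic.
// Assumes popup.html contains <button id="open"> and <button id="execute">
// and loads this file with <script src="popup.js"></script>.

document.getElementById("open").addEventListener("click", () => {
  // Open the only host the manifest permits in a new tab.
  browser.tabs.create({ url: "https://www.google.com/" });
});

document.getElementById("execute").addEventListener("click", () => {
  // Run a trivial script in the currently active tab.
  browser.tabs.executeScript({ code: "alert(1);" });
});
```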
The thing to notice here is the permissions array on line number 10, which is set to https://www.google.com. That means it's clear that this extension should be able to run on google.com only, I mean not even on the subdomains. So here's the source code for popup.html, which is just a simple HTML file that has two buttons. The first one is open, the second one is execute, and it includes popup.js at the end. So here we have the popup.js. In a very brief manner, what it does is: as soon as you click on the open button, it loads google.com, and as soon as you click on the execute button, it alerts document.domain for you. So there are so many APIs available for browser extensions that you can use, like the history API, some sort of proxy API, the tabs API. But for me, this tabs API was so interesting because it allows you to play with different tabs. It has some methods inside, like tabs.create. What it does is it allows you to create a new tab with any arbitrary domain. It also has tabs.update, and what it does is it allows you to update the page with the next URI. And tabs.duplicate is also important because it allows you to make an exact replica of an already opened tab. The next method is tabs.executeScript. This is pretty simple, it allows you to execute JavaScript code. And tabs.hide and tabs.reload, which are pretty easy. And there are so many other methods as well. So out of them, the most interesting ones for me were the create and update methods and also the duplicate method. So let's see, if you want to load Bing.com in a new tab using a browser extension, you can just write these five lines of code that call browser.tabs.create and pass a URL, which is https://www.bing.com. So this is as per the documentation, and this is for the good boys, like not for us. So as an evil mind, I was interested to know what would happen if I try to load local files instead of a normal domain. So then I replaced the Bing URL with a particular local file URI to try to figure out how the browser will treat it. Will it open it or not? So the next moment, Edge gives me this nice error, like, okay, I can't reach this page, and make sure you have got the right web address, that is ms-browser-extension://, then the path for the extension, and then it appends the file path you passed at the end. So basically, it assumes that this is a relative path, joins it with the extension path, and tries to open it. Since that particular path doesn't exist, it gives us an error. So this is not just a thing with extensions, but this is in general: any of the browsers, they don't allow you to load local files at any cost, because this might lead to your local system's files getting stolen. So you can see the image with the Edge and Chrome browsers. Here I'm trying to load local files using JavaScript. Every time it says, okay, we are not allowed to do that, because we care about our users and we will protect them. So since we figured out this browser.tabs.create method was not working for us, the next method that I was looking for was update. So I tried the same thing with the update method, and somehow it worked for me. So next, once I figured out, okay, now I can load the local files.
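To make those tabs API calls concrete, the documented usage and the two experiments described above could be sketched roughly as follows; the local file path is a placeholder of my own, not one taken from the talk.

```javascript
// Documented, harmless usage: open a web page in a new tab.
browser.tabs.create({ url: "https://www.bing.com/" });

// Naive attempt to point a new tab at a local file; legacy Edge treated the
// value as a path relative to ms-browser-extension:// and showed an error.
browser.tabs.create({ url: "file:///C:/Windows/win.ini" });

// The same URL passed to tabs.update of the current tab reportedly loaded
// the local file on the pre-patch browser.
browser.tabs.update({ url: "file:///C:/Windows/win.ini" });
```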
Now I want to load the privileged pages because they are also interesting for me, and it was also working fine for me at that moment. So here you can see, as soon as you click on the open button, the browser loads a local file for me and also a privileged page on Edge. So I reported this bug to Microsoft, but they quickly responded back to me saying we don't support the download API. So even if you load the local files, you have no way to steal them. Like, you literally cannot do anything by loading the local files. And we are not going to fix it. So I said, okay, let's do it another way. So the next moment, the idea that came to my mind is to use the JavaScript URI. A JavaScript URI is something that starts with the javascript: protocol. It has a particular syntax: first javascript, then a colon, and then the JavaScript code. Here we have a simple example, like an a href pointing to javascript:alert(1). It gets rendered in the browser, and when you click on the link, the JavaScript code pops up an alert in your browser. So the good thing about JavaScript URIs is that they execute in the main domain's context, unlike data URIs. You can look at the image: we have a JavaScript URI and a data URI that both point to alert(document.domain), and the JavaScript URI says I'm on html.squarefree.com, while the data URI says a null domain. So basically, data URIs were supposed to execute in the main domain's context a couple of years back, but that created a lot of mess in the browsers. So the browser vendors decided to execute them in a null domain context just to make it safe. So at this point of time, I decided, okay, JavaScript URIs are like the best candidate for us, so why not try it? So I tried the same JavaScript URI with browser.tabs.create, and again, it doesn't work for me. But again, we have our friend called the update method, and I tried the same thing with a JavaScript URI passed to browser.tabs.update, which again calls alert(document.domain), and it worked for me this time. So you can figure out from this picture: this extension should only have been able to run on google.com, and now we are on bing.com, and if you click on the open button, we have JavaScript code execution on bing.com. This is how bad it was, because that's a total violation of privacy, because the user believes that this extension shouldn't be able to run on any other domain except google.com. So this was again reported to Microsoft, saying, okay, last time I reported that I'm able to load the local files, but you said you're not going to fix it, and now we have JavaScript code execution as well. So then again, they said, okay, we got your concern, we understood what you're trying to say, but can you also alert the user's cookies as well? Like, is it possible to steal the user's cookies? And I said, okay, why not? So instead of document.domain, you can just use document.cookie to pop up the user's cookies as well. So since we have a host access permission bypass on Edge, we can steal Google emails, even Facebook data or anything like that. So to demonstrate this attack, let's suppose we have a simple Google email which says, I'm a secret email and I have some coupon code for $1000 cashback, and there we have some random coupon code.
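The bypass itself boils down to a couple of lines. The sketch below mirrors what the speaker describes for the pre-patch EdgeHTML browser (later fixed as CVE-2019-0678) and is only a reconstruction; it does not work on a current browser.

```javascript
// Host access is limited to google.com, yet updating the current tab with a
// javascript: URI executed code in whatever origin that tab was showing.
browser.tabs.update({ url: "javascript:alert(document.domain)" });

// The variant Microsoft asked for: popping the cookies of the current origin.
browser.tabs.update({ url: "javascript:alert(document.cookie)" });
```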
So to demonstrate this attack, you can see I'm using browser.tabs.update, which points to a certain JavaScript URI, and what it does is fetch the particular email element with a particular ID, open a new tab and send it to leak.html. And further, what leak.html does is copy the value from location.hash and write it onto the page. So as soon as you click on the open button, if you are on mail.google.com, it will steal the particular email and display it back on the attacker's domain. So this is how I was able to steal the Google emails. So this proof of concept was sent to Microsoft, and the same thing with the local files as well. Like I thought, okay, now it's working for the domain; what if we try the same thing with the local files as well? So yeah, in this case, it worked as well. So if you remember, in the past we were able to load local files, but Microsoft said, okay, we are not going to fix it because we don't support the download API. And now we have JavaScript code execution on local files as well. So we can chain both of these bugs to steal the local files as well. So that's a simple proof of concept. At first, what we are doing is browser.tabs.update, but pointing to a file URI. And again, browser.tabs.update pointing to a JavaScript URI. So Microsoft was like, okay, now we have to fix it. But what is next? So far, we have JavaScript code execution on local files. We also have a host access permission bypass. Now what is next? So the next thing that came to my mind is always the privileged pages, as I already explained the sensitivity of the privileged pages. So the next moment I was so excited that this would work on the privileged pages as well. So again, I wrote these five lines of code and tried to execute in the context of about:flags. And surprisingly, it wasn't working for me. And I was so surprised, like why is this not working, shaking my head like what is wrong? So the next moment, I was trying to figure out what is wrong with this implementation, like why it is not working. Maybe there are some errors in the console. So I tried to open the developer console to figure out the possible errors, but you can see there are no such errors at all. The reason for that is that most of the sensitive pages in browsers like Chrome, Firefox, and even Edge are protected with a CSP to make sure there shouldn't be any JavaScript code execution. But we cannot see any CSP errors here either, which was pretty strange for me. So then again, I asked myself why this black magic is not working on privileged pages even when we don't have a CSP error. Maybe this time Edge is playing smart, or do we have any other way to load about:flags in Edge? Then the next idea that came to my mind is to use the res protocol. The res protocol is something that is used to fetch certain resources from a module. So instead of about:flags, we can call res://edgehtml.dll/flags.htm. And the next moment, it worked. So this way we now have JavaScript code execution on privileged pages as well, which is pretty bad. So once you have JavaScript code execution on privileged pages, you can enable and disable Adobe Flash Player, and the other possible options which we have already discussed are also possible in the same way.
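A rough reconstruction of that privileged-page step, assuming the Flash toggle on the flags page is reachable by some element id; the id used below is hypothetical and only stands in for the real control.

```javascript
// Load the about:flags content through the res protocol, which the pre-patch
// Edge accepted where about:flags itself was refused.
browser.tabs.update({ url: "res://edgehtml.dll/flags.htm" });

// Then run a javascript: URI in that page's context to click a setting.
// "flash-toggle" is a made-up id standing in for the real checkbox.
browser.tabs.update({
  url: "javascript:document.getElementById('flash-toggle').click()"
});
```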
So again, what we need to do is to call browser.tabs.update pointing to res://edgehtml.dll/flags.htm, and then again some sort of JavaScript URI to fetch the element by ID and click on it, so it will toggle the Adobe Flash Player setting on Edge. Again, what is next? So this was pretty much enough for me, but again, I was trying to figure out if we can do something else as well, and then I got stuck on the reading mode. A reading mode is a feature implemented in Edge which renders a page in a way that is kind of pretty easy to read. In this process, Edge makes sure that there shouldn't be any JavaScript code execution on the page. The main purpose of reading mode is to provide a simplified page to the users. So basically there should not be any advertisement or something like that. For that reason, browser vendors make sure there shouldn't be any JavaScript code execution in reading mode. And there was one bug with the reading mode as well: you cannot put any document in reader mode unless the browser has identified its compatibility. But you can prepend the read: protocol first and then the URL that points to a certain domain, and then Edge will load the particular resource in reading mode as well. So fortunately, I tried the same attack on the reading mode as well, but the reading mode was protected with a certain CSP, and you can see the CSP error which says we do not allow inline script and it will be blocked by Edge. So reading mode was kind of safe, at least for the test cases. In some certain test cases it worked for me, but I was not able to reproduce it further, so that's why I marked it as safe. The other possible feature we can have is JavaScript code execution on other extension pages. Again, you can imagine a situation where one extension is able to disable another extension in the browser, like how bad it will be. So again, now we are on an internal page that belongs to Adblock Plus, and if we try to run our extension on this page, then again we have a CSP violation issue. So yeah, that was safe. The next thing was some CSP privilege issues, because the host permission will not work if there is any CSP error. So next I tried to figure out if we can use the executeScript API to figure out how they deal with the CSP. So let's assume we have a page where the CSP is implemented properly and we have a host permission for the same. You can see the code where we are setting the content security policy to default-src 'self', and we are using browser.tabs.executeScript with a code property where we have to pass the JavaScript code, which is just a simple alert(document.domain). So the way extensions deal with the CSP is that most of the browsers will allow JavaScript from any extension unless it tries to change the DOM tree of the particular document. So let's suppose we have the first example right here. In this case, as I said, let's assume we are on a page which has a perfect CSP in place like this and we try to change the DOM of the particular page. So the possible ways we have are either document.write, or document.body.innerHTML with some JavaScript code inside, and another possible way is to generate a random element and then write inside it.
So all these ways to manipulate a particular DOM tree on a CSP protected page was not allowed by most of the browsers like Firefox and Chrome but it was not protected in case of edge. Like the Execute script API is straightforward as execute any of the JavaScript code on any domain whether you try to change the DOM on a CSP protected page or not like it doesn't matter for it. So to conclude with this presentation is that edge extensions are still in development. Most of the APIs are not supported till the time because in the edge that it has moved to the new chromium based browser as well so I'm not sure whether they are still developing extensions API or not but the ActiveTab is one of the interested permission to work on because it allows you to execute JavaScript code on the current domain so if you are able to perform the same attack with ActiveTab API as well so pretty much you can have all what I presented here as well. So Microsoft they finally decided to fix this bug in March 19 update with the highest possible bounty they have with the CV 2019 0678. So thank you Nikhil for an interesting talk. If you have questions about the talk we have three microphones one, two and three in each one of the aisles if you have a question please come up to the microphone we'll start from microphone number three. Hi, hi and thank you for the interesting talk. I have one question is this bug or is this API also relevant for the new edge coming in January based on chromium engine? No I guess so the APIs are same but since the new edge is running on Chrome so they will not support this API because of they use some others calling conventions I guess I believe. Is that answer your question? Yeah but I have a second one. Yeah go for it. Okay the second one is you tried to open the pages via the RES protocol but the functionality of those pages is it also handled by edge while opening it through the RES protocol not about the about protocol? Yes I guess. Okay we are also working. Yeah. Okay thank you. Any more questions from the crowd or from the internet? Okay then another round of applause for Nikhil for a great talk. Thank you. Thank you. Thank you. you
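Recapping the executeScript behaviour discussed before the Q&A, here is a sketch of the kind of call that, according to the talk, Chrome and Firefox would restrict on a CSP-protected page while pre-patch Edge ran it without complaint. The injected markup is purely illustrative.

```javascript
// Inject script into the active tab via the extension API. On a page served
// with Content-Security-Policy: default-src 'self', the talk claims Chrome
// and Firefox blocked DOM changes like this coming from injected code, while
// legacy Edge reportedly allowed them regardless.
browser.tabs.executeScript({
  code: "document.body.innerHTML = '<h1>injected by the extension</h1>';"
});
```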
|
Browsers are the ones who handle our sensitive information. We entirely rely on them to protect our privacy, that’s something blindly trusting on a piece of software to protect us. Almost every one of us uses browser extensions on daily life, for example, ad-block plus, Grammarly, LastPass, etc. But what is the reality when we talk about security of browser extensions. Every browser extensions installed with specific permissions, the most critical one is host access permission which defines on which particular domains your browser extension can read/write data. You might already notice the sensitivity of host permissions since a little mistake in the implementation flow would lead to a massive security/privacy violation. You can think of this way when you install an extension that has permission to execute JavaScript code on https://www.bing.com, but indeed, it allows javaScript code execution on https://mail.google.com. Which means this extension can also read your google mail, and this violates user privacy and trust. During the research on edge extensions, we noticed a way to bypass host access permissions which means an extension which has permission to work on bing.com can read your google, facebook, almost every site data. we noticed using this flow we can change in internal browser settings, Further, we ware able to read local system files using the extensions. Also in certain conditions, it allows you to execute javaScript on reading mode which is meant to protect users from any javaScript code execution issues. This major flaw in Microsoft Edge extension has been submitted responsibly to the Microsoft Security Team; as a result, CVE-2019-0678 assigned with the highest possible bounty. Outline 1. Introduction to the browser extension This section is going to cover what is browser extensions, and examples of browser extensions that are used on a daily basis. 2. Permission model in browser extensions This section details about the importance of manifest.json file, further details about several permissions supported by edge extensions and at last it describes different host access permissions and the concept of privileged pages in browsers. 3. Implementation of sample extension In this section, we will understand the working of edge extensions and associated files. 4. Playing with Tabs API This section includes the demonstration of loading external websites, local files and privileged pages using the tabs API. 5. Forcing edge extensions to load local files and privileged pages Here we will see how I fooled edge extensions to allow me to load local files and privileged pages as well. 6. Overview of javascript protocol This section brief about the working and the use of JavaScript protocol. 7. Bypassing host access permission The continuing previous section, here we will discuss I was able to bypass host access permission of edge extensions using the javascript URI’s. 8. Stealing google mails Once we bypassed the host access permission, we will discuss how edge extension can read your Google emails without having permission. 9. Stealing local files The continuing previous section, here we will discuss how an edge extension can again escalate his privileges to read local system files. 10. Changing internal edge settings This section details how I was able to change into internal edge settings using edge extensions, this includes enabling/disabling flash, enabling/disabling developer features. 11. 
Force Update Compatibility list This section details how an extension can force-update the Microsoft compatibility list. 12. JavaScript code execution on reading mode? Here we will discuss the working of reading mode and the CSP issues associated with it. 13. Escalating CSP privileges. This section describes how Edge extensions provide more privileges to the user when dealing with content security policy
|
10.5446/53087 (DOI)
|
Hashtag delete Facebook has been around for a while. And still, for many reasons Facebook has a tight grip on various communities depending on the platform for organizing, mobilizing and distributing content. This is also true for alternative culture's needs, but an opposition is rising. Our next speakers, Elle and Rosa Raef, will talk about approaches of activists and artists to use art against Facebook, from graffiti and net art to calls for a Facebook exodus. Elle is an independent art historian from Berlin. He was a member of the Ninja Academy of the Cult of the Dead Cow and Hacktivismo. Elle is sharing the stage today with Rosa Raef, who are part of the Berlin Reclaim Club Culture Network. So let's give a warm welcome and applause to Elle and Rosa Raef. Thanks so much for being here. The topic of my talk is artful resistance against social media monopolies. Today I am presenting as an art historian. I report to you of some interesting conflicts between art and Facebook. I do this faceless and nameless. This is meant as a homage to basic strategies against social media. Some 10 years ago, we, the inhabitants of the digital industrial countries, started to rely heavily on mobile computers and so-called smartphones. This led to a growing importance of social media services. Suddenly they had a use. They were the ideal platform for capturing the time and attention of the always online users. Last week, Sascha Lobo neatly summed this up in his review of the digital decade. The rest is history. We entered the age of surveillance capitalism where a few big services process our expressions and activities as data. And they use it for the most elaborate advertising industry in history. It's the end of the decade and we can't imagine the system to fall apart. No world without Google, Amazon and Facebook seems possible. The social change fostered by these services is really a mess. Attention spans are down to a minimum. Journalism is under pressure. A whole generation is occupied with image feeds and headlines and to texts that no one reads. I completely agree with Sascha Lobo and many others that the new rise of right-wing movements and the horror clowns they elect is directly connected to the superficial feed services where propaganda statements circulate fast. For the neo-fascists, these feeds provide a means of propaganda and panic. For the rest of us, they serve as a system of governance to quote Caroline Wiedemann. She explains that the interface of Facebook serves as a tool of self-evaluation, self-control and competition, pushing out earlier uses of the net like playing with identities, collaborating, peer-to-peer sharing. With Sascha Natsubov-Antel book, surveillance capitalism, we can say that Facebook is a behavioral capture mechanism. So please, if you have a Facebook account, use it to this behave. As I want to show with this talk, art is here for you for inspiration and techniques, some of which can be used as open art technology like the Facebook graffiti that I will show. I hope more tools will come out helping people to manipulate their attention sphere themselves. With my talk Art Against Facebook, I invite you on a tour of contemporary conflicts around Facebook. My aim is to show you artful resistance against social media monopolies. These are open movements that rely on your support to grow and go viral. So what is the current conflict of art in Facebook? In my field, the field of cultural production, Facebook is very influential. 
Basically Facebook has a tight grip on the cultural scene, on the one side with its events calendar and the other side with Instagram as a spectacular image feed. For example, the art market has merged with social media. If your painting is not online, it is as if it doesn't exist. Some collectors just buy after seeing digital images. The same for cultural events. And organizers feel that if your event is not online on Facebook, you will not have guests. This is even true for spaces to try to stay under the radar in every other sense, for example because they are very underground and don't even have a license. Both of these examples are very specific, but they show how Facebook has horrible effects on art and culture. Images and visibility in general are not negotiated outside the commercial space of hyper advertising anymore, but are directly wired to its cybernetic loops. It's a stage of cultural industry that seemed unimaginable not so long ago. So we need theory and practice against the social media monopolies and the harmful effects they have on our private life and on politics and culture. And my humble contribution today will be to document some forms of artful resistance that I saw. My talk has three parts. First example, there is graffiti in the ruins of Facebook. This is about how users can destroy the Facebook interface with unicode texts. It's risky, it's illegal soon and it's great fun. Then comes the intermission. Here we will hear a sermon to the users. A lamentation of hate against Facebook, an initiation to the movement against it. This will be a bit dry and lots of quotes. In fact it is just quotes, so it's just a quick and dirty manifesto. But we need radical theory for our radical practice. Which then will be followed by the second example, club culture against Facebook. How to organize the Facebook exodus of the cultural scene. How to break the monopoly of a global networked events calendar. Here we will have a guest appearance. The name is Rosa Raif, a radical raver from the future who will tell us how to get rid of our problems of the present. So exciting. The first example, there is graffiti in the ruins of Facebook. So Facebook looks like it's in ruins. After countless data scandals and the rising awareness of filter bubbles and psychological manipulations, it has lost its attractiveness as a host of intimate information to many. The service even has to resort to silly games and challenges like 10 years have passed, upload your face twice, viral challenges to keep users engaged. The ruin look of the feed, so this empty, boring feed, is just a front. In fact they don't need the user's uploads that much anymore. Just browsing the feed or interacting with the many tentacles of the service throughout the net generates the necessary data for this advertising machine. The normality it produces and reproduces is still highly problematic. When before it was an open competition of beauty standards and distinction of social status, it is now a more and more subtle micro targeting machine to manipulate the always connected individual. How can we reach those individuals and interrupt this manipulation machine? I have a long interest in graffiti writing and the interventions of urban art in general. This wild art growing everywhere is of high significance in the conflicts around urban space. So I was highly delighted when one day something comparable started to appear in the feeds. One day there was something like a crack through my feet. 
From top to bottom, crossing images, text and video. Scrolling down some posts I found the source. Some cryptic letters with little extra characters attached to them stacked on top of each other. Following the creators I found out about groups where people collected the most effective letter combinations, recombining them into powerful little copy-paste interventions. I subscribed to all of them and enjoyed some months of completely destroyed feeds with waves of letters growing from posts, comments, notification boxes, menu bars and everywhere in between. This was only the first step. But let's pause for a second and explain what are we seeing here. So Facebook aims at being a global platform and therefore it supports large parts of the Unicode spectrum for letters. Which means over a million different letters and special signs from all kinds of different languages and sciences. A lot of these have very specific rules. For example they always go under the letter before. And it turns out a lot of these can be combined and being stacked on top of each other. So some curious artists found that with some small Arabic signs you can actually combine more than a hundred signs on top of each other. Technically you can combine many more but then you reach a security limit of Facebook. Until it is enough to traverse multiple posts with your digital graffiti. This is only one example. There are many more. Let's just look at some tendencies. So there's graffiti for humans. So digital graffiti for human readers only. You can write with signs that look like Latin letters but have a different meaning. Shape catcher is a useful tool online to find such similar characters. You just draw your letter and it shows you a sign that looks like it. This means this cybernetic machine of Facebook is interrupted. You create content that is not meaningful to the server but only to the user. A slogan that is only readable by users. ASCII Art Crossing the Line. A lot of this reminds us of ASCII Art. This means painting with letters like in the early days of the internet. And you can actually use ASCII Art to cross the social media interfaces. So cute and radical form of painting with text. Then we have text bombs. Also interesting are these very dense forms. I call them text bombs. In the middle of a lot of text is weaved into itself like a spray can scribble and then some kind of antennas reach out and grow through the feet. Or you can write outside the box. There are some cute little letters used in mathematics that can be attached to these antennas. That way you can write into the post next to yours. Style writing. This is more for the graffiti nerds. It's not only about crossing the feet but you can actually do style graffiti in the feet. Graphity as an art movement relies on handwriting. But with one million different signs in this global system of digital letters you can actually type graffiti online. Again the tool shape catcher helps to find the right letters that you need. So while urban graffiti only circulates on trains, these graffiti can be forked. They can replicate, copy and paste and recombined. Again for the tracking and analysis of Facebook itself, this will not be meaningful. This only has meaning as graffiti art. This one I call hybrid letters. In the combination of various techniques often something powerful comes out. So this account uses upside down letters with some ornamental outlines. Patterns. A pioneer of this letter based feet art was the artist Glitcher. 
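As a brief technical aside on the mechanism just described, a small sketch of how combining marks stack on a base letter; the marks chosen here are arbitrary examples of mine, not taken from any artist's piece.

```javascript
// Illustrative sketch of the stacking trick: combining marks from the
// U+0300-U+036F block attach to the preceding character, so repeating them
// makes each letter grow above and below the line of text.
const marks = ["\u0300", "\u0316", "\u033F", "\u0330"]; // grave above, grave below, double overline, tilde below

function stack(text, height) {
  return [...text]
    .map(ch => ch + Array.from({ length: height }, (_, i) => marks[i % marks.length]).join(""))
    .join("");
}

console.log(stack("net graffiti", 24)); // letters sprout tails that cross neighbouring lines
```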
The rhythm of his interventions disrupted the feet in a very happy and powerful way. We don't really have the time to enjoy the piece but once a musician friend of mine even used these patterns as notations and we could listen to it. We wing. One artist I follow likes to create patterns of high density. These sometimes even block out the interface completely. You can use techniques like this for example for digital ad-busting if you move it out of your post over the advertising. These are called destroy lines. There's one particular nasty form in urban graffiti, so in the old graffiti with the spray cans, it's used for quick destruction of large territory. It is the destroy line. So what these people do is they walk past the building or stand next to a moving train and just draw one long wavy line with your spray cam which is very nasty and very effective. So while many examples of digital graffiti that I did show, they're very going much along the feet, there's also beauty in these horizontal destroy lines. I like to think that they pull the user out of the linearity of the feet like in this old anti-cybernetic slogan, please step out of the line now. So to sum up this first part, graffiti in the feet is about the digital distortion of letters using the wide typeface of the Unicode system. This system contains millions of diverse characters from global writing and notation systems and is supported by platforms such as Facebook, U2D, International Consumer Base. It is possible to decorate words in the digital platforms with fragments of the Unicode spectrum. Individual letter elements then grow in different directions or intertwine with each other. This is a qualitative leap from ASCII art or other ways to play with science-like emojis, of course. The Unicode spectrum can be used to introduce digital graffiti and concrete poetry into the feeds. A special challenge for this digital graffiti is to override the layout of Facebook, so to assemble letter buildings that exceed the existing frames. And Facebook is just really badly programmed because it would be so easy to stop this. This art form pushes the medium to its artistic limits at the medium of Facebook. It is a clash between the aesthetics of the interface and wild forms of art. This circulating art adapts to the specific economy of social media, like posting, but is subverting and undermining it by breaking the rules. Which means that it carries the potentials of art directly into the digital economy and at the same time creates a radical break. Two years ago at this very conference we did a little self-organized session about this phenomenon. The question was how can we turn this thing from glitch art and text art into an open graffiti technology. We wanted to unlock digital graffiti and vandalism for all in order to spread chaos and defeats and thereby generate some digital fog in the cybernetting system of social media. Or just give people a tool to annoy fascists online. We call it net graffiti and as I showed you before, browsing the hashtag is really great. Luckily, there were some friendly hackers at this conference that volunteered to write an editor that lets you combine some very effective symbols. We call it lettercoder and you can find it on the URL here. So then please use the hashtag net graffiti to mark your creations so others can remix them. So this is all for the topic of net graffiti. Now it's time for the intermission. Why all the hate? Why this vandalism? What do these anti-Facebook extremists want? 
There are some remarkable texts against the current world of social media. In fact, they are countless. Some are particularly spicy. The following is a collage of some of them. It is a sermon to the users. Featuring net critique old-schooler patrols. Also featuring the invisible committee of the imaginary party for coming insurrection. And furthermore featuring the Ipollita collective. This is a sermon to the users. We the users are all suspects whose most intimate details must be known so we can satisfy our compulsive craving for new and immediately obsolete objects. The problem of privacy is endlessly discussed but only enters the public discussion once it has already been violated. This issue is usually coupled with complaints about the immoral of an authoritarian system that divides people into categories. In the era of big data conspiracies arrive. But the real problem is much more concrete and distressing because it affects us all personally and not as an anonymous mass. While certain individuals want to be profiled for the others whatever we do in order to avoid profiling our digital footprint is in as capable. There is no way we can opt out once enlisted in the army of the data suppliers. We no longer shape a discourse. Data is to have the last word. This is the Chimera of data driven society where the role of the human subject is practically irrelevant. The role of humans now is one of the self-acquistions where we relinquish our ability to choose and desire. It seems a parody of the ancient dervish maxim know thyself and instead the messianic promise of the quantified self movement self knowledge through numbers. Give us even more powerful machines hand over all your data, be transparent and we can predict the future. The future of the market of course. While cybernetic governmentality already operates in terms of a completely new logic its subjects continue to think of themselves according to the old paradigm. We believe that our personal data belongs to us like our car or our shoes and that we are only exercising our individual freedom by deciding to let Google, Facebook, Apple, Amazon or the police to have access to them. The object of the great harvest of personal information is not an individualized tracking of the whole population. If the surveillance insinuate themselves into the intimate lives of each and every persons it's not so much to construct individual files as to assemble massive databases that make numerical sense. It is more efficient to correlate the shared characteristics of individuals in a multitude of profiles with the probable developments they suggest. One is not interested in the individual present and entire but only in what makes it possible to determine their potential lines of flight. The advantage of applying the surveillance to profiles, events and virtualities is that statistical entities don't take offense and individuals can still claim they are not being monitored at least not personally. Behind the futuristic promise of a world of fully linked people and objects when cars, fridges, watches, vacuums and dildos are directly connected to each other and to the internet there is what is already there. The fact that the most poorly valent of sensors is already in operation. Myself. I share my geolocation, my mood, my opinions, my account of what I saw today that was awesome or awesome libano. I ran so I immediately shared my route, my time, my performance numbers and their self-evaluation. 
I always post photos of my vacations, my evenings, my riots, my colleagues of what I'm going to eat and who I'm going to fuck. I appear not to do much and yet I produce a steady stream of data. Whether I work or not, my everyday life as a stock of information remains fully valuable. The social factory of Facebook as well as Amazon, Google and any other larger commercial online platforms will turn to a model of commodifying and monetizing data by feeding it value extraction methods that are run by machine learning algorithms. The underlying proprietyization of data is the central strategic point to attack. The discourse of regulatory law as well as ethical commissions will not prevent the next levels of alienation, surveillance and oppression that are coming with machine learning and big data driven AI. The economic inequality and the property relation should be the first common issue beyond all minority based struggles to connect various fights and not obey to the framings and neutralizing offers of liberalism. A more object oriented social network as possible with subject groups around issues, goals, projects, events and the individual is not just the ultimate product in the center of the social graph anymore. End of the intermission. So this was pretty dark, right? From the ruins of the feed and the new net critique, let's now move to the future. How to organize the Facebook Exodus. At the last KAUS communication camp this summer, the Berlin based network Reclaim Club Culture announced their campaign idea of a Facebook Exodus. They want to motivate the club and cultural scene to support free alternatives by moving their biggest digital capital which are the events announcements. Once the information monopoly on events is cut through, maybe more people will make the step to leave the platform they don't even like anymore. Because very often what you hear is that one central thing that keeps people on Facebook are the events. Today we have as a guest the party political spokesperson of Reclaim Club Culture. She's a raver from the future. Please welcome Rosa Rafe. So it's over Facebook. I will delete Facebook for today. Superb, I'm over you. I'm over it all. It likes me, I get into the group. Fuck Facebook. I can promote my events by myself. My movement is my, our movement of company. Rosa you have to help me. With this campaign we want to defeat the master valians. We want to call for a human strike online. We are all Rosa. We want to have targeted ads to destroy Facebook. We want to try to shape a better future. We want to share our list of alternatives. Facebook makes good parties and bad parties. Facebook should not know where I am going. We want to get rid of the addiction Facebook. And now it says here because we are all Rosa's that the crowd should shout fuck Facebook. Fuck Facebook. We want to use flyers and stickers because they are cool. We want to check out the fatty verse. Don't be the consumer, be the producer. Don't be an Instagram DJ. Facebook stole my friends. You won't find me on Facebook anymore. We want to have a cruel dystopia where Facebook, the people who are on Facebook are zombies like him perhaps. Don't find me on opium, not on Facebook. Kill it. Kill it. I'm sorry for something. And if you want to join the movement we have here some flyers because flyers are cool. You can join the movement and there is an address. Thank you. Rosa for the sparkling intervention. We do have time for questions. Should you have any? Please use the microphones. 
I asked the angel, no questions from the internet. What a surprise. Actually, remarks maybe? No questions, everyone convinced? Maybe one question in the room, who will delete Facebook now? We do have a question on mic one. Sorry. Thanks for this great talk and the presentation. I just wanted to ask if I don't have Facebook anymore, which I don't. Is there a way to help destroy it? Absolutely. These interventions work. The graffiti works on any kind of internet service that is harmful for our life. The campaign is hosted elsewhere. So please join the Rosa's campaign. All right. Thanks. And now mic one please. Thank you a lot for the talk. Thank you for the idea. That's also something that I feel very relevant. I'm also from the art field. But I have another question. I heard someone a few years ago, I don't remember who, saying that actually the social media could be put down as soon as we would look into the question of advertisement. And I was wondering, I mean, if we make credible that advertisement is not credible. That is, that is as in work. I think that there's a good also power in bringing down also those platforms through this means. And so I was wondering if there was any idea going into this direction also. Thanks. That's a very central point, of course. And I tried to quote some people that are really working only on this. What is this economically? What is this culturally? So the critique is there. Facebook has a lot of power, so I guess we have to continue on this and the artistic level on the level of theory and critique political campaigns. I mean, it's really, I'm just reporting from global movements that are actually really big and growing and very professional interventions and research are done in this way. So thanks for making this point. And we now have a question from the Internet. The Internet asks why I hate specifically Facebook. Why not the other companies like Microsoft, Twitter and Google? I want to give an example. There was this service in some countries that don't have a stable Internet and people actually could use Facebook there because Facebook is so friendly. And more and more people in these countries believe that Facebook is the Internet. Actually, I think that even in some Western countries, European countries, people actually, or kids actually believe that Facebook is somehow the Internet. So maybe we don't realize this, but for many people this is the point. So it's a good point, a good starting point for an intervention. Question from Mike. Number two, please. Thank you for a great and interesting talk. I was just wondering when you see graffiti in the urban space, there is no segmentation of it when the tram comes by and graffiti on it. You're kind of forced to watch it, but on the Internet, especially Facebook, you have these algorithms that sort you into your interests. So how can you, as a net graffiti artist, escape the virtual algorithmic bubble to expose people that don't really have the same interests as you? Yeah, you should always write congratulations and happy birthday and everything in this post because then they are more effective. You just have to research the basic mechanism. So make really popular posts with videos and lots of seasonal greetings and write about food and love and life. And then that's how it works. Oh yeah, buy a lot of likes. Alright, we have another question from the Internet. Did Facebook try to fix the broken characters? Not yet. Easy answer. And then question from Mike. Number one, please. 
Yeah, thank you, Rosa Raves, for your enjoyable intervention. Could you maybe stress out a little bit how this Exodus is planned or organized? Yes, we want to do a call in the Berlin Club scene that people, that the clubs are like also use different networks. Yes, all clubs, all club culture in the world, use newsletter, communicate with your community, with your friends, with your society, and then you can communicate the change in another platform or another way. And we want to also make clear that it's not just that Facebook is something where I can promote my event or something like that, but that Facebook is really collecting data from the visitors or the participants from these events and having this political dimension in the campaign. Yes, fuck Facebook. Fuck all companies, I mean Google too, but first fuck Facebook. And we have another question from Mike. Number two, please. I tried to look at the Glitter Twitter account and I was disappointed that they fixed it so that the tweet, the text doesn't bleed outside the tweet section. By the way, I read that this weird text that you said human can read but the computer cannot read, but actually it's a problem for voiceover, so the blind people cannot read the text and I wonder if there's like a accessible option for the NetGraph ID. No, because the second question, no, it's a contradiction if the machine can't read it, people who need help reading text also can't. Yeah, so it's not possible. Yeah, but it's like a riot or something, it's not to spread information that is valuable to people, of course it will exclude people. The first question, yeah, of course, Glitter is also not active anymore, but I wanted to give Glitter credit for starting this all and some of Glitter's methods are outdated. If you check the NetGraph ID hashtag, I can post some lines that are still working. You can copy and paste them. All right, and another question from Mike. Three, please. Yes, how long can I and other people see this text? For one hour, for one second, how long does it take that Facebook, yeah, we refresh the site and you can see it anymore. Yeah, it's a very specific question. I think the beauty of it is it actually crosses the whole feed. So if you're reading something else, suddenly there's graffiti in the other post. So yeah, you will scroll past it, but you will have seen it a long time before. So the temporality of it, your concerning question is really kind of beautiful because it can be reposted, then it appears elsewhere, gets re-contextualized, then you push it to somewhere where you need it, maybe on a fascist web post. So it's reusable. It's continuously, it's just text, copy and paste. You can use it on the phone, on the computer, the individual posts, they are like any social, useless social media posts. It's only for some seconds, but it's very effective. So there was this number that the attention span is now down to eight seconds, but yeah, actually in eight seconds you can do amazing things. Okay, thank you. And another question from Mike, two, please. Thanks again for the great talk and as well showing this new way of rebel against those big companies. My big wonder about it is if we do the Exodus from Facebook and we try to open or build a different alternative community that inevitably will be part of the network or like online, because this is where we go in today, we're not going to create more flyers, we're not going to print more stuff, it probably will work in the digital sphere. 
How would you prevent a new force that's raising exactly like Facebook, how the new generation of those companies to prevent them to repeat themselves and actually really manage to create online and alternative independent social network? Basically this only can be part of a larger social political struggles to regulate companies that are this harmful for our personal life or psychological life. So it has to be a bigger struggle on different fields and I mean now it's even getting more serious because already kids are using these services so they are affected much earlier on. So yeah, it has to be in every way possible but the role of art as I see it, that's why I wanted to show artistic examples, it's really to provide inspiration, to provide some fantasies of basically a world after Facebook and we still have to believe in it and artists maybe have food. Another question from Mike One. Hi, thanks for the talk, it's great, I'm not on Facebook so I'm wondering if the Facebook graffiti is documented somewhere else, especially since I assume eventually they're going to ruin your fun, I want to see it somewhere forever. Actually I made these screenshots while not being signed in on the site so you can browse it apparently. Yeah, so it works, you just use the hashtag net graffiti on Twitter and Facebook and without being signed in and you can view it, you can have the fun of it without giving them your data. Alright, I think that wraps it up, let's again give a warm round of applause for Elle and Rosa Rae. Thank you for watching.
|
There is graffiti in the ruins of the feed and the event-info-capital is emigrating. Currently Facebook has a tight grip on the cultural scene with its events-calendar and with Instagram as a spectacular image feed. But an opposition is rising. Graffiti and net-art are merging with hacking. Activists are using facebook graffiti, through circulating UTF-8 textbombs that cross the layout of the feed. The Berlin network Reclaim Club Culture meanwhile is calling for a Facebook Exodus. They want to motivate the club and cultural scene to support free alternatives, by moving their biggest information capital, which are the event announcements.
|
10.5446/53090 (DOI)
|
Good morning again here in Dagstam. First talk for today is by Hannes Mehnert. It is titled Leaving Legacy Behind. It's about reduction of carbon footprint through unikernels in MirageOS. Give a warm welcome to Hannes. Thank you. So let's talk a bit about legacy. The legacy we have nowadays: we run services usually on a UNIX-based operating system, which is demonstrated here on the left a bit. So at the lowest layer we have the hardware, so some physical CPU, some block devices, maybe a network interface card and maybe some memory, some non-persistent memory. On top of that we usually run the UNIX kernel, so to say, which is marked here in brown, and which consists of a file system. Then it has a scheduler, it has some process management, it has a network stack, so a TCP/IP stack. It also has some user management and hardware drivers, so it has drivers for the physical hardware, for the network interface and so on. The brown part, the kernel, runs in privileged mode; it exposes a system call API and/or a socket API to the actual application we are there to run, which is here in orange. So the actual application is on top, which is an application binary. It may depend on some configuration files distributed randomly across a file system with some file permissions and so on. The application itself also likely depends on a programming runtime, which may either be a Java virtual machine if you run Java, or a Python interpreter if you run Python, or a Ruby interpreter if you run Ruby, and so on. Then additionally we usually have a system library, libc, which is basically the runtime library of the C programming language, and it exposes a much nicer interface than the system calls. Plus you may have an OpenSSL or another crypto library as part of the application binary, which is also here in orange. So what's the job of the kernel? The brown stuff actually has a virtual memory subsystem and it should separate the orange stuff from each other. So you have multiple applications running there, and the brown stuff is responsible to ensure that the different pieces of orange stuff don't interfere with each other, so that they are not randomly writing into each other's memory and so on. Now if the orange stuff is compromised, so if you have some attacker from the network or from wherever else who's able to find a flaw in the orange stuff, the kernel is still responsible for strict isolation between the orange stuff, so as long as the attacker only gets access to the orange stuff it should be very well contained. But then we look at the bridge between the brown and orange stuff, so between kernel and user space, and there we have an API which is roughly 600 system calls, at least on my FreeBSD machine here, in sys/syscall.h. So it's 600 different functions, or the width of this API is 600 different functions, which is quite big, and it's quite easy to hide some flaws in there, and as soon as you're able to find a flaw in any of those system calls you can escalate your privileges. And then you basically run in the brown mode, so in kernel mode, and you have access to the raw physical hardware and you can also read arbitrary memory from any process running there. So now over the years it actually evolved and we added some more layers, which is hypervisors: at the lowest level we still have the hardware, but on top of the hardware we now have a hypervisor, whose responsibility is to split the physical hardware into pieces and slice it up and run different virtual machines. 
So now we have the white stuff, which is the hypervisor, and on top of that we have multiple brown things and multiple orange things as well. So now the hypervisor is responsible for distributing the CPUs to virtual machines and the memory to virtual machines and so on. It is also responsible for selecting which virtual machine to run on which physical CPUs, so it actually includes a scheduler as well. And the hypervisor's responsibility is again to isolate the different virtual machines from each other. Initially hypervisors were done mostly in software; nowadays there are a lot of CPU features available which give you some CPU support, which makes them fast, and you don't have to trust so much software anymore, but you have to trust in the hardware. So that's extended page tables and VT-d and VT-x stuff. Okay, so that's the legacy we have right now. So when you ship a binary you actually care about some tip of the iceberg, that is the code you actually write and you care about, you care about deeply, because it should work well and you want to run it. But at the bottom you have the whole operating system, and that is the code the operating system insists that you need. So you can't get it without the bottom of the iceberg. So you will always have process management and user management and likely as well the file system around on a UNIX system. Then in addition, back in May I think there was a blog entry from someone who analyzed, from Google Project Zero, which is a security research team, a red team which tries to find a lot of flaws in widely used applications, and they found in a year maybe 110 different vulnerabilities which they reported and so on. And someone analyzed what these 110 vulnerabilities were about, and it turned out that for more than two thirds of them the root cause of the flaw was memory corruption. Memory corruption means arbitrary reads or writes of memory which the process is not supposed to touch. So why does that happen? That happens because on a UNIX system we mainly use programming languages where we have tight control over the memory management. So we do it ourselves. So we allocate the memory ourselves and we free it ourselves. There is a lot of boilerplate we need to write down, and that is also a lot of boilerplate which you can get wrong. So now we talked a bit about legacy. Let's talk about the goals of this talk. The goal is on the one side to be more secure, so to reduce attack vectors, because C and languages like that are from the 70s, and we may have some languages from the 80s or even from the 90s which offer you automated memory management and memory safety. Languages such as Java or Rust or Python or something like that. But it turns out not many people are writing operating systems in those languages. Another point here is I want to reduce the attack surface. So we have seen this huge stack here and I want to minimize the orange and the brown part. Then as an implication of that I also want to reduce the runtime complexity, because it is actually pretty cumbersome to figure out what is now wrong, why does your application not start, and if the whole reason is that some file on your hard disk has the wrong file system permissions, that is pretty hard to track down if you are not yet a UNIX expert who has lived in the system for years or at least months. And then the final goal, thanks to the topic of this conference and to some analysis I did, is to actually reduce the carbon footprint. 
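To make the memory-safety point concrete, here is a minimal OCaml sketch (not from the talk): allocation is handled by the garbage collector, and an out-of-bounds access raises an exception instead of silently corrupting memory, which is exactly the class of flaw behind the Project Zero numbers mentioned above.

  (* Allocation is implicit; the garbage collector reclaims memory,
     so there is no free() to forget or to call twice. *)
  let greeting name = "hello " ^ name

  let () =
    print_endline (greeting "world");
    let a = Array.make 4 0 in
    (* An out-of-bounds access is detected at runtime and raises
       Invalid_argument instead of reading or writing foreign memory. *)
    match a.(10) with
    | _ -> ()
    | exception Invalid_argument _ -> print_endline "out-of-bounds access caught"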
So if you run a service, certainly that service does some computation, and this computation takes some CPU ticks. So it takes some CPU time in order to be evaluated. And now reducing that means: if we condense down the complexity and the code size, we also reduce the amount of computation which needs to be done. These are the goals. What is a MirageOS unikernel? That is basically the project I have been involved in for six years or so. The general idea is that each service is isolated in a separate MirageOS unikernel. So your DNS resolver or your web server don't run on this general purpose Unix system as a process, but you have a separate virtual machine for each of them. So you have one unikernel which only does DNS resolution. And in that unikernel you don't even need user management. You don't even need process management, because there is only a single process. There is a DNS resolver. Actually a DNS resolver also doesn't really need a file system, so we got rid of that. We also don't really need virtual memory, because we only have one process, so we don't need virtual memory and we just use a single address space. So everything is mapped in a single address space. We use a programming language called OCaml, which is a functional programming language which provides us with memory safety. So it has automated memory management. And we use this memory management and the isolation which the programming language guarantees us by its type system. We use that to say, OK, we can all live in a single address space and it will still be safe, as long as the components are safe and as long as we minimize the components which are by definition unsafe, so if we need to run some C code there as well. So in addition, if we have a single service we only put in the libraries, the stuff we actually need in that service. So as I mentioned, the DNS resolver won't need user management. It doesn't need a shell. Why would I need a shell? What would I need to do there, and so on. So we have a lot of libraries, a lot of OCaml libraries, which are picked by the single service, or which are mixed and matched for the different services. So the libraries are developed independently of the whole system or of the unikernel, and are reused across the different components or across the different services. Some further limitations, which I take as freedom and simplicity: not only do we have a single address space, we are also only focusing on single core and have a single process. So we don't have processes; we don't know the concept of a process. We also don't work in a preemptive way. Preemptive means that if you run on a CPU as a function or as a program, you can at any time be interrupted because something which is much more important than you can now get access to the CPU. And we don't do that. We do cooperative tasks. So we are never interrupted. We don't even have interrupts. So there are no interrupts. And as I mentioned, it's executed as a virtual machine. So how does that look like? Now we have the same picture as previously. We have at the bottom the hypervisor. Then we have the host system, which is the brownish stuff. Then on top of that we have maybe some virtual machines. Some of them run, via KVM and QEMU, a Unix system using some virtio, that is on the right and on the left. And in the middle we have these MirageOS unikernels, where in the host system we don't run any QEMU, but we run a minimized so-called tender, which is this solo5-hvt monitor process. 
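To give an idea of what such a single-service unikernel looks like in code, here is a rough sketch in the spirit of the well-known mirage-skeleton "hello" example. The exact names (Mirage_time.S, foreign, register and so on) differ between MirageOS versions, so treat this as illustrative rather than the definitive API: a config.ml declares which abstract devices the service needs, and the unikernel itself is a functor over those device signatures.

  (* config.ml: declare the devices this unikernel needs; the mirage
     tool later picks target-specific implementations (Xen, hvt, unix, ...). *)
  open Mirage
  let main = foreign "Unikernel.Hello" (time @-> job)
  let () = register "hello" [ main $ default_time ]

  (* unikernel.ml: the service itself, parametrised over a time device. *)
  open Lwt.Infix
  module Hello (Time : Mirage_time.S) = struct
    let start _time =
      let rec loop = function
        | 0 -> Lwt.return_unit
        | n ->
          Logs.info (fun m -> m "hello, iteration %d" n);
          Time.sleep_ns (Duration.of_sec 1) >>= fun () ->
          loop (n - 1)
      in
      loop 4
  end

Configured and built with the mirage command-line tool (something like mirage configure -t hvt followed by make), the same unikernel.ml can be compiled for KVM/Solo5, Xen or a plain Unix process, because it only ever talks to the abstract Time signature.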
So that solo5-hvt tender is something which just tries to allocate, or will allocate, some host system resources for the virtual machine and then does the interaction with the virtual machine. What solo5-hvt does in this case is to set up the memory, load the unikernel image, which is a statically linked ELF binary, and set up the virtual CPU. So the CPU needs some initialization, and then booting is a jump to an address. It's already in 64-bit mode. There's no need to boot via 16 or 32-bit modes. Now solo5-hvt and MirageOS also have an interface, and the interface is called hypercalls, and that interface is rather small. So it only contains in total 14 different functions: there is the main function, yield, a way to get the argument vector, and clocks. Actually two clocks: one is a POSIX clock, which takes care of this whole timestamping and time zone business; another one is a monotonic clock, which by its name guarantees that time will pass monotonically. Then you have a console interface. The console interface is only one way. So we only output data. We never read from the console. A block device, well, block devices and network interfaces, and that's all the hypercalls we have. To look a bit further into detail of how a MirageOS unikernel looks: here I pictured on the left again the tender at the bottom and then the hypercalls, and then in pink I have the pieces of code which still contain some C code in a MirageOS unikernel. And in green I have the pieces of code which do not include any C code but only OCaml code. So looking at the C code, which is dangerous because in C we have to deal with memory management on our own, which means it's a bit brittle and we need to carefully review that code: it is definitely the OCaml runtime which we have here, which is around 25,000 lines of code. Then we have a library which is called nolibc. It is basically a C library which implements malloc and string comparison and some basic functions which are needed by the OCaml runtime. That's roughly 8,000 lines of code. That nolibc also provides a lot of stubs which just exit or return null for the OCaml runtime, because, well, we use an unmodified OCaml runtime to be able to upgrade our software more easily if we don't have any patches for the OCaml runtime. Then we have a library called Solo5 bindings, which is basically something which translates into hypercalls, which can access the hypercalls and which communicates with the host system via hypercalls. That is roughly 2,000 lines of code. Then we have a math library for sine and cosine and tangent and so on. That's just the openlibm, which is originally from the FreeBSD project, roughly 20,000 lines of code. So that's it. So I talked a bit about Solo5, about the bottom layer, and I will go a bit more into detail about the Solo5 stuff, which is really the stuff you run at the bottom of MirageOS. There's another choice: you can also run Xen or Qubes at the bottom of a MirageOS unikernel, but I'm focusing here mainly on Solo5. So Solo5 is a sandboxed execution environment for unikernels. It handles resources from the host system, but only statically. So you say at startup time how much memory it will take, how many network interfaces and which ones are taken, and how many block devices and which ones are taken by the virtual machine. You don't have any dynamic resource management, so you can't add a new network interface at a later point in time; that's not supported. And that makes the code much easier. 
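To give a feel for how narrow this boundary is compared to the roughly 600 system calls mentioned earlier, here is an illustrative OCaml signature for such a hypercall layer. The names below are invented for the example (the real interface lives in the Solo5 bindings and differs in detail); the point is the shape: a handful of functions for clocks, console output, block and network I/O, and yield.

  (* Illustrative only: a minimal hypercall-style interface.
     The actual Solo5/MirageOS bindings differ in names and types. *)
  module type HYPERCALLS = sig
    val clock_wall_ns : unit -> int64        (* POSIX wall clock, in nanoseconds *)
    val clock_monotonic_ns : unit -> int64   (* monotonically increasing clock *)
    val console_write : string -> unit       (* output only; the console is never read *)
    val block_read : sector:int64 -> bytes -> unit
    val block_write : sector:int64 -> bytes -> unit
    val net_read : bytes -> int option       (* None if no packet is pending *)
    val net_write : bytes -> unit
    val yield : deadline_ns:int64 -> unit    (* sleep until I/O is ready or the deadline passes *)
  end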
We don't even have dynamic allocation inside of Solo5. Then we have a hypercall interface. As I mentioned, it's only 14 functions. We have bindings for different targets. We can run on KVM, which is a hypervisor developed for the Linux kernel, but also on bhyve, which is a FreeBSD hypervisor, or VMM, which is an OpenBSD hypervisor. We also target other systems such as Genode, which is an operating system based on a microkernel, written mainly in C++. Virtio, which is a protocol usually spoken between the host system and the guest system; virtio is used in a lot of cloud deployments, and QEMU for example provides you with a virtio protocol implementation. The last implementation of Solo5, or bindings for Solo5, is seccomp. Linux seccomp is a filter in the Linux kernel where you can restrict your process so that it will only use a certain number, a certain set, of system calls. We use seccomp so you can deploy it without a virtual machine in the seccomp case, but you are restricted in which system calls you can use. Solo5 also provides you with a host system tender where applicable. So in the virtio case it is not applicable, and in the Genode case it is also not applicable. For KVM we already saw solo5-hvt, that is a hardware virtualized tender, which is just a small binary: where you run QEMU it's at least hundreds of thousands of lines of code, in the solo5-hvt case it is more like thousands of lines of code. So here we have a comparison, from left to right, of Solo5 and how the host system kernel and the guest system work. In the middle we have a virtual machine, so a common Linux QEMU/KVM based virtual machine for example, and on the right hand side we have the host system and a container. A container is also a technology where you try to restrict as much access as you can from a process so it is contained, and a potential compromise is also very isolated and contained. So on the left hand side we see that Solo5 is basically: some bits and pieces are in the host system, so the solo5-hvt, and then some bits and pieces are in the unikernel, so that is the Solo5 bindings I mentioned earlier, and that is to communicate between the host and the guest system. In the middle we see that the API between the host system and the virtual machine is much bigger; that is commonly using virtio, and virtio is really a huge protocol which does feature negotiation and all sorts of things where you can always do something wrong, like you can do something wrong in the floppy disk driver, and that led to some exploitable vulnerability, although nowadays most operating systems don't really need a floppy disk drive anymore. And on the right hand side you can see that the host system interface for a container is much bigger than for a virtual machine, because the host system interface for a container is exactly those system calls you saw earlier, so it is around 600 different calls, and in order to evaluate the security you need basically to audit all of them. So that is just a brief comparison between those. If we look into more detail what shapes Solo5 can have: here on the left side you can see it running in a hardware virtualized tender, which is: you have Linux, FreeBSD or OpenBSD at the bottom, and you have the Solo5 blob, which is the blue thing here in the middle, and then on top you have the unikernel. 
On the right hand side you see the Linux seccomp process, and you have a much smaller Solo5 blob, because it doesn't need to do that much anymore: all the hypercalls are basically translated to system calls, so you actually get rid of them, and you don't need to communicate between the host and the guest system, because with seccomp you run as a host system process, so you don't have this virtualization. The advantage of using seccomp is as well that you can deploy it without having access to virtualization features of the CPU. Now, to get it in even smaller shape, there's another backend I haven't talked to you about; it's called Muen, a separation kernel developed in Ada. So now we basically try to get rid of this huge Linux system below it, which is a big kernel thingy here. And Muen is an open source project developed in Switzerland in Ada, as I mentioned, and it uses SPARK, which is a proof system, which then guarantees memory isolation between the different components. And Muen now goes a step further and says: oh yeah, well, you as a guest system, you only do static allocations, you don't do dynamic resource management; we as a host system, we as a hypervisor, we don't do any dynamic resource allocations either. So it only does static resource management, so at compile time of your Muen separation kernel you decide how many virtual machines or how many unikernels you are running and which resources are given to them. You even specify which communication channels are there. So if one of your virtual machines needs to talk to another one, you need to specify that at compile time. And at runtime you don't have any dynamic resource management, so that again makes the code much easier, much less complex, and you get to much fewer lines of code. So to conclude with this, MirageOS and also Muen and Solo5 and how that all fits, I like to cite Antoine de Saint-Exupéry: perfection is achieved, not when there's nothing more to add, but when there's nothing left to take away. I mean, obviously the most secure system is a system which doesn't exist. Let's look a bit further into the decisions of MirageOS and so on. Why do we use this strange programming language called OCaml, what is it all about, and what are the case studies? So OCaml has been around for more than 20 years. It's a multi-paradigm programming language. The goal for us and for OCaml is usually to have declarative code. To achieve declarative code you need to provide the developers with some orthogonal abstraction facilities, such as, here, variables and functions, which you likely know if you're a software developer. Also higher order functions, which just means that a function is able to take a function as input. Then in OCaml we try to always focus on the problem and not get distracted by boilerplate. A running example again would be this memory management: we don't manually deal with that, we have computers who actually deal with that. In OCaml you have a very expressive static type system which can spot a lot of invariants, or violations of invariants, at build time. So the program won't compile if you don't handle all the potential return types or return values of your function. Now, a type system, you may know it from Java. It's a bit painful if you have to express at every location where you want to have a variable which type this variable is. What OCaml provides is type inference, similar to Scala and other languages, so you don't need to type all the types manually. 
And types are also, unlike in Java, erased during compilation. So types are only information about values that the compiler has at compile time, but at run time these are all erased, so they don't exist, you don't see them. And OCaml compiles to native machine code, which I think is important for security and performance, because otherwise you run an interpreter or an abstract machine and you have to emulate something else, and that is never as fast as it could be. OCaml has one distinct feature, which is its module system. So you have all your values, which are types or functions, and each of those values is defined inside of a so-called module, and the simplest module is just a file. But you can nest modules, so you can explicitly say: oh yeah, this value or this binding is now living in a sub-module hereof. Each module you can also give a type, so it has a set of types and a set of functions, and that is called a signature, which is the interface of the module. Then you have another abstraction mechanism in OCaml, which is functors, and functors are basically compile-time functions from module to module. So they allow parametrization: you can implement your generic map structure, where a map is just a key-value store whose implementation is maybe a binary tree, and all you need to have is some comparison for the keys. And that is modeled in OCaml by a module. So you have a module called Map and you have a functor called Make, and Make takes some module which implements this comparison method and then provides you with a map data structure for that key type. And in MirageOS we actually use the module system quite a bit more, because we have all these resources which are different between Xen and KVM and so on. So each of the different resources, like a network interface, has a signature and target-specific implementations. So we have the TCP/IP stack, which is much higher up than the network card. It doesn't really care if you run on Xen or if you run on KVM. You just program against this abstract interface, against the interface of the network device. But you don't need to write in your TCP/IP stack any code to run on Xen or to run on KVM. MirageOS also doesn't really use the complete OCaml programming language. OCaml also provides you with an object system, and we barely use that. Also, OCaml allows you mutable state, and we barely use that mutable state; we use mostly immutable data whenever sensible. We also have a value passing style, so we put state and data as input. State is just some abstract state, and data is just a byte vector in a protocol implementation, and then the output is also a new state, which may be modified, and maybe some reply, so some other byte vector or some application data. Or the output may as well be an error, because the incoming data and state may be invalid or may violate some constraints. And errors are also explicitly typed, so they are declared in the API, and the caller of a function needs to handle all these errors explicitly. As I said, single core, but we have some promise-based or event-based concurrent programming stuff. And here we have the ability to express really strong invariants, like "this is a read-only buffer", in the type system. And the type system is, as I mentioned, only compile time, no runtime overhead, so it's all pretty nice and good. So let's take a look at some of the case studies. The first one is a unikernel called the Bitcoin Piñata. 
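The Map functor mentioned here is part of the OCaml standard library, so the pattern can be shown concretely: you hand Map.Make a module with a key type and a compare function, and it returns a complete map implementation for that key type.

  (* Map.Make is a functor: a compile-time function from a module
     (key type plus comparison) to a module (a map for that key type). *)
  module IntMap = Map.Make (struct
    type t = int
    let compare = compare
  end)

  let () =
    let m = IntMap.(empty |> add 1 "one" |> add 2 "two") in
    print_endline (IntMap.find 2 m)  (* prints "two" *)

MirageOS uses the same mechanism at a larger scale, as described above: the TCP/IP stack is a functor over a network-device signature, so the identical protocol code can be applied to the Xen, Solo5 or Unix implementation of that signature.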
The Bitcoin Piñata started in 2015, when we were happy with our from-scratch developed TLS stack. TLS is transport layer security, so what you use if you browse via HTTPS. So we have a TLS stack in OCaml and we wanted to do some marketing for that. The Bitcoin Piñata is basically a unikernel which uses TLS and provides you with TLS endpoints, and it contains the private key for a Bitcoin wallet which is filled with... which used to be filled with 10 Bitcoins. And this means it's a security bait: if you can compromise the system itself you get the private key and you can do whatever you want with it. And being on this Bitcoin blockchain also means it's transparent, so everyone can see whether it has been hacked or not. And it has been online for three years and it was not hacked, but the Bitcoins we got were only borrowed from friends of ours and they were then reused in other projects. It's still online, and you can see here on the right that we had some HTTP traffic, an aggregate of maybe 600,000 hits there. Now I have a size comparison of the Bitcoin Piñata on the left. You can see the unikernel, which is less than 10 megabytes in size, or in source code maybe 100,000 lines of code. And on the right hand side you have a very similar thing but running as a Linux service, so it runs an OpenSSL s_server, which is the minimal TLS server you can get basically on a Linux system using OpenSSL. And there we have maybe a size of 200 megabytes and maybe 2 million lines of code, so that's roughly a factor of 25, and in other examples we even got a bit less code, a much bigger factor. Performance analysis I showed, well, also in 2015 we did some evaluation of our TLS stack, and it turns out we are in the same ballpark as other implementations. Another case study is a CalDAV server which we developed last year with a grant from the Prototype Fund, which is German government funding. It is interoperable with other clients. It stores data in a remote Git repository, so we don't use any block device or persistent storage, but we store it in a Git repository, so whenever you add a calendar event it actually does a git push. And yeah, we also recently got some integration with CalDavZAP, which is a JavaScript user interface, and we just bundle that with the thing. It's online, open source, there's a demo server and a data repository online. Yes, some statistics, and I zoom in directly to the CPU usage. So we had the luck that for half of a month we used it as a process on a FreeBSD system, and that happened roughly in the first half, until here. And then at some point we thought, oh yeah, let's migrate it to a MirageOS unikernel and not run the FreeBSD system below it. And you can see here on the X axis the time, so there is the month of June, starting with the first of June on the left and the last of June on the right. And on the Y axis you have the number of CPU seconds here on the left, or the number of CPU ticks here on the right. The CPU ticks are virtual CPU ticks, which are counters from the hypervisor, so from bhyve on FreeBSD here in that system. And what you can see here is this massive drop by a factor of roughly 10, and that is when we switched from a Unix virtual machine with a process to a freestanding unikernel. So we actually use much less resources. And if we look into the bigger picture here, we also see that the memory dropped by a factor of 10 or even more; this is now a logarithmic scale here on the Y axis. 
The network bandwidth increased quite a bit, because now we do all the monitoring traffic also via the network interface and so on. Okay, that's CalDAV; another case study is authoritative DNS servers. I just recently wrote a tutorial on that, which I will skip because I'm a bit short on time. Another case study is a firewall for Qubes OS. Qubes OS is a reasonably secure operating system which uses Xen for isolation of workspaces and applications such as PDF readers. So whenever you receive a PDF you start a virtual machine which is only run once, well, which is just run to open and read your PDF. And the Qubes Mirage firewall is now a small, a tiny, replacement for the stock firewall VM, written in OCaml. And instead of roughly 300 megabytes you only use 32 megabytes of memory. There's also recently some support for dynamic firewall rules as defined by Qubes 4.0. That is not yet merged into master but it's under review. Libraries in MirageOS: yeah, since we write everything from scratch and in OCaml, we don't have every protocol, but we have quite a few protocols. There are also more unikernels right now, which you can see here. The slides are also online in the Fahrplan, so you can click on the links later. Reproducible builds: for security purposes we don't yet ship binaries, but I plan to ship binaries. In order to ship binaries I don't want to ship non-reproducible binaries. What is reproducible? It means that if you have the same source code you should get binary-identical output. And issues are temporary file names and timestamps and so on. In December we managed in MirageOS to get some tooling on track to actually test the reproducibility of unikernels, and we fixed some issues, and now all the tested MirageOS unikernels are reproducible, which are basically most of them from this list. Another topic is supply chain security, which is important I think. And this is still work in progress, we haven't deployed it widely yet, but there are some test repositories out there to provide signatures signed by the actual author of a library, carried across until the user of the library can verify that, and some decentralized authorization and delegation of that. What about deployment in conventional orchestration systems such as Kubernetes and so on? We don't yet have a proper integration of MirageOS, but we would like to get some proper integration there. We already generate some libvirt.xml files from Mirage. So for each unikernel you get the libvirt.xml and you can take that and run it in your libvirt-based orchestration system. Then we also generate those .xl and .xe files, which I personally don't really know much about, but that's it. On the other side I developed an orchestration system called Albatross, because I was a bit wary: if I now have those tiny unikernels which are megabytes in size, should I then trust the big Kubernetes, which is maybe a million lines of code, running on the host system with privileges? So I thought, oh well, let's try to come up with a minimal orchestration system which allows me some console access, so I want to see the debug messages, or whenever it fails to boot I want to see the output of the console. I want to get some metrics, like the Grafana screenshot you just saw. And that's basically it. Since I also developed a TLS stack, I thought, oh yeah, well, why not just use it for remote deployment. 
So in TLS you have mutual authentication, you can have client certificates, and a certificate itself is more or less an authenticated key-value store, because you have those extensions in X.509 version 3 and you can put arbitrary data in there, with keys being so-called object identifiers and values being whatever else. TLS certificates, or X.509 certificates, have the advantage that during a TLS handshake they are transferred on the wire not in base64 or PEM encoding, as you usually see them, but in a binary encoding, which is much nicer with respect to the amount of bits you transfer. And with Albatross you can basically do a TLS handshake, and in the client certificate you present, you already have the unikernel image and the name and the boot arguments, and you just deploy it directly. Also, in X.509 you have a chain of certificate authorities which you send along, and this chain of certificate authorities also contains some extensions in order to specify which policies are active: how many virtual machines are you able to deploy on my system, how much memory do you have access to, and which bridges or which network interfaces do you have access to? So Albatross is really a minimal orchestration system running as a family of Unix processes. It's maybe 3,000 lines of code or so. It seems to work pretty well. I at least use it for more than two dozen unikernels at any point in time. What about the community? Well, the whole MirageOS project started around 2008 at the University of Cambridge. So it used to be a research project, and it still has a lot of ongoing student projects at the University of Cambridge. But now it's an open source, permissively licensed, mostly BSD licensed thing, where we have community events every half a year, a retreat in Morocco, where we also use our own unikernels, like the DHCP server and the DNS resolver and so on; we just use them to test them and to see how they behave and whether they work for us. We have quite a lot of open source contributors from all over. And some of the MirageOS libraries have also been used, or are still used, in this Docker technology, Docker for Mac and Docker for Windows, which emulates the guest system, where we need some wrappers, and where a lot of OCaml code is used. So to finish my talk I would like to have another slide, which is that Rome wasn't built in a day. So where we are, to conclude here: we have a radical approach to operating systems development. We have security from the ground up, with much less code and also much fewer attack vectors, because we use a memory-safe language. We have reduced carbon footprint, as I mentioned at the start of the talk, because we use much less CPU time but also much less memory, so we use less resources. MirageOS itself and OCaml have reasonable performance. We have seen some statistics about the TLS stack, that it was in the same ballpark as OpenSSL and PolarSSL, which is nowadays mbed TLS. And MirageOS unikernels, since they don't really need to negotiate features and wait for the SCSI bus and so on, actually boot in milliseconds, not in seconds; they don't do any probing and so on, but they know at startup time what to expect. I would like to thank everybody who is and was involved in the whole technology stack, because I myself have programmed quite a bit of OCaml, but I wouldn't have been able to do that on my own. It is just a bit too big. 
MirageOS currently spans around maybe 200 different Git repositories, with the libraries mostly developed on GitHub and open source. I'm at the moment working at a non-profit company in Germany which is called the Center for the Cultivation of Technology, with a project called robur. So we work in a collective way to develop full-stack MirageOS unikernels. I'm happy to do that from Berlin, and if you're interested please talk to us. I've got some selected related talks; there are many more talks about MirageOS, but here's just a short list, so if you're interested in certain aspects please help yourself to view them. That's all from me. Thank you very much. There's a bit over 10 minutes of time for questions. If you have any questions, walk to a microphone. There are several ones around the room. Go ahead and ask. Thank you very much for the talk. Thanking the speaker can be done afterwards, and questions are questions, so short sentences and then a question mark at the end. Sorry, do go ahead. If I want to try this at home, what do I need? Is a Raspi sufficient? No it isn't. That is an excellent question. So I usually develop it on a ThinkPad machine, but we actually also support ARM64. So if you have a Raspberry Pi 3 Plus, which I think has the virtualization bits and a Linux kernel which is recent enough to support KVM on that Raspberry Pi 3 Plus, then you can try it out there. Next question. Currently most MirageOS unikernels are used for running server applications, and so obviously with all this static preconfiguration OCaml, and maybe Ada/SPARK, is fine for that. But what do you think: will it ever be possible to use the same approach, with all this static preconfiguration, for these very dynamic end user desktop systems, for example, which at least currently use quite a lot of plug and play? Do you have an example of what you are thinking about? I am not that much into the topic of the Ada/SPARK stuff, but you said that all the communication paths have to be defined in advance. So especially with plug and play devices, like all this USB stuff, we either have to allow everything in advance or we may have to reboot parts of the unikernels in between to allow re-routing stuff. That's how I would understand it. Yes. So I mean, if you want to design a USB plug and play system, you can think of it as: you plug in the USB stick somewhere and then you start a unikernel which only has access to that USB stick. But I wouldn't design a unikernel which randomly does plug and play with the outer world, basically. And one of the applications I've listed here at the top is a picture viewer, which is a unikernel that at the moment, I think, has the images as static embedded data in it, but it is able, on QubesOS or on Unix with SDL, to display the images. And you can think of some way, via the network or so, to access the images. So you don't need to compile the images in, but you can have a Git repository or a TCP server or whatever in order to receive the images. So what I didn't mention is that MirageOS, instead of being general purpose, where you have a shell and you can do everything with it, is such that each service, each unikernel, is a single-service thing, so you can't do everything with it. And I think that is an advantage from a lot of points of view. I agree that if you have a highly dynamic system, you may have some trouble with how to integrate that. Are there any other questions? Well, to be honest, not. 
In which case, thank you again, Hannes. One more applause for Hannes. Thank you.
|
Is the way we run services these days sustainable? The trusted computing base -- the lines of code which, if a flaw is discovered, jeopardize the security and integrity of the entire service -- is enormous. Using orchestration systems that contain millions of lines of code, and that execute shell code, does not decrease this. This talk will present an alternative, minimalist approach to secure network services - relying on OCaml, a programming language that guarantees memory safety - composing small libraries (open source, permissively licensed) to build so-called MirageOS unikernels -- special purpose services. Besides web services, other digital infrastructure such as VPN gateway, calendar server, DNS server and resolver, and a minimalistic orchestration system, will be presented. Each unikernel can either run as virtual machine (KVM, Xen, BHyve, virtio), as a sandboxed process (seccomp which whitelists only 8 system calls), or in smaller containments (GenodeOS, muen separation kernel) -- even a prototypical ESP32 backend is available. Starting an operating system from scratch is tough; lots of engineering hours have been put into the omnipresent ones. Reducing the required effort by declaring certain subsystems out of scope -- e.g. hardware drivers, preemptive multitasking, multicore -- decreases the required person-power. The MirageOS project started as a research project more than a decade ago at the University of Cambridge, as a minimal guest for Xen written in the functional programming language OCaml. Network protocols (TCP/IP, DHCP, TLS, DNS, ..), a branchable immutable store (similar to and interoperable with git) are available. The trusted computing base is roughly two orders of magnitude smaller than contemporary operating systems. The performance is in the same ballpark as conventional systems. The boot time is measured in milliseconds instead of seconds. Not only is the binary size of a unikernel image much smaller, the required resources are smaller as well: memory usage easily drops by a factor of 25, CPU usage drops by a factor of 10. More recently we focused on deployment: integration of logging, metrics (influx, grafana), an orchestration system (remote deployment via a TLS handshake, offers console access and an event log) for multi-tenant systems (policies are encoded in the certificate chain). We are developing, mostly thanks to public funding, various useful services: a CalDAV server storing its content in a remote git repository, an OpenVPN client and server, DNS resolver and server (storing zone files in a remote git repository) with let's encrypt integration, a firewall for QubesOS, image viewer mainly for QubesOS, ... The experience while developing such a huge project is that lots of components can be developed and tested by separate groups - and even used in a variety of different applications. The integration of the components is achieved in a type-safe way with module types in OCaml. This means that lots of errors are caught by the compiler, instead of at runtime.
|
10.5446/53098 (DOI)
|
Our next speakers are going to talk about the charges against Julian Assange and WikiLeaks, which is a topic that's very close to our hearts, I guess, most of our hearts, at least. And it's also something that's incredibly important for us as a community. And it's a threat against the entire tech community, minorities, human rights advocates, activists. So a lot of people you should really care about. And the speakers are Renata Avila, who's the executive director of Fundación Ciudadanía Inteligente. Yay! Naomi Colvin, who's the UK program director at Blueprint for Free Speech, which is much easier to pronounce. Thank you so much. And Angela Richter, who's a director and writer and artist and a lot of things. And she specializes in whistleblowing and digital dissidents. And one of her plays, which is a transmedia play, you might know; it's called Supernerds. So a round of applause for our amazing speakers. And let's begin the talk. Thank you very much. Good evening, everyone. And thank you for coming here tonight. And thank you also for the introduction by the moderator, a very charming guy, as I thought, and also good to give a little bit of lightness to this, for me, very serious issue actually that we are here for. Like he said, I'm an artist, and for me, WikiLeaks was very important and also Julian Assange, because somehow they were the entrance for me as an artist to this community that became very dear to me in the last 10 years. And I attended some of the Congresses in the last 10 years and learned a lot about things that I never knew before. So I owe a lot actually to WikiLeaks and also to this community, because it opened so many things up for me. So yeah, this I wanted to say first. I will also show a little piece of a recent play I did in Zagreb. It's by Slavoj Žižek, who is also a supporter of Julian. It is not so much, it is related to our topic. It's a little bit like a mood board that we want to show before we start. And like he said, this will be about how we can support WikiLeaks and of course Julian Assange, which is also a very personal matter for me, because he became a very close friend in the last 10 years, to whom I also owe a lot. And on the other hand, I think it's not only about him and his life, which is serious enough; this thing that is happening to him, that he's being charged under the Espionage Act, and this is the first time that something like this happens to a publisher, is a threat to free speech, to all of our freedom. And it means that actually everyone who speaks truth to power can be kidnapped, extradited to the US, and then end up in prison for the rest of his life. And I think that this is a threat for this community too, especially because we all know that we are trying to be secure, that free speech is a very important issue here. So yeah, we will go into the details in the course of this talk. Thank you very much. Thank you. Thank you. We will try to be brief to leave enough time for questions, because we think that you have a lot of questions on this case. And so we will alternate and discuss different issues, starting with: where are we at now? Yes. On the left side, you see Belmarsh prison. This is the high security prison where Julian is housed at the moment. And what I find very chilling about it is that it's actually a place where usually you find terrorists, murderers, and mafia people and so on. So serious criminals. 
And he is only being held there at the moment for extradition reasons, which is also extreme, because he is 23 hours a day alone in his cell, which is actually isolation. And then the next picture shows a typical room in a prison in Virginia where Chelsea Manning is held at the moment, again. And I think it must be something like 10 months in the meantime that she's again in prison, because she is not willing to testify against Julian Assange in front of a grand jury. So raise your hand if you are less than 30 years old. So okay. Thank you. That says a lot, because it means that probably your first encounter with WikiLeaks was only five years ago and you were a teen when many things were happening. And we know that today is Young Hackers Day. So it was important for us to quickly go through the important publications that WikiLeaks published in the last decade. Why? Because there are many misconceptions since 2016, and a lot of misinformation followed the election of Donald Trump. And so we want to show this here, and it is of course not a detailed list; you can find a detailed list on WikiLeaks, on Wikipedia. Two concepts at the beginning were often mixed, actually, and the same principles followed, I would say. But what I want to show here are the most impactful publications by WikiLeaks that changed the course of history in many places. And also highlighted in green are the moments of political persecution, not only against Julian Assange but against other people that were closely connected to these developments in the last decade. So 2008 was a very exciting year for WikiLeaks because, even if it was created before, it was the year that it went mainstream. Why? Because it changed elections in Kenya by exposing extrajudicial killings. And it really changed the outcome of the election. Like, people realized that one of the candidates was involved in these extrajudicial killings of young people, and that really impacted the African nation deeply. Not only that, you might have forgotten about it, but it was the first publication of the batch of emails from Sarah Palin. And also, there were lots of publications in Latin America. That's how I became familiar with WikiLeaks. I'm very excited about that. Petrogate, a big corruption scandal in Peru, also publications involving guerrillas and false positives in the Colombian war. So it started from the places, from Africa, from Latin America, and also the US. In 2009, I would say that the highlight, and why WikiLeaks became very visible, is that it exposed a lot of the censorship lists of China and Iran and other countries. The internet was not what it is today. Censorship was tangible. You would see a blocked website. And now, as we will discuss later, there are different forms of censorship. And so WikiLeaks at that moment was a guardian of this free internet. And also, it was the big moment in Iceland and the big opportunity for WikiLeaks as well, in a jurisdiction, to become not only a publisher, but a designer of a new ecosystem of freedom of information. So it exposed the corruption involved in the financial scandal there. And it got really, really exciting there, things like the IMMI initiative and all the things that are now part of our history. Then 2010, and 2010 was the year when things started to get really complicated. Why? 
Because instead of touching countries in the periphery or developing countries (it is okay, it is always cool to expose the human rights violations of an African or Latin American person in power), when you touch the center of power, when you touch the most powerful military in the world, you get into trouble. So in 2010, the Collateral Murder video was published, the Afghan War Diaries, the Iraq War Logs and Cablegate. And that was the moment when Julian Assange was arrested. He was not arrested because of the publications; it was just a few days after the publications started that he was arrested on behalf of Sweden. There were no charges; it was not because of charges, it was because of an ongoing investigation. 2011, the Gitmo Files, the Spy Files. The Spy Files were the first batch of publications, on 160 companies involved in mass surveillance, private mass surveillance. That was pre-Snowden, remember that. And at that moment, Julian only spent a little time in the same prison that he is in now, only a few days; then he was released on bail. But from that moment, from the moment that he presented himself, he surrendered himself to the police, he never hid, he just voluntarily went there when he was requested. From that moment, his life became a hell of surveillance. He not only had a tag on his ankle, following him everywhere, but he had the most strict bail conditions that you can imagine. He could not even give a talk in London because he would have to go back; he had ridiculous hours to report himself to the police. He was watched all the time. He had to report to the police on a daily basis. Someone suspected of terrorism was enjoying more relaxed conditions than someone who wasn't charged. And that's a constant in this case and other politically motivated cases. You are not the rule. You are the exception. And the system treats you exceptionally harshly. In 2012, the Stratfor emails and also the Syria Files. The Syria Files is a publication that is not often mentioned, but it was very relevant, exposing all the dealings of the Syrian elites. And Julian is granted political asylum by Ecuador. He could have requested political asylum much earlier, but he wanted to go through all the legal process in the UK and all the appeals. And it was his last chance to exercise that right. Then 2013, the TPP text, Spy Files 2. And that was the moment when Manning was sentenced to 35 years in prison. Snowden is granted asylum in Moscow as well. And Jeremy Hammond is convicted and sentenced to 10 years in prison. Jeremy Hammond is the alleged source of the Stratfor files, the Global Intelligence Files. Then 2014, TISA, Spy Files 3 and the updated TPP text. Then 2015, the Sony archives, the Saudi cables. Actually that Saudi cables publication was one of the most dangerous ones. You saw what happened to the journalist Jamal Khashoggi. I mean, it's a very, very dangerous publication. And the Hacking Team searchable database and the TPP final texts. This is very important because it really changed the lives of lots of people for the better. I personally work on global trade issues. And the negotiators of developing countries, or those representing underrepresented communities, they are so thankful to WikiLeaks for releasing and publishing the TPP IP chapter, because it means better access to medicines. It gave people better conditions for negotiations on key issues such as access to medicines. Then 2016; I will say that I will compare it to 2010. Then you again touch the center of power. 
WikiLeaks touched again the center of power by publishing the Clinton, Podesta and DNC emails. And that changed the narrative a lot. And it changed the narrative in a very different world, because it was not anymore the tangible censorship or the clear publication, but our information ecosystem as we knew it had been modified by social networks, by different forms of distributing and accessing content. In 2017, Obama leaves the administration, commuting Chelsea Manning's sentence, and she is released later that year. And WikiLeaks publishes the NSA spying on the French election, Vault 7, which is the spying toolkit of the CIA, and Spy Files Russia. 2018, Amazon Atlas, the US Embassy shopping list and the weapons dealers' details. Here it is very important that the conditions of Julian changed radically after 2016 at the Ecuadorian Embassy. The pressure of the US increased terribly and he was not allowed anymore to do his job as a journalist. He spent most of the year gagged and he could not participate actively, directly, in his role as editor. And in 2019, you saw it in the video, Assange is arrested, Manning is arrested again. But in spite of that, in spite of all the pressure, WikiLeaks refuses to shut down and continues publishing: the Pope's orders, the chemical attack documents and the Fishrot files. So as you can see, Julian has upset, and WikiLeaks has upset, enough people, from the most powerful army in the world to the most powerful governments, to the most powerful corporations that saw their plans frustrated with the TPP collapse and the TTIP collapse and the TISA collapse, to even the Pope. So if you upset, if you expose, so many people, you have very few allies left. You have basically the people as your allies. So that's why this talk is really, really, really important. You also have the media, because over the 10 years WikiLeaks has worked closely with most of the news outlets all over the world. If you check the newspaper tomorrow morning, it's highly likely that it was one of the WikiLeaks media partners. This is just a small sample of over 125 media organizations all over the world that have collaborated closely with WikiLeaks. Yes, and I just want to add a very interesting little detail that John Goetz told me, who was at that time, he's a journalist, he now works for Süddeutsche and IED. At that time he was working for Der Spiegel, which also worked closely with WikiLeaks at that time, 2010, and they published Cablegate. And it's interesting to know that, due to a technical glitch, because the deal was that WikiLeaks publishes first and after that the newspapers follow (Spiegel, New York Times, Guardian, and so on), and due to this glitch WikiLeaks was not able to publish in time, so they were too late with their publishing and all the newspapers had already come out. So technically the newspapers published first, which is very important for the case in a way, because he's charged for having published it first, the Cablegate material, and it would be interesting because: what does it mean? It means that actually the journalists from Spiegel and New York Times and Guardian could face the same penalties. And when you imagine that, then I think the impact it has on publishing becomes even more chilling and clear, you know. So I thought to tell you this little detail about the publishing of Cablegate. So what happened on April 11, when he was expelled from the embassy and dragged out, is something that goes beyond just Julian Assange. 
As a human rights lawyer, you know, when I see political unrest, when I see people, dissidents, at risk, I always tell them: have a good relationship with a friendly embassy that defends human rights, and in case of trouble, get there, get inside an embassy. This is happening now with dissidents in Bolivia, for example, who are right now in the embassy of Mexico. I would advise any of you to do that, but now with caution, because now, since the violation and this really brutal, illegal way that asylum was taken away from Julian, and the way that police from a different country entered an embassy, asylum has been weakened forever, until we reverse this. That's why this is yet another reason why this case is very important. Right now, you know, even the government of Bolivia is threatening the Mexican embassy, to get inside and take out the dissidents seeking asylum inside their embassy. It is really upsetting to see how an institution that is over 400 years old, that was designed to protect dissidents, is being dismantled by this scandalous case. And well, when he was taken out, what happened was what we had predicted for years. For years we have been saying: the moment he is arrested, they will unseal an indictment for espionage. And everyone would look at us, like back in 2010 and 2011, and say: you are paranoid. There's no way that the U.S. is going to prosecute Julian. He's just hiding from Swedish charges, they were saying (always "charges", even though there were never charges). And he is a coward and he's paranoid and this is not going to happen. It happened immediately. Yes. It happened immediately, and just as predicted. It was so upsetting to see the result of the Swedish investigation, because, not only over there, I mean, there was a good journalist doing her job and she discovered over the years different irregularities. Sweden wanted to shut down the case back in 2013, after asylum was granted. It was a collusion, and it's really good, if you like documents and you do deep research, to get into the documents that are already available and see how the U.K. system put a lot of pressure on Sweden not to handle this case as they usually handle any case. Things as simple as a video conference could have taken place back in 2010, back in August, September 2010, and it didn't happen because of a lot of political pressure. So now the charges. There are 18 charges against Julian Assange and there might be charges against more people who are mentioned in the indictment. And the charges that he's facing for publishing amount to 175 years in prison. And to make your life simpler: basically the charges are for online publishing, protecting sources and doing journalism. If you read what it is about, it's really chilling, and it's especially chilling because look at who's in charge now, right now, all over the world, and it is the first time that the Justice Department gets away with it. It is using a very anachronistic law, and they obtained an indictment from a grand jury, that's from a group of people who think that it is okay to prosecute online publication under espionage charges. If you get one takeaway from tonight, this is the takeaway. This is the serious thing that we are discussing right now. And this is important because at the center of this is our right to know. The right to publish, and, on our side, our right to know. And three relevant aspects of the charges. You will read a lot about how WikiLeaks and Julian had blood on their hands. 
That it risked informants and put them at risk. But these charges have nothing to do with that risk assessment, which will not even be known by the court. These redactions and these measures of protection that come up over and over in the media are not relevant for the espionage charges. It's also important to notice that the indictment constantly refers to WikiLeaks as an 'intelligence agency of the people', and that mirrors the language of Pompeo, the current Secretary of State, who is trying to frame WikiLeaks as a non-state terrorist actor, like an equivalent of Al Qaeda. And that has huge, horrible implications not only for the core WikiLeaks organization but for supporters: even wearing a t-shirt or reading a book about it can place you in a not so nice place. And the important thing that is very worrying is that more people might be detained and charged before or after the extradition takes place. And we don't have to speculate about this dragnet, of course, because it is already here. Already here in its pattern of intimidation and petty vindictiveness. Chelsea Manning, one of the great heroes of our time: one month before Julian was expelled from the Ecuadorian embassy and arrested on U.S. charges, just like he always said would happen, one month before, Chelsea Manning received a subpoena to testify before a grand jury in the Eastern District of Virginia. She refused to testify and was imprisoned for contempt. She has now served 10 months back in prison. She is currently being fined $1,000 for every day she spends in prison not testifying. This is what Chelsea said about what is happening, in a statement in May. 'I believe this grand jury seeks to undermine the integrity of public discourse, with the aim of punishing those who expose any serious, ongoing and systematic abuses of power. The idea I hold the keys to my own cell is an absurd one, as I face the prospect of suffering either way due to this unnecessary and punitive subpoena. I can either go to jail or betray my principles. The latter exists as a much worse prison than the government can construct.' In September, Jeremy Hammond, coming to the end of a long prison sentence for his role in the publication of the Global Intelligence Files, received a subpoena; he was called against his will to testify before a grand jury, again in the Eastern District of Virginia. Again he refused to testify. Again he has been jailed for 18 months on contempt. This is what he had to say about it in October. 'After seven and a half years of paying my debt to society, the government seeks to punish me further with this vindictive, politically motivated legal manoeuvre to delay my release. I am opposed to all grand juries, but I am opposed to this one in particular because it is a part of the government's ongoing war on free speech, journalists and whistleblowers.' If this hadn't happened to Jeremy, he would be in a halfway house by now, he would have been released from prison. He might have been participating in this Congress. On the 11th of April this year, the same day that Julian was expelled from the Ecuadorian Embassy and arrested and indicted by the United States, just like he always said would happen, his friend Ola Bini was arrested in Ecuador. Ola spent two months in an Ecuadorian prison in absolutely disgusting conditions until he was released by a writ of habeas corpus. Ola has now been charged with charges that suggest that the prosecutors in Ecuador don't really understand what it is that security researchers do every day.
Senior Ecuadorian politicians, the most senior Ecuadorian politicians, have been on television in Ecuador saying that Ola is guilty before any trial date has been set. Organizations like Amnesty and the EFF have said that Ola's prosecution is political, and of course they are quite correct. It's all political. Extradition is political. Don't let anyone tell you differently. Extradition is an institution that developed as deals done behind closed doors between sovereign powers. It's only in the past 100 years or so that parts of it have been transferred into courtrooms, but politicians still have an active role in extradition proceedings, and sometimes extradition is used for political purposes. Extradition in the UK is also very political. What is it that every taxi driver in London can tell you about extradition? If you don't believe me, you're welcome to test this out empirically next time you're in town. What is it they'll tell you? They will tell you that the UK has an unfair, unequal, unbalanced, inequitable extradition treaty with the United States. This treaty dates from 2002, when Tony Blair was keen to give the United States everything it could possibly want and more. One of the gentlemen pictured in this slide is Gary McKinnon. Very shortly after the 2002 extradition treaty came into force, Gary McKinnon started a 10-year battle not to be extradited to the United States on hacking charges. He prevailed in the end, but only after he'd been through the entire legal process twice, and he was rescued eventually by the say-so of a UK Home Secretary. The other gentleman on that slide is Lauri Love. In February last year, Lauri won his battle against extradition to the United States, again on hacking charges, on appeal in the High Court. I was involved in that campaign. I'm glad he won. I'm glad he won because it means we have a hope of saving Julian. He'd be in trouble if he hadn't. Lauri won on two different bases. One of them is very relevant. One of the reasons why Lauri won his battle against extradition is that judges in the High Court, including the most senior judge in England and Wales, ruled that US prisons are so bad, the conditions so barbaric, so medieval, that for somebody with pre-existing health conditions like Lauri, there was no guarantee he would stay alive in a US prison. You might be hearing more about that in February next year. But there are other big, big issues involved in Julian Assange's extradition case. Big, big issues that don't necessarily involve him that much at all. The first excerpt on that slide is part of John Stuart Mill's autobiography. John Stuart Mill, liberal philosopher and also British politician for a bit. And in this extract, he's talking about how he battled to change an earlier incarnation of a UK extradition treaty because he didn't want the British government to become, quote, an accomplice in the vengeance of foreign despotisms, and believed extradition should not be used as a political tool for foreign governments to pursue and punish people they don't like, people who are guilty of political offences. It's a fundamental question of sovereignty. If you saw Andy Müller-Maguhn's excellent talk yesterday morning, you will have heard about the pervasive and quite frightening surveillance that was happening at the Ecuadorian Embassy for the seven years that Julian Assange was living there. This raises a fundamental issue.
If every legal conference, all of your discussions with your lawyers, is being surveilled and allegedly passed straight to the power that's trying to prosecute you, if all of your legal documents are handed over, allegedly, well actually we know that, to the power that's trying to prosecute you, what does that mean for your chances of a fair trial? If you care about surveillance at all, we're going to have to make a stand in this very extreme case, because if we don't, how are we ever going to stand up for fair trial rights for anyone? Yes. And before I go further in our topic, I just want to say that I have personal experience with the surveillance happening in the embassy, because I used to visit Julian many, many times, maybe 30 times, from the moment he entered the embassy till the last time I saw him, nearly exactly a year ago. It was around Christmas last year. And at that point, I mean, I really could see the deteriorating conditions that he lived in. I mean, just to see a person that didn't see the sunlight for seven years or something was terrible enough, but then the last year he lived more or less in isolation and had no access to phone or to internet, nothing, because that was the way that he had contact with the world, and he had no visitors anymore for nearly a year, I think, because we, the people that visited him, we were kind of his door to the world. And it was for me very, very weird to be surveilled all the time when I was there. Sometimes I spent five hours at least there. And after a while, you just feel very uncomfortable. I was so happy when I could leave that building, actually, especially in the last two years. And then I could not imagine staying there like him, having no private moment. I mean, in the end, they even put cameras in the bathrooms and the toilets and so on. There was this tiny kitchen; sometimes we used to hide from the cameras just to have a moment of talking without feeling surveilled. And then he also had this little apparatus, I think Andy was talking about it yesterday in his talk, that was causing white noise. And I was really annoyed, to be honest, by this little thing. And I was also thinking, my God, maybe he is too paranoid, you know, because the weird thing is you get used to everything. And somehow, like us now, being surveilled all the time through our phones and laptops and so on, we get used to it. But he always insisted, even when we were talking about banal stuff, I don't know, a soccer game or something, the little sound machine was on, causing white noise. And not only did it cause disturbance for those doing the surveillance, it also gave me headaches. And so, yeah, it's actually a very sad story. And for me it was sad to see the process when, especially after the conservative government came into power in Ecuador, his status changed very much. And so he became more and more something I would describe as a prisoner, and not someone who has asylum. Okay, this is my personal note on how I experienced it. And the other thing is, in this picture you see one of the first protests that we did in Berlin. It was this year in May, a little after he was dragged out of the embassy. And we were there with some people, including Srećko Horvat, the Croatian philosopher, and, as you see in the picture, also Ai Weiwei, the Chinese artist and human rights advocate, who has always openly supported Assange and is not afraid of consequences, actually. And he also visited him in prison.
And it is also an interesting fact that Ai Weiwei made the connection between the protests against the extradition law in Hong Kong and this very controversial extradition case of Julian in the UK at the moment. So for me, it's something I could never have believed in former times, that I would be in a situation where we in the West, who are supposedly the good ones, the so-called free West, are somehow actually in the top ten when it comes to having dissidents in prison, including the ones that we just named, and that human rights no longer seem to count. And I find this very concerning, I must say, also on a private level. Yes. And I was there too, as you see in the photograph, and before I went to the protest, you can switch it, I was in Moscow and I visited Edward Snowden, because I also worked together with him. He helped me a lot on the play I did. And this was the third time, actually, that I visited him. And we also talked about Julian's case, and he gave me a letter of support that I read out loud at this protest, and I will just read a little bit of it that you can see now. 'By the government's own admission, Assange has been charged for his role in bringing to light true information, information that exposed war crimes and wrongdoing perpetrated by the most powerful military in the history of the world. It is not just a man who stands in jeopardy, but the future of the free press.' Yes, and I think that he is very much right in this case, because what does it mean? For me, I'm in the meantime also working as a journalist, for der Freitag. I published a few articles about him and Snowden and basically about whistleblowing and these things. If publishing becomes a crime, telling the truth becomes a crime, and if you are not able to work with sources, to protect sources, and to actively try to obtain truthful material, then journalism cannot do its job. Because we live in a democracy where the powers have to be shared and there has to be a balance of power, because as we know, when power becomes a monopoly, it will always be abused. We have to watch the time. So yeah, I will cut it short. It has bad implications for journalists, and if this happens to Julian, it is a threat not only to journalism, but to democracy itself. Sorry, we will accelerate, because what comes next is very, very important. And yes, immediately after the arrest of Julian we saw the situation going really badly in Australia. But what I wanted to discuss, what we wanted to discuss with you tonight, is that this is about you, about someone just like you. You can see, I mean, I can see you there, I can see Julian in these pictures, and I can see lots of similarities. You belong to the same species, basically. He was a single father. He was prosecuted at a very, very young age, spent five years of his 20s fighting a legal process, but he was all the time with his computer. I cannot, I really cannot imagine how his life has been since April, away from his computer. Can you imagine your life away from your computer even for one day? Imagine: since April, he has been away from his computer, and only having one hour a day outside a prison cell. So while he was raising a kid as a single parent, and while he was dealing with a hacking legal process, he was also actively working for our communities. He co-ran one of the first public access internet providers in Australia. He was always involved and dedicated thousands of hours to the free software movement. His code was even used by Apple and in other operating systems.
So chances are that today, even today, our computers, our Apple devices, for the bad people who use Apple like me, are running part of his code. He was also, from very early on, trying to find ways for vulnerable groups such as human rights defenders to encrypt their devices. So he was very active before WikiLeaks; WikiLeaks was just kind of an upgrade of his plans. And I also want to mention that the CCC is mentioned, expressly mentioned, in that part of the indictment against Julian. So what happens here, you know, it matters there. And I think that the sole fact that the community is mentioned in an indictment against a journalist is enough reason to stand up and say something about it and organize around it. But it's not only the community's name on the indictment and the criminal complaint. It's also our communication practices. Raise your hand if you have a Jabber account. So yes, the Jabber server, the CCC server, is mentioned in the criminal complaint against Julian. Well. Yeah, I mean, what's there is worrying, but what's even more worrying is that it's a moving target. Things are still continuing. This is part of a submission the US government made in Chelsea Manning's ongoing proceedings, talking about an ongoing investigation; there's more to come. And there are even more bad omens. Like this. Like that one. Even more bad omens from across the water, in unrelated cases and prosecutorial theories that are being put together, which are very disturbing and all point to very bad things to come. I can't talk about that now, but it's an excellent issue for the Q&A. What happens next? Well, immediately what's going to happen next is that on the 24th of February, for three or four weeks, Julian Assange will have his extradition hearing. To give you an indication of the size and scale of this case: Lauri Love's extradition hearing, which was quite a big deal and quite big, took two and a half days. Julian's is going to be three or four weeks. It will take place in Belmarsh Magistrates Court, in a horrible part of southeast London near the prison. It will probably take place in the courthouse next door because they've got bigger courts, but it will be in that place in London. So what can you do? Okay, do not be afraid to speak up, speak with people and so on. And don't be afraid. We still live in a free country. Immunize yourself against propaganda, which is really something you should be aware happened massively in the case of Julian. I think you know what I mean. And understand what is at stake. This is a political persecution and it's about everyone. And I want to quote Nils Melzer, the UN Special Rapporteur on Torture, whom I met recently. And this is a very famous quote of his that he has been continuously saying to people in power: 'Assange has been systematically slandered to divert attention from the crimes he exposed. Once he had been dehumanized through isolation, ridicule and shame, just like the witches we used to burn at the stake, it was easy to deprive him of his most fundamental rights without provoking public outrage worldwide.' And I think this is exactly what happened to him. And this is a picture of really courageous journalists from all over the world who stand up and say, stop this persecution. And there is a community Julian belongs to. But I have seen very few real statements from this community.
So our request tonight is: please try to organize and make a similar effort; it matters a lot. Naomi will explain why. It's really important because no man is an island, and the UK is not an island, even after Brexit, right? The UK government does care about its international reputation, maybe unlike the US. And the UK government needs to know that the world is watching. The world is watching. They are hosting, entirely unnecessarily, the most ridiculous, the most important press freedom court case of a generation, completely unnecessarily. They need to know that we're keeping a careful eye on it. Over the past few months, we've been putting a lot of effort into ensuring that the extradition hearing, the trial, if you like, in February is properly monitored. We have 25 elected parliamentarians from 12 European countries who have committed to being part of those monitoring efforts. Reporters Sans Frontières are going to monitor. We have a whole group of medics who are going to monitor the extradition proceedings. And I think it would be good to have a similar effort from this community too, frankly. Especially because there are many technical issues being discussed, your expertise really matters for this trial, you know, and he cannot provide it himself, he cannot do it from prison. He counts on you to help lawyers, to help the press, to help everyone understand what is and what isn't online publishing and online journalism. 21st century journalism is at stake in this case. And your voice really matters here. It really does. And you know, he's our friend. He's not only someone we support, he's our friend. And he always likes to have the final word. So we can bring him back from 11 years ago, from a congress like this one, to have the final word. Oh, oh, hey... Yeah. No, it's only a glitch. He's going to be frustrated. He's going to be angry. Um. Okay. Should we try again? It's okay. Should we try? If not, in the meantime we can read it out. We can read it out. So bear with me: 'Justice doesn't just happen. Justice is forced by people coming together and exercising strength, unity and intelligence.' That's Julian at 25C3. Should we try? No. Oh my God. He'll be annoyed by that. He would be very annoyed. He's going to be really angry about that. Please do not tell him. Yeah, don't tell him. If you don't tell him, he won't know. Yeah, so we are ready for some questions. I think that we have very little time, but if we don't have enough time, we will be hanging out at the tea house and you can come to us and ask questions on how to help. Thank you so much. It was very insightful, moving and incredibly important. So I remind everyone that we have six microphones. If you have questions, line up behind them, and also our wonderful signal angels are going to take some questions from the internet, some of which we're going to answer right now. Okay, the question: which reasons could there be to explain the lack of fair and well-balanced media reports in the Assange case? What are the reasons for the lack of supportive media coverage? Okay. Do you want to answer that? You can start and I will also help. Very quickly, I will say that, going back to the slide on who he exposed, the most powerful people: if you have the most powerful people in the world, private sector, public sector and hidden sector, against you, with unlimited resources to take you down, it is quite easy to kill positive stories.
It is really hard in times when journalism is under-resourced and courageous journalists are not really rewarded. It is really difficult to navigate that ecosystem. Yes, and I want to add that there's also another reason. I think if journalism today did a proper job of investigating and exposing the powerful, it would not even be necessary for WikiLeaks to exist. I think if they did that job as the so-called fourth estate in a democracy, then something like WikiLeaks wouldn't even be there. I think that might be a reason: he not only exposed the powerful, but he also, of course, exposed his colleagues at the so-called established press a little bit. And I think he gave them some reasons too, because he's not perfect. Julian Assange is only human and he did make mistakes, like every one of us. I could say, okay, whoever is without fault may throw the first stone. I think that, of course, bad news is always good news. Let's say many people who knew him said, let's say, negative things that the press picked up. But when I would say to the press, oh, I also know him, I think he's a decent guy, nobody wants to report that, because it's boring and not interesting. So yeah, there are many reasons for that, I think. I'm going to add to that. I mean, the fact that there are 10 years of history here definitely makes a difference. But look, I speak to a lot of journalists, and I speak to a lot of journalists about this case in the UK, and particularly as it's become more obvious that Julian is not doing very well, that he's very unwell, I think people are shocked and, you know, people are frightened about it. They might not be talking about it very much at the moment, but they will. It is changing around, for sure. Yes, and then, speaking of being frightened, also don't underestimate that people might be afraid. And also, I know that there are many journalists here tonight. This is your opportunity to change the narrative, because you are next if you stay silent. Thank you. We're going to take the next question from a man who's wearing a Julian Assange mask. Got to reward the effort. Microphone two, please. Oh, hi. I want to thank you so much for your talk. When we are all facing this situation of asking ourselves what we can do, we should take inspiration from what you just said and what you just did. It is not just about Julian. It is about every one of us here. This is wonderful, but that is not a question. No, but I'm getting there. It's about taking this historical perspective on all these aspects, about war, about power, about what we can do, about what the Internet can do to question power. It is also maybe about admitting that... Maybe much faster. Much faster. That he is not perfect. He may have said stupid things on Twitter, like we all did and like anyone would do after seven years in detention. Yet he's one of us. So when asking ourselves what to do, here is a modest contribution from the Internet. There is a wiki that has been online for a few days now, on these stickers that you find... Okay, we're going to take an actual question. I am really sorry, but... Microphone one. Microphone one, please. Okay, still thank you. Hi. Thank you for the inspiring talk. So I am a Pakistani journalist. I now live in exile in Berlin. But the story of Assange and what we just saw, this... Everything that happened, and the perpetrators, they even put the authoritarian regimes and their leaders to shame, especially how the system of asylum has been breached. That also scares me.
I'm actually cold because I'm scared. But my question is, could you as journalists maybe shed some light on the chilling effect for journalists? I mean, I can only imagine that there might be more leaks in line that would have happened, but maybe have not happened, because journalists are now also self-censoring. So what would you advise such journalists? Thank you. That is really a very tough question. And this is exactly one of the dangers that we are pointing to, you know, that people might just not expose things. And like I said, people are starting to get afraid. What can we say to them? Well... I have something... And I think that Julian has something to say. It's the same as with justice. As a community, with strength, unity and intelligence, I mean, look at the talent in this room. Look, it's not necessarily just the brilliance of one whistleblower or one person. It's the ecosystem that we need to create, to create resilient media. And we need resilient media for democracy to work. And if it cannot happen even here in Germany, with all the resources and with all the brilliant minds, what is going to happen? So I think that we cannot stop innovating, and we need to push for the next wave of innovations for the journalism that will serve these needs in our times. And that's why this case matters a lot, because it is punishing these innovations, this redistribution of power among people. There also needs to be a recognition, a bit of solidarity is necessary here, because this isn't just about Julian. As Renata mentioned briefly, things in Australia have gone to pot since Julian was arrested. And more than that, one of the slides I flicked over was the indictment of a drone whistleblower, Daniel Hale. Count one of Daniel Hale's indictment accuses him of unlawfully releasing information, but unlawfully releasing it to a journalist who he knew would use it unlawfully. So this is like the second time in a US indictment we have an accusation of a publisher, a journalist, acting unlawfully by publishing true information in the public interest. We need to be aware and we need to raise the alarm, because this isn't just about Julian; the threat is very real and it's very broad. Thank you. We have time for one last question and we're going to ask our signal angels again. So there was a question, how can we help and support Manning, Assange and Snowden? Well, like we just said, I think it's very important to show solidarity in different ways, by raising your voice. Even supporting with a donation is always good. It's good for Manning, it's good for everyone. I think the Courage Foundation is supporting everyone, including Jeremy Hammond and Chelsea Manning, who are maybe not so much in the focus as Julian, but also Julian; I think that his trial will cost, oh my God, hundreds of thousands of pounds. I just hope that the pound goes down after Brexit, but okay. And I think speaking up, and like Renata also said, having the feeling that we are many, and exactly this thing that he said, people coming together and sharing and kind of being brave. 'Courage is contagious' is one of my favorite quotes of his, and so I think, yeah, take a stand, have an attitude and do as much as you can within your possibilities, which are not so little I think, and it is for the good of everyone, not only the named people who are in danger now, but for all of our freedom. Resist. Practically, there's a lot to do and there's a lot of work to go around.
As we've mentioned in the talk, organizing in the communities you're part of is very important here. In Germany, to take an example, we've had parliamentarians coming forward, we've also had the journalists' union, and we've had groups of lawyers. All of this is really important and it makes a difference to the work that's being done in the UK. There are lots of different organizations and groups doing work on this case and it's all really valuable. Contribute as you will: find the group that you think is doing good work, either work that you think will make a difference or that aligns with your own ideological perspective, and support them. There are a lot of people doing good work here. One of the saving graces of what has been quite a depressing year is meeting so many people who are doing important work on this most dire of issues. We have a lot of faith in you as a community, to be honest. We count on you, and this community does not leave behind people who belong here. I think that Julian will be incredibly thrilled and Chelsea will be super happy to know that there are organized efforts to follow this case closely and to have delegations present during the hearings. If they know that you are there, even symbolically there, they will feel so much better, because more than any other community, Snowden, Chelsea and Julian really love, admire and count on this community. Please be there and find us later. We will explain more detailed ways to help. Thank you so much for attending this talk. It means a lot. It means a lot to have a full room. I know that there are many people watching as well, and people will watch this again. Please continue following this case. We will prepare all the information that you need, but we need you to activate it and to translate it into actions. Thank you so much. Thank you. Thank you for inspiring me. Thank you.
|
The unprecedented charges against Julian Assange and WikiLeaks constitute the most significant threat to the First Amendment in the 21st century and a clear and present danger to investigative journalism worldwide. But they also pose significant dangers to the technical community. This panel will explain the legal and political issues we all need to understand in order to respond to this historic challenge. We've been warning you about it for years, and now it's here. The talk will dissect the legal and political aspects of the US case against Wikileaks from an international perspective. It will describe the threats this prosecution poses to different groups and the key issues the case will raise. Most importantly, we will explain how we are still in time to act and change the course of history. The unprecedented charges against Julian Assange and WikiLeaks constitute the most significant threat to the First Amendment in the 21st century and a clear and present danger to investigative journalism worldwide. But they also pose significant dangers to the technical community, the trans community, to human rights defenders and anti-corruption campaigners everywhere. If we don't take action now, the ramifications of this case will be global, tremendously damaging and potentially irreversible in times when the need to hold the powerful to account has never been more obvious. This is a historic moment and we need to rise to its challenge. This talk will explain the legal and political aspects of the case against WikiLeaks, the reasons why Chelsea Manning and Jeremy Hammond have been imprisoned again, the governmental interests for and against prosecution, the dynamics of UK/US extradition and what it means to prosecute Assange as Trump runs for re-election. This is a case with destructive potential like no other, with profound implications for the future of dissent, transparency, accountability and our ability to do the work we care about. The situation is frightening but it isn't hopeless: we will conclude with a guide to an effective strategy against the lawfare the journalist and technical communities are now facing courtesy of Donald Trump's DOJ
|
10.5446/53100 (DOI)
|
Our next speaker is the creator of Signal and he is going to tell you what he's doing right now. The next talk is The Ecosystem is Moving: challenges for distributed and decentralized technology, by Moxie Marlinspike. Have fun. Hello. So these three kids are playing around and they break into the barn of a farmer, and they're playing around in the barn, and the farmer hears something in the barn and he comes out to investigate. So the three kids have to hide, and they see these three empty potato sacks in the barn. So they all jump in the potato sacks, but as the farmer comes in they're still moving around a little bit. So the farmer is investigating this situation and he starts walking towards one of the potato sacks, and the kid inside sees what's happening and so he says, meow. And the farmer is like, oh, there's a cat in there. So he starts looking at the other potato sack, and the kid inside sees what's going on and so he's like, woof. And so the farmer is like, oh, okay, there's like a dog in there. And so the farmer starts looking at the third potato sack, and as he gets closer the kid inside says, potatoes. All right. I'm like the potato kid right now. All these people who got like six hours of sleep last night, you're doing better than me. I'm at like 7%, you know. Jet lag is a crazy thing. I fell asleep at like six last night. I just couldn't stay up any later. And I slept really hard. I felt like I slept forever. And so I woke up and I was like, wow, I slept for a long time. And I looked at my phone and it said 7.15. So I was like, oh, wow, I slept all night. This is great. I'm waking up at 7.15 in the morning. So I got up. I'm brushing my teeth and shit. And eventually I realized it's 7.15 at night still. I slept for an hour and 15 minutes. Okay. So my name is Moxie. I work on a messaging app called Signal. Signal is a private messaging app, but it is not decentralized. Which is to say that there's no federated mesh, P2P, blockchain, something, something. But every now and then people are like, there should be a federated mesh, P2P, blockchain, something, something. So I want to talk a little bit about decentralized systems from the perspective of the time I've spent at Signal and the work that I've done there and, you know, what we're trying to accomplish. So I think, you know, at a high level, I should say that, you know, while I work in software, I greatly envy musicians, writers, filmmakers, painters. These are all people who create things and can really be finished forever. You can record an album today and 20 years later you can listen to that album and appreciate it just the same. But software is never finished. You cannot write a piece of software today, like write an app, and then 20 years later just, you know, enjoy that app in the same way. Because software is part of an ecosystem and the ecosystem is moving. The platform changes out from under it, networks evolve, security threats are in constant flux, the UX language that we've all learned to speak and understand together rarely sits still. And as more money, time and focus has gone into this entire ecosystem, the faster the whole thing has begun to travel. The world is now filled with rooms that look like these and buildings that look like these that are packed to the rafters with people who sit in front of a computer for eight hours a day every single day. And all of the energy and the momentum behind that means that user expectations are evolving rapidly. And evolving rapidly is in contradiction with decentralization.
How is that possible? What does that mean? After all, the Internet is decentralized. That seems like a dynamic, evolving rapidly kind of place. But when you really look at it, like the fundamentals of the Internet, that's not always the case. For instance, if you look at like IP, you know, one of the fundamental protocols of the Internet, how have we done with that? Well, we got to the first production version of IP and we've been trying for 20 years to get to the second production version without a lot of success. You know, HTTP, we got to version 1.1 in 1997 and we've been basically stuck there until now. SMTP, IRC, XMPP, DNS, it's all the same. They're all frozen in time, circa the sort of early 1990s. Once you decentralize a protocol, it becomes extremely difficult to change. You put it out there in the world, there's like many different clients, different implementations, different deployments, and so making any changes becomes extremely difficult. Meanwhile, centralizing protocols has been like a recipe for success. You know, it's what Slack did with IRC, it's what Facebook did with email, it's what WhatsApp did with XMPP. In each case, the decentralized protocol is stuck in time, but the people who have centralized them have been able to just iterate super quickly and develop these products that are extremely compelling to people. So the fact that something like email is decentralized is cool in the sense that I host my own email. I have since 1996. I wouldn't really wish it upon anybody, but I still do it. But the fact that email is decentralized is also what means that my email is not encrypted and never will be, because it's too difficult to change at this point. It's out there in the world and making that change is probably not going to happen. You know, by contrast, WhatsApp is centralized, so I don't run my own WhatsApp server, I don't have my own WhatsApp data store or whatever, but it's encrypted for billions of people by default. And they were able to just roll it out with a software update. So I think this is sort of the fundamental problem that we have to deal with, right, which is that so long as decentralization means stasis while centralization means movement, decentralized environments are going to have a lot of trouble competing with centralized environments. But, you know, why do we want decentralization anyway? People talk about decentralization a lot, but what is it that we're actually after? I think when you sort of break it down, the partisans of decentralization are advocating for increased privacy, censorship resistance, availability and control. These are the things that I think people who are into decentralization are really kind of looking for and hope to get out of that world. So let's look at these in turn, because these are things that I'm interested in as well, and that I would like, you know, an app like Signal to provide. So, you know, privacy. We've already seen that decentralized systems are not inherently encrypted; in fact, most decentralized systems in the world are not encrypted by default. And there's nothing about decentralization that makes things encrypted, you know?
And so I think advocates of decentralization have a different take on privacy, which is one of data ownership, the idea that, like, you can, you know, run a service yourself and that you maintain ownership of that data, and those people also point out that that includes metadata, not just, you know, the contents of things like messages or something, but also the metadata about them. And so in a sense, that is better than just some kind of encryption solution. But in a lot of ways I feel like this is somewhat of an antiquated notion that's left over from a time when computers were for computer people. You know, I think in the 1990s, the sort of general thesis for people working in this space was: let's develop really powerful tools for ourselves and then teach everybody to be like us. And that's not really how the world developed. You know, at the time we sort of imagined that the internet would look like this, that not only would everyone on the internet be both a producer and consumer of content, but also a producer and consumer of infrastructure. And neither of those things really bore out. In reality the internet looks a lot more like this, you know, that things sort of seem to naturally roll up and converge into these like super nodes that people are making use of, that people aren't all producers and consumers of content or infrastructure. So, you know, given that world, while I host my own email, you know, since the world looks like this, I don't actually have any meaningful data ownership, even though I run my own mail server. I don't actually have any kind of metadata protection or anything like that, because every email that I send or receive has Gmail on the other end of it. Even though I host my own mail server, I might as well just be a Gmail customer, because they have a copy of basically every email that I ever send or receive. So, given that the world has developed in this direction, I feel like real data protection is more likely to come from things like end-to-end encryption than it is from data ownership. And that things like metadata protection are going to require new techniques, and that those new techniques are more likely to evolve in centralized rather than decentralized environments, because centralized environments are where things tend to change. So, for instance, you know, at Signal, this is an area that we've been working on a lot. So, at Signal, you know, we have technologies like private groups, private contact discovery, sealed sender. These are things that mean that, you know, the Signal service has no visibility into group communication or even group state or group membership information, no visibility into your, you know, contacts or even your social graph, and no visibility into who is messaging whom. So, you know, looking at something like private groups, the way that group state is usually maintained is, you know, on a server you have a database and in that database you have a record for every group. And, you know, the group needs to contain information like, you know, what's the group title, what's the group avatar, you know, what's the membership, you know, who's in this group, what are their roles, maybe someone's an administrator, maybe someone's a group creator, maybe someone's like a read-only group member. And then, you know, maybe some group attributes like a pinned message or something like that. And, you know, clients can query this database in order to render group information for the user.
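To make the conventional setup concrete, here is a minimal sketch of the kind of plaintext server-side group record just described. The field names and roles are illustrative assumptions, not Signal's actual schema; the point is only that everything in it, including the membership list, is readable by whoever operates the server.

```python
from dataclasses import dataclass, field
from typing import Dict

# Minimal sketch of a conventional, unencrypted server-side group record.
# Field names and roles are illustrative, not any real service's schema.
@dataclass
class GroupRecord:
    group_id: str
    title: str                                              # readable by the server
    avatar_url: str                                         # readable by the server
    members: Dict[str, str] = field(default_factory=dict)   # user id -> role
    pinned_message: str = ""

    def can_modify(self, user_id: str) -> bool:
        # The server enforces access control by reading the plaintext
        # membership list, which is exactly the visibility problem.
        return self.members.get(user_id) in ("admin", "creator")

group = GroupRecord("g-123", "Holiday plans", "https://example.org/avatar.png")
group.members["alice"] = "admin"
group.members["bob"] = "member"
print(group.can_modify("alice"), group.can_modify("bob"))   # True False
```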
And, you know, given that it seems unlikely that we're going to move into a world where everyone is, like, running their own servers in addition to their own clients, just, you know, putting this plaintext database on everyone's individual servers is sort of unlikely. So, you know, one thing you might think about doing is just encrypting it, right, where you can just have this server-side database, and in the database all of the entries are encrypted with a key that is shared among group members but that the server doesn't have any visibility into. And so, you know, that seems like a straightforward solution. The problem is that you also need the server to be able to enforce access control and basic rules, right? Like, the server should be able to look at the members of a group and determine whether, you know, a group member is authorized to make changes, like, to change the title or to add another member to a group or to kick someone out of the group or anything like that. But if the data is encrypted, then how is the server going to do that? So, at Signal, we developed an anonymous credential scheme that allows the server to store encrypted membership lists. So, the server has a record for some random group and its members, but each member is encrypted, so the server doesn't know who the members of the group are. And then, you know, let's say Alice wants to add someone to a group. Alice can construct a zero-knowledge proof and authenticate to the server, proving that she has a signed credential that matches the encrypted contents of one of the group members, without ever actually revealing what the contents of that record are or who she is. And then, you know, once authenticated, Alice could add another member to the group, like Frank. And then, you know, once Frank is in the group, he can come along and do the same thing, where he constructs a zero-knowledge proof and is able to prove in zero knowledge, without revealing who he is or the contents of this record to the server, that he is a group member, and he might request a membership list, the server, you know, transmits the encrypted values to him, then he can decrypt them locally with the key that's shared amongst group members and determine who is in the group and display that to the user. You know, so this is an example of some, you know, new cryptography that we developed in order to solve this problem. And, you know, it's, again, like, a new technique in order to offer some privacy-preserving technology in a space that I think is, you know, more likely to happen in places where we can just make these changes and roll them out super easily. All of this adds up to, you know, a world where I can publish the server-side state for my Signal account. There's nothing in it, really. You know, even the profile data is encrypted. The only, you know, real unencrypted values here are the last-seen time, in day-level precision, that I connected to the Signal service and when my account was created. There's no group information about, like, you know, what groups I'm in, the titles of those groups, the avatars of these groups, who the other group members are; my contacts, you know, aren't stored there, my social graph isn't on the service, even my profile data isn't visible to the service, and, you know, when people message me or I message people, the server doesn't have visibility into that. Meanwhile, my email still isn't even end-to-end encrypted and never will be.
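The following is a heavily simplified sketch of the encrypted-membership idea described above, assuming a group key shared only by members. It collapses the anonymous-credential and zero-knowledge machinery of the real scheme into a plain keyed check, so unlike the actual construction this toy server learns which stored entry matched; it is meant only to show that the membership list itself can stay opaque to the server while the server still enforces who may change the group.

```python
import hmac, hashlib, os

# Toy sketch of an encrypted membership list. The group key is shared by
# members only; the server stores opaque tokens and checks a presented token
# against the stored set without ever seeing titles, avatars, or user ids.
# NOTE: the real scheme uses anonymous credentials and zero-knowledge proofs,
# so the server does not even learn which entry matched; this stdlib-only
# stand-in leaks that, and exists purely for illustration.

GROUP_KEY = os.urandom(32)          # known to group members, not the server

def member_token(user_id: str) -> bytes:
    # Deterministic keyed tag standing in for an encrypted membership entry.
    return hmac.new(GROUP_KEY, user_id.encode(), hashlib.sha256).digest()

# --- server side: stores only opaque tokens -------------------------------
server_group = {"entries": {member_token("alice"), member_token("bob")}}

def server_authorizes(presented: bytes) -> bool:
    return presented in server_group["entries"]

def server_add_member(presented: bytes, new_entry: bytes) -> bool:
    if not server_authorizes(presented):
        return False                 # non-members cannot modify the group
    server_group["entries"].add(new_entry)
    return True

# --- client side: Alice proves membership and adds Frank ------------------
assert server_add_member(member_token("alice"), member_token("frank"))
assert server_authorizes(member_token("frank"))
print("entries visible to server:", len(server_group["entries"]))  # just 3 blobs
```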
But even if we did live in this world where, you know, the Internet looks differently and everyone is both a producer and consumer of infrastructure, this P2P world is not necessarily privacy-preserving in itself. For instance, when we first rolled out voice and video calling in Signal, we designed it so that it did establish P2P connections between both parties of a call. So, you know, if I call somebody, I would establish a direct connection to that device. But when we deployed that, people were like, wait a second, wait a second, like, does that mean that someone could just call me and learn my IP address? I don't want that. What about all the metadata here, that, like, you know, my ISP or, you know, someone on Wi-Fi or whatever on the same network as me can see who I'm calling and who's calling me? Like, that's not what I want. Isn't there anything you guys can do about this? Yeah, we can just, you know, route it through a server instead. And so that's what we do in many cases. So, you know, thinking about privacy, I kind of feel like decentralized systems aren't inherently going to give us the privacy properties that we necessarily desire, and that it's more likely that we can develop technology to, you know, offer what it is that people want in centralized environments. Thinking about censorship resistance, this is another area where I feel like the idea of censorship resistance for decentralized environments is that many things are somehow more difficult to block than one thing. If you have, like, many servers, it's harder for a censor to block access to those than it is to block access to one server. And again, I feel like this is sort of like an antiquated notion left over from a different time, and that in today's world, if something is user-discoverable, it's also going to be censor-discoverable in a lot of ways. But the basic idea is that, like, if you're such-and-such at something.com and something.com gets blocked, you could just switch to something else at something else.com, you know, and you can just sort of keep moving around like that. The problem is that every time you do that, you blow up your entire social graph. So, you know, if you imagine a scenario where there's a bunch of different, you know, users who, you know, are affiliated with a bunch of different servers, and, you know, one server gets blocked by a censor, the users who can no longer access that server can switch to different servers. But the problem is that as soon as they do that, they have to be rediscovered by everyone else in the network, because now they have a different address. And it's likely that if, you know, one server is blocked at any given moment, then, you know, all servers are going to be blocked in that moment, and everyone has to switch to, like, a whole other thing. And at that point, you've, like, really blown up, you know, your entire social network. Everyone has to rediscover everyone from the beginning. You're basically playing a game of whack-a-mole that's very asymmetric, because every day that, you know, censors take an action to block known servers is basically, like, the first day of your entire social network. You're starting over from scratch, where everyone has to rediscover each other all over again. So, you know, to the extent that your strategy is sort of, like, bouncing around, it's actually, I think, more effective to just have one centralized service with multiple ingress points.
So, you know, if you have a service and there's a bunch of users who are using that service, then if access to that server gets blocked, you just, you know, spin up another ingress point, you know, a proxy or even, you know, a VPN or something like that, that everyone can switch to. At the moment that people switch, it's, you know, the same kind of switching strategy, but you're not blowing up your social network. Everyone has the same address and can be identified. If that gets blocked, you know, you switch to another one, et cetera, et cetera. So, you're playing a game of whack-a-mole, but it's not as asymmetric, because, you know, the switching cost is very low. This is the kind of strategy that apps like WhatsApp and Signal have used to resist censorship attempts most times that they've been attempted. So, you know, they'll use strategies like domain fronting, which is basically, you know, a technique where a client connects to a CDN that's operated by some large CDN provider and, you know, does a DNS and SNI TLS connection with one host, like, you know, some large service like Google Maps or something like that, but then the HTTP host header specifies a different address, which is, like, you know, a proxy. And so in order to block this, the censor has to block access to, you know, some larger set of services rather than just one specific service. Or using techniques like proxy sharding, which is basically like you set up multiple ingress points and you shard access to them across different users. So, you know, only some users can discover some access points, which means that a censor can't discover all of the access points very quickly, and that as things get blocked, you, you know, keep shuffling around. These are the kind of things that require moving quickly. That, like, you know, as people are trying to block access to a service, you're moving very quickly to respond. And again, moving quickly is not necessarily something that is easy in decentralized environments. So, again, when it comes to censorship resistance, I feel like you're sort of more likely to see effective censorship resistance in centralized environments rather than decentralized environments. And in many cases, that's what we've actually seen. And then, you know, availability. So, every time there's like an outage, people are like, you should decentralize, you know. You wouldn't have as many outages. But I think the reality is that you would just have more outages. You know, if you think about it in terms of, like, if you have a centralized service and you wanted to move that centralized service into two different data centers, and the way you did that was by splitting the data up between those two different data centers, you just halve your availability. Because the mean time between failures goes down since you have two different data centers, which means that it's more likely that there's going to be an outage in one of those data centers at any given moment. And since you've split your data between them, you've halved the availability of that data. So, again, I don't think you're more likely to see better availability in decentralized rather than centralized environments. And then finally, control. So, I think this is a really interesting moment. The current sort of sentiment in the world today has changed a lot.
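As a quick aside before moving on to control: here is a rough sketch of the domain fronting request shape described a moment ago. The hostnames and path are hypothetical placeholders, many large CDNs have since restricted this kind of SNI/Host mismatch, and this is only an illustration of the idea, not working circumvention code.

```python
import http.client
import ssl

# Sketch of domain fronting: DNS and the TLS SNI name a large "front" domain,
# while the HTTP Host header names the real ingress point behind the same CDN.
# Both hostnames and the request path are hypothetical placeholders.
FRONT_DOMAIN = "front.example.com"      # what the censor sees (DNS + SNI)
HIDDEN_INGRESS = "ingress.example.net"  # what the CDN routes to internally

context = ssl.create_default_context()
conn = http.client.HTTPSConnection(FRONT_DOMAIN, 443, context=context)
# Supplying our own Host header makes the CDN route the request to the hidden
# ingress point even though the outer TLS connection targets the front domain.
conn.request("GET", "/v1/ping", headers={"Host": HIDDEN_INGRESS})
print(conn.getresponse().status)
```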
Now people sort of feel that the Internet is this terrible place in ways that I don't think people used to feel. That the era of utopianism and this vision for, you know, technology and providing a better and brighter future is coming to an end. And I think a lot of that comes down to a feeling that we have a lack of control, that technology is not actually serving our needs in the way that we want it to, and that we don't have any control over or agency over how that is manifest. And so I think, you know, the strategies that the partisans of decentralized environments have for manifesting that control is, you know, either through this, like, switching idea, basically, so that it's like if you have a federated environment, different services could behave differently. So that, you know, if you were a subscriber of one service and your provider started to behave in a way that you felt was inappropriate, you could just switch to a different provider but not lose access to the entire network, which, you know, I think has a certain appeal. But, you know, if that is true, if that is, you know, a strategy worth pursuing, I think we have to ask ourselves, why do people still use Yahoo Mail? You know, it hasn't been updated in like 10 years. They had like, you know, a massive series of security incidents. It's not clear who even owns it anymore. But people are still using Yahoo Mail. Like, a lot of people are still using Yahoo Mail. Why? Because changing email is hard. And I think it's sort of this interesting moment where, you know, switching from Yahoo to Gmail is actually harder than switching from WhatsApp to Telegram to Signal. Because again, every time you switch email providers, you've basically blown up your social network. Everyone has to rediscover your new federated identifier. And then if you use a non-federated identifier like a phone number as the basis for your social network, then, you know, switching between, you know, different services that aren't actually connected with each other is actually easier than switching between federated services. And the sort of notifications on your device or your desktop become like the federating bridge between those networks in a way that is, in some senses, more effective than the federated models ever were. The other sort of strategy, I think, for maintaining or regaining control in decentralized environments is extensibility. So this is the idea that what we can do is develop a protocol that is designed to be extended, so that different people can modify this technology in ways that, you know, it feels like meet their needs. And I think, you know, the best or the most well-known example of this is a protocol like XMPP, which was a chat protocol designed to be extensible. But, you know, in the end, what we ended up with was this like morass of XEPs, which were the extensions. And there wasn't ever like a feeling of strong consistency, which generated a lot of uncertainty within the user experience. So even today, it's like, you want to send a video over XMPP? It's like, there's an XEP for that. Like, does the recipient support that? We don't know. You want to send a GIF? Like, you know, that's a little dicey. It's like, who knows. And none of the extensibility that was built into the protocol could actually adapt to, you know, major changes that needed to be addressed, like, you know, adapting to mobile environments.
So I feel like in the end, like, the whole extensibility thing didn't really provide the control that people wanted, because those XEPs were of little value until they were adopted everywhere, which is hard to do. And then, you know, even in like pseudo-distributed kind of models like Bitcoin, I think that the control that people have, it's sort of interesting that the control that people are seeking has manifested in the form of forks. You know, so that when there's a disagreement, people just, you know, start a new network, you know, like, you know, Bitcoin Cash or, you know, various altcoins, where, you know, people take like the existing code base and just start a different service. You know, ultimately, I think that has led to like a lot of confusion in terms of, you know, users that are engaging with these networks, but to the extent that people are manifesting the control that they would like to see, it's interesting to me that it doesn't seem to have much to do with the decentralized nature of these protocols, that it has more to do with the open source nature of these projects, that because these projects are open source, it's very easy for people to take what's there and just change it and, you know, redeploy it as something else. So in a sense, I feel like that, you know, open source is sort of the best tool that we have in terms of manifesting control. But even that, I think, is like a difficult ask, because if, you know, what we want is for technology to better serve us, I think the larger problem in my mind is that if what it takes, if what technology demands is rooms that look like this and buildings that look like this, full of people sitting in front of a computer for eight hours a day, every day, forever, then it's unlikely that we're going to see technology meeting our needs in the way that we want it to. You know, all the time people, you know, have these ideas where they're like, what if it was like Uber, but it was decentralized, so that like, you know, the money goes to the drivers, you know, wouldn't that be cool? I think that would be cool. But if what it takes to build that is rooms that look like this and buildings that look like this with people who sit in front of a computer for eight hours a day, every day, forever, guess where the money is going to go? It's going to go to those places, you know. So I think if we're serious about, like, you know, changing technology so that it better serves us, to me, the best thing that we can do to make that happen is to make technology easier, to make the deployment and development of technology easier. And again, decentralized systems are not the first thing that, like, pop into my mind when I think of easy. You know, in a lot of ways I feel like, you know, decentralized systems make everything harder, in a world where we should be trying to make things easier for people to deploy. So, you know, on the whole, I feel like, you know, these are the challenges for decentralized systems, and that in a lot of ways I think that, you know, we need to like reimagine how it is that we're thinking about technology, even the direction that the world has gone, and that we can, you know, find the solutions to these problems and the things that we're looking for in ways that are perhaps more effective than building decentralized systems. So I'm not entirely optimistic about the future of decentralized systems, but I would also love to be proven wrong.
However, I feel like anyone who's working on decentralized systems today needs to do so without forgetting that the ecosystem is moving. In the words of Marx, we can create our own history, but only under circumstances that are directly transmitted from the past. Thank you. All right. You can see eight microphones in that hall, and also we take questions from the Internet. So if you have a question, please line up on the microphones, and the first question goes to the Internet, please. Twitter wants to know if you can comment anything about post-quantum security. For example, there was a thesis by Enes Duit at the University of Twente about the post-quantum signal protocol. Question from the Internet. Okay. Yeah, I haven't seen this thesis, so I can't really say anything about it. But the way that things are sort of headed is people are trying to develop post-quantum, crypto post-quantum key exchanges and stuff like that. The situation today is that as we develop those things, as people develop those things, we're trying to develop increase in confidence and those algorithms. There's also a little bit of uncertainty about whether those are pre-quantum resistance in some ways just because things are so new. So the way that things are going is people are trying to take a new post-quantum key exchange stuff and mix them into pre-quantum key exchange stuff so that it's like an additive security property so that if those things turned out to have problems even in the pre-quantum world that you're not shooting yourself in the foot. So that's sort of the direction that things are going, and we'll probably start looking at that more as those things mature. Okay. As we have a lot of questions, please keep your questions short. First question to microphone number four. Hi. Thank you for the talk. I was wondering in all this overview how you perceive the efforts and on the standard of messaging layer security of providing this base layer of end-to-end encrypted communication which is to my knowledge decentralized in its thoughts. Yeah. Okay. So I think we're not actually a part of the MLS process. And MLS is like focusing on one specific scenario which is a specific scenario within just the group messaging. I think that's like a whole other conversation that I think that there's a lot of challenges there that they may not be thinking through. But I think that the, I mean, what's interesting is that it is a sort of unique scenario in that it's like a standard process amongst a number of entities that don't actually communicate with each other. So it's not entirely clear what having a standard, the value of having a standard is since there's no real plan for these entities to like federate or have some merging of their networks other than just agreeing on a set of principles for cryptography that everyone feels like are a solid thing to adopt. Okay. Thank you. One question for microphone number one, please. Hi. Thanks for the thought-provoking talk. You said so many things I disagree with. It's tough to pick a question. But the one I'm going to ask is the features you have about private groups and sealed sender, those seem to be protecting data at rest for when the server is compromised. The data that's on it is less useful to the attacker. But if the server is already compromised, it's not really providing a traffic analysis protection. Your metadata protection is effectively a pinky promise-oriented architecture. 
And you've outsourced the keeping of the promise to a defense contractor owned by the richest man in the world. And so my question is how confident are you that Amazon is keeping the promises that you're making? Well, I mean, the purpose of projects like Signal is to get to a place where it doesn't matter where something is hosted or whatever. And so you point out that what we're talking about here is stored data. And it's like in attacking this problem, I think you have to start from one place and work until you end up where you want to be. And right now, the worst thing that you can have is data and a database. Because what tends to happen on the internet now is when there's data on a database, it just becomes public. There's a lot of data dumps out in the world. So that's the worst possible scenario. And by design, most systems are set up so that in order to function, they need to have data in a database that is at great risk, I think, of ending up public out in the world. And so the first thing you want to do is design your system so that you don't have to have data in a database. And so then what you're talking about is traffic analysis and things like that, that people can look at traffic flows in order to figure out other metadata properties. There's things that users themselves can do today to help that. They can use Tor, they can use VPNs or whatever like that using these systems. And then that's also where we eventually want to go. To keep working up the stack in order to have something that is fully comprehensive. All right. We take another question from the internet, please. I see you want to know what do you think of peer-to-peer solutions that also hide participants' email IP addresses? For example, by exposing them only through rendezvous points like Tor nodes? Yeah. I mean, that could be cool. I mean, I think the challenge is developing a system like that that is actually scalable, that large numbers of people can use, and then it counts for the fact that the ecosystem is moving, that you can develop a decentralized thing that you're able to iterate on quickly. And so I think that is, you know, in developing decentralized systems, I feel like that is the most important question for me just in terms of, you know, the time that I've spent in the space and the work that I've done there, that I'm, like, much more interested in, like, unique approaches to solving that problem than I am, like, the specifics of, you know, IP address hiding with Tor nodes or something like that. Okay. Another question from microphone number eight, please. Apart from the US government, what countries have you received data requests from, and how did you respond? I don't, we've never, the only response that we've ever issued to, you know, any subpoena or, like, governmental request is, you know, on our website we have a section where we post all of the, you know, requests that we respond to, I think it's signal.org slash big brother. And so it's only one request that we've ever responded to. Okay. One more question from microphone number three at the end, please. Hi. Signal has a very big reputation as being good and secure communication tool for activists. This is also being pushed in the global south. I have the honor to work with some global south organizations. They're very suspicious of the signal, especially due to the fact that you have to provide a phone number. So your location can be tracked and all sorts of other problems that everyone here is fully aware of. I would like to know why? 
Why is that even still a thing for signal to provide, to make people provide phone numbers? Yeah. While still being hyped as a secure tool for activists, I think this contradiction is the important person. Yeah, that's a great question. It's a really complicated question. So, you know, any social network needs, any social app needs a social network. So signal is a social app and it needs a social network. And so the social network that we've chosen to use is the network that exists on your device already that's user owned, that's portable, the address book on your phone. And so there's a lot of cool things about that. If that's your social network, it empowers the users in a lot of ways because you can move that network with you as you go from service to service. Like as I pointed out, you know, moving from WhatsApp to Telegram to signal is easier than switching from Yahoo Mail to Gmail for that reason. On the other hand, there are a lot of people who don't want to, they don't want to be a part of a portable network, that they want to be, they want to use signal in a way where people can't figure out how to contact them through other means for legitimate reasons. The challenge is that if you're building your own social network, you need to store it somewhere. Well, I think there's two challenges. One is that if you're building your own social network, you need to store it somewhere. So, for instance, like there are apps that have successfully built their own social network. If you look at an app like Snapchat or something like that, you can create a username and most people associate it with a phone number, but you don't have to. But you'll notice that if you uninstall Snapchat and you reinstall it, your social network is still there. Where was it all of that time? If you drop your phone in the toilet and you install Snapchat, you still have your social network. Where was it? It was on Snapchat server, right? They have a full copy of your entire social network, social graph, et cetera, et cetera. So the challenge for us, because this is something that we would like to provide, is that if we have this alternate social network, we would need some way to make that persistent. It's bad enough that right now when you lose your phone and reinstall signal, you lose all of your message data because it only exists on your phone. But it would be even worse if you lose your entire social graph at that moment as well, and that you have to rediscover what everyone's identifiers are. And at the same time, we don't want to just store, you know, your social graph and plain text that we have access to that entire social graph, and that if signal is compromised, you know, that whoever compromises signal also has access to your entire social graph. And so, you know, the challenge is in developing something that's actually privacy preserving that allows us to, you know, maintain the social graph every time. So we recently, you know, published a technology preview of something that calls secure value recovery that is basically the first step in order to solve this problem. The other challenge, though, is that it depends on what you mean by not provide your phone number, because, you know, there are plenty of applications that you can use where you, like, use some alternate identifier that isn't necessarily associated with phone number. 
But the sort of unfortunate reality is that, you know, in all of those cases, your client needs to provide to the server either an FCM or APNs identifier, which is used to send push notifications to your device. And that does uniquely identify your device in ways that could probably be used to identify whatever your phone number is. So it sort of depends on what your threat model is there. Okay, thank you very much. I know there are a lot more questions, but unfortunately time is over. If you want to, the speaker will be around later on. You can gather up here, but the talk is now done. So give a huge round of applause for Moxie Marlinspike. Thank you.
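To make the hybrid construction from the post-quantum answer earlier in this Q&A a bit more concrete, here is a minimal sketch of the "additive security" idea: deriving the session key from both a classical shared secret and a post-quantum KEM shared secret, so that the result stays secure as long as at least one of the two holds. This is my illustration only, not Signal's actual protocol; the function name, labels and placeholder secrets are made up for the example.

```python
import hashlib
import hmac

def combine_secrets(classical_ss: bytes, pq_ss: bytes, context: bytes) -> bytes:
    """HKDF-style extract-then-expand over both shared secrets (demo only)."""
    salt = b"hybrid-key-exchange-demo"  # hypothetical label, not a real protocol constant
    # Extract: mix the classical and post-quantum secrets into one pseudorandom key.
    prk = hmac.new(salt, classical_ss + pq_ss, hashlib.sha256).digest()
    # Expand: bind the derived session key to the handshake context.
    return hmac.new(prk, context + b"\x01", hashlib.sha256).digest()

# Placeholder secrets; in practice these would come from e.g. an ECDH exchange
# and a post-quantum KEM respectively.
classical_ss = b"\x11" * 32
pq_ss = b"\x22" * 32
session_key = combine_secrets(classical_ss, pq_ss, b"handshake-transcript")
print(session_key.hex())
```

An attacker would have to break both key exchanges to recover the session key, which is why mixing a newer, less battle-tested post-quantum scheme into an established classical one avoids "shooting yourself in the foot" if the new scheme turns out to be weak.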
|
Considerations for distributed and decentralized technologies from the perspective of a product that many would like to see decentralize. Amongst an environment of enthusiasm for blockchain-based technologies, efforts to decentralize the internet, and tremendous investment in distributed systems, there has been relatively little product movement in this area from the mobile and consumer internet spaces. This is an exploration of challenges for distributed technologies, as well as some considerations for what they do and don't provide, from the perspective of someone working on user-focused mobile communication. This also includes a look at how Signal addresses some of the same problems that decentralized and distributed technologies hope to solve.
|
10.5446/53101 (DOI)
|
But now we start what we're here for and I'm really happy to be allowed to introduce Anna Mazgal. She will talk about something which is a great title, I love it, Confessions of a Future Terrorist. Because terrorism is the one thing you can always shout out and you get everything through. And she will give us a rough guide to over-regulate free speech with anti-terrorist measures. And Anna works for Wikimedia where she's a lobbyist for human rights in the digital environment and works in Brussels. And she gives a lot of talks and I think it's the first time at Congress for her. Second time? Second time. I haven't really researched that right, even though I searched for it. So I have to do this again: this is the second time at Congress and I'm really happy to have her here. Please welcome her with a big round of applause. Anna, the stage is yours. Thank you. Yes, so as you have already heard, I don't do any of the cool things that Wikimedians and Wikipedians do. I am based in Brussels and the L-word. I do the lobbying on behalf of our community. And today I am here because I wanted to talk to you about one of the proposals for laws that we are now observing the development of. And I wanted to share my concerns also as an activist because I'm really worried how, if that law passes in its worst possible version or one of the bad versions, how it will affect my work. I'm also concerned how it will affect your work and basically all of our expression online. And I also want to share with you that this law makes me really angry. So I think these are a few good reasons to be here and to talk to you. And I hope after this presentation we can have a conversation about this. And I'm looking forward also to your perspective on it and also the things you may not agree with maybe. So what is this law? So in September 2018 the European Commission came out with the proposal of a regulation on preventing the dissemination of terrorist content online. So there are a few things to unpack here of what it is about. First of all, when we see a law that is about the internet and about content and about the online environment and it says it will prevent something, it always brings a very difficult and complicated perspective for us, the digital rights activists in Brussels, because prevention online never means anything good. So this is one thing. The other thing is this very troubled concept of terrorist content. I will be talking about this more. I will show you how the European Commission understands it and what are the problems with that understanding and whether this is something that can actually be really defined in a law. So these are already the red flags that I have seen and we have seen when we first got the proposal into our hands. I would like to tell you a little bit about the framework of it. This is probably the most dry part of that, but I think it's important to correctly place it. First of all, this is European Union legislation. So we're talking about legislation that will influence 27 member states, maybe 28, but we know about Brexit, so that is debatable what's going to happen there. And it's important to know that whenever we have European legislation in the EU, these are the laws that actually are shaping the laws of all those countries and they come before the national laws. So should this be implemented in any form — when it's implemented in any form — this is what is going to happen.
The next important part of information that I want to give you is that this particular regulation is a part of the framework that is called digital single market. So the European Union, one of the objectives when European Commission creates the law and when other bodies of the European Union work on it is that the laws in the member states of the European Union are actually similar. And the digital single market means that what we want, we want to achieve something on the internet that in a way is already achieved within the European Union geographically, meaning that we don't want the borders on the internet between people communicating and also delivering goods and services in the European Union. And you may ask how that connects with the terrorist continent and how that connects with today's topics. To be honest, I am also puzzled because I think that legislation that talks about how people communicate online and what is considered the speech that we want there and we don't want, there should not be a part of a framework that is about market. So this is also something that brings a concern. Also, as you've seen at the first slide, this piece of legislation, this proposal is called the regulation and not to go too much into details about what are the forms of legislation in the EU. The important thing to know here is that the regulation is allowed that once it is adopted by the EU, once the parliament votes on it, it starts, it is binding directly on the, in all the member states of the European Union, which means that there is no further discussion on how this should be actually used. Of course, in each country, there are different decisions being made by different bodies, but it means for us, the people that work on this and that want to influence the legislative process, that once this law is out of Brussels, there is nothing much to be done about how it's going to be implemented. This is important because for now the discussion about this, because for us the discussion about this is the one that happens in Brussels. There are a few versions of the law and very quickly European Commission proposes the law, European Parliament looks at it, debates it and then produces its own version of it, so amends it or makes it worse. And then the Council of the EU, which is the gathering of all the member states and representatives of the government of the member states, also creates their own version. And then of course when you have three versions, you also need to have a lot of conversations and a lot of negotiation how to put this together into one. And all of those bodies have their own ideas, every one of those bodies have their own ideas on how any law should look like, so this process is not only complicated, but also this negotiation that is called the trilogues is actually very non-transparent. And there is no, or almost no official information about how those negotiations go, what are the versions of the document and so on. This is the part that we are now in and I will talk more about this later on. Today I want to talk to you about the potential consequences of the version that is the original one, which is the European Commission's version. And it's because it would be very complicated and confusing, I guess, if we look at all of the proposals that are on the table, but also it's important because European Commission has a lot of influence, also informally, both on member states. 
And also to an extent on the whole trilogue process, so whatever gains we have in other versions or whatever better solutions we have there, they are not secure yet. And I promise I'm almost done with this part. There is other relevant legislation that we'll consider. One is the e-commerce directive and in this, the part that is very relevant is for this particular conversation is that the platforms according to the law or the Internet Services or hosting providers are not by default responsible for the content that users place online. So it's a very important premise that also protects us, protects our rights, protects our privacy, that they cannot go after us or they cannot look for the content that could be potentially illegal, which would mean that they would have to look into everything. But of course they have to react when somebody notifies them and they have to see whether the information that is placed by the users should stay up or not. There is also a directive on combating terrorism and this is the piece of legislation that is quite recent. To my best knowledge not all countries in the European Union, not all member states have implemented it yet, so for us it was also very puzzling that we actually have a new law, a new proposal that is talking about the communication part of what already has been mentioned in this directive when we still don't know how it works. We still don't know because the law is physically not being used at all. So this was for us also difficult to understand why the Commission does not want to wait and see what comes out from the directive on combating terrorism. So why would the European Commission and why the European legislators would actually want such a law that again is about the content that people post through different services and why this is an important issue. Why this issue is actually conflated with the market questions and the harmonization in the digital market. So there are some serious numbers here, 94% and 89% and I'm wondering if you have any idea what those numbers are about. I'm sorry? Yes, it's about people, but the numbers are actually presenting, so there was a survey done by Eurostat and those numbers present the percentage of people, first number 94% presents the percentage of people that say that they have not come across terrorist content online. Right? So inversely, only 6% of people actually say that they had access to terrorist content. It's important to underline that they say it because there's no way to check what that content actually was. And of course, we can, you know, here use the analogy of what a certain American judge said about pornography. I know it when I see it. It's not a very good definition of anything really. So I would argue that actually 6% of people being affected by something is not really a big percentage and that the European Union actually has bigger problems to deal with and where they can spend money and energy on. For example, we are all affected by, I don't know, air pollution and that's much more people. 89% are the people in the age range between 15 and 24, but again, we're not affected by something what they would consider terrorist content. Of course, with somebody think of the children, there you go, the children and young people do not also experience it in an overwhelming, overwhelmingly. So, but this rationale is being used, 6% and 11%, as one of the reasons why this regulation is important, why this law is important. 
The other is the exposure to, the other reason is the exposure to imagery of violent crimes via social media. So of course, we know that platforms such as Facebook and YouTube contain all sorts of things that people look at. We also know that because of their business models, they sometimes push controversial content or violent content into the proposals that they give to people to watch or to read. So this second part is actually not addressed by this proposal at all. But nevertheless, whenever we talk to the representatives of the commission about why this law is there, they start waving. That was my experience at one of the meetings: the person started waving his phone at me and said, well, you know, there are beheading videos online and I can show you how horrible it is, which I consider to be emotional blackmail at best, but not really a good regulatory impulse. So I guess maybe the commission people are somehow mysteriously affected by that content more than anything else. I don't mean to joke about those videos because of course it is not something that I would want to watch and it is very violent. But I would also argue that the problem is not that the video is there, but that somebody has been beheaded. And this is where we should actually direct our attention and look for the sources of that sort of behavior, and not only try to clean the Internet. The other reason why this law should be enacted is radicalization. Of course, this is a problem for certain vulnerable populations and people, and we can read about it a lot. And there are organizations that are dealing with strategies to counteract radicalization. Again, when we look at evidence, what is the relationship between content that is available online and the fact that people get radicalized at different levels, in different ways? There, we didn't see any research and the commission also did not present any research that would actually point to at least a correlation between the two. So, again, asked about it — how did you come up with this idea without actually showing the support for your claim that radicalization is connected to that? This is a quote from a meeting that happened, public and journalists were there. Again, the person from the commission said, we had to make a guess, so we made the guess that way. The guess being, yes, there is some sort of connection between the content and the radicalization. And then finally, when we read the impact assessment and when we look at the different articles or different explanations that the European Commission posts about the rationale for this law, of course, they bring up the terrorist attacks that have been happening. And they swiftly go from naming the different violent events that have happened in Europe very recently or quite recently — they swiftly make a connection between the fact that somebody took a truck and ran into a group of people, or that somebody was participating in the shooting or organizing the shooting of people enjoying themselves — they swiftly go from this to the fact that regulation of the content is needed. But the fact that you put something in one sentence does not mean it makes sense, right? So this is also not very well documented.
Again, pressed about this, the representative of the European Commission said that, well, we know that, and it has been proven in the investigation that one of the people that were responsible for the Bataklan attack actually used the internet before that happened. Yes. No more comment needed on that one. So, well, clearly there are very good reasons, quote unquote, to, to spend time and time and, and, and citizens money on working on a new law. And I always say that basically these laws are created because not because there is a reason, but because there is a do something doctrine, right. We have a problem. We have to do something. And, and this is how this law, I think, came to be. And the, the, the do something doctrine in this particular case also, of course, encompasses a very broad and blurry definition of that law. I will talk about this more in a, in a moment. It also encompasses measures. We, if we define something that we want to counteract to, we have to basically say what should happen, right. And then the third issue is that the problem is being solved. And there are three measures that I will also explain. One is the removal orders. The other is referrals. And the third are so-called proactive measures. This is, I guess, the part where, where we touch the, the prevention most. And then the third issue is that the, the, the one of the, of the things I also want to talk about is the links between the content that is being removed and the actual investigations or prosecutions that may occur. Because of course it's possible that there will be some content found that actually does document the crime. And, and then what do we do about that? So going forward, I do think that the definition and this law is basically its main principle is to normalize the state control over how people communicate and what they want to say. As it was said before, under the premise of terrorism, we can actually pack a lot of different things because people are afraid of this. And, and we have also examples from other topics, other laws that have been debated in Brussels. One was public sector information directive where everybody was very happy discussing how much public information should be released and where it should come from and how people should have access to it. And part of public information is the information that is produced by companies that perform public services, but they may also be private. For example, sometimes transport, public transport is provided that way. And actually public transport providers were the ones that were saying that they cannot release the information that they have, namely timetables and other information about, about how, how the system works that could be useful for citizens because then it may be used by terrorists. I guess that maybe prevents the potential terrorists from going from bus stop to bus stop and figuring out how the buses go. But we already know that this does not work that way. So, so this is something that actually normalizes this approach. And let's first look at the definition of the, of the proposal as presented by the European Commission. So they say basically, let me read terrorist content means one or more of the following information. So a, inciting or advocating, including by glorifying the commission of terrorist offenses. I do apologize for the horrible, horrible level of English that they use. I don't know why. And that I don't apologize for them, but for the fact that they expose you to it. 
The commission of terrorist offenses, thereby causing a danger that such acts be committed. You won't believe how many times I had to read all this to actually understand what all those things mean. And encouraging the contribution to terrorist offenses. So contribution could be money, could be some, I guess, material resources. Promoting the activities of a terrorist group, in particular by encouraging the participation in or support for a terrorist group. And then instructing on methods or techniques for the purpose of committing terrorist offenses. And then there is also the definition of dissemination of terrorist content that basically means making terrorist content available to third parties on the hosting service provider's services. And you probably see that the dissemination and the fact that third parties are invoked mean that this law is super broad. So it's not only about social media, because making content available to third parties may mean that I am sharing something over some sort of service with my mom. And that's the third party in the understanding of this law. So we were actually super troubled to see that not only does it encompass services that make information available to the public, so the ones that we all can see, like social media, but also that potentially it could be used against services that may let people communicate privately. So that is a big issue. The second thing I want to direct your attention to is the parts that they put in italics. It's how soft those concepts are. Inciting, advocating, glorifying, encouraging, promoting. This is a law that actually potentially can really influence how we talk and how we communicate, what we want to talk about, whether we agree or disagree with certain policies or certain political decisions. And all those things are super soft. And it's very, very hard to say what they really mean. And I want to give you an example of the same content used in three different cases to illustrate this. So let's imagine we have a group of people that recorded a video, and in those videos they say that, well, basically they call themselves terrorists, to make it easier. And they say that they want to commit all sorts of horrible things in specific places. So that constitutes like some sort of a credible threat. And they also brag that they killed someone and they also say that they're super happy about this and so on. And they also of course encourage others to join them and so on and so on. And the three cases would be: one would be that this particular group posts those videos on, I don't know, their YouTube channel. The other case would be that there's a media outlet that reports on it and either links to this video or maybe presents snippets of it. And the third case would be, for example, that there is some sort of group that is actually following what's happening in that region and collects evidence that can then help identify the people and prosecute them for the crimes they commit — like the crime that our exemplary terrorists admitted to committing. And do you think that according to this definition, in your opinion, do you think that there is a difference between those three ways of presenting that content — between the terrorist group that is presenting it on their channel, the media outlet and the activists? There is none. Because this law does not define in any way that the so-called terrorist content is something that is published with an intention of actually advocating and glorifying.
So the problem is that not only does the content that let's say is as we as we may call it manifestly illegal. So somebody kills someone and is being recorded and we know it's a crime. And perhaps we don't want to watch it. Although I do think that we should also have a discussion in our society what we want to see and what we want to see, what we don't want to see from the fact from the perspective that the world is complicated. And we may have the right to access all sorts of information, even that that is not so pleasant and not so easy to digest. So this law does not make this differentiation. There is no mention of how this should be intentional to qualify to be considered so-called terrorist content. And that's a big problem. So as you can see, there is a fallacy in this narrative because these will be the member states and their so-called competent authorities that will be deciding what the terrorist content is. And of course, Europeans have a tendency to think, we have the tendency to think of ourselves as the societies and the nations and the countries that champion the rule of law and that actually respect fundamental rights and respect freedom of speech. But we also know that this is changing rapidly and I also will show you examples of how that changes in this area that we're talking about right now. So I do not have great trust in European governments into making the correct judgment about that. So we have this category of very dubious and very broad terrorist content. And then, so how it's being done, basically all that power to decide what the content, like how to deal with that content is actually outsourced to private actors. So the platforms that we are talking about becomes kind of mercenaries because both the commission and I guess many member states say, well, it's not possible that the judge will actually look through content that is placed online and give proper judiciary decisions about what should, what constitutes freedom of expression and what goes beyond it because it hurts other people or is basically a proof of something illegal. So the platforms will take those decisions. This will be the hosting service providers, as I mentioned. And then also a lot of the reliance that they will do it right is put into the wishful thinking in this proposal that says, well, basically you have to put in terms of service that you will not host terrorist content. So then again, there's a layer in there where the platform, let's say Facebook or Twitter or anyone else actually decides what and how they want to deal with that in detail. Also, one thing I didn't mention is that looking for this regulation and looking at who is the platform that should basically have those terms of service, we realize that Wikimedia, that actually our platforms will actually be in the scope of that. So not only that may affect the way we can document and reference the articles that are appearing on Wikimedia on all those, all those, on the events that are described or the groups or the political situation and what not, but also that, you know, our community of editors will have less and less to say if we have to put a lot of emphasis on terms of service. I do think that we are somehow a collateral damage of this, but also this doesn't console me much because of course internet is bigger than our projects and also we want to make sure that content is not being removed elsewhere. 
So basically the three measures are the removal orders, as I mentioned, and this is something that is fairly straightforward and actually I'm wondering why there has to be a special law to bring it because, to being because the removal order is basically a decision that a competent authority in the member state releases and sends it to the platform. The problem is that according to the commission, the platform should actually act on it in one hour and then again we ask them why one hour and not 74 minutes and they say, well, because we actually know, I don't know how, but they say they do. Let's take it at face value. We actually know that the content is the most, you know, viral and spreads the fastest within, has the biggest range within the one hour from appearance. And then we ask them, well, but how can you know that actually the people that find the content find it exactly on the moment when it comes up? Maybe it has been around for two weeks and this one hour window when it went really viral is like gone, gone. And here they don't really answer, obviously. So this is one of the measures that I guess makes the most sense out of all of that. Then we have the referrals that we call lazy remover orders and this is really something that is very puzzling for me because the referral is a situation in which this competent authority and the person working there goes through the content, through the videos or postings and looks at it and says, well, I think it's against the terms of service of this platform, but does not actually release this removal order, but writes to the platform, lets them know and say, hey, can you please check this out? I'm sorry, I'm confused. Is this the time that I have left or the time? Okay. Good. Time is important here. So basically, you know, they are basically won't spend the time to prepare this remover order and let the platform tell the platform actually to remove it, but they will just ask them to please verify whether this content should be there or not. And first of all, this is the real outsourcing of power over the speech and expression, but also we know how platforms take those decisions. They have a very short time. The people that do it are sitting somewhere most probably where the content is not originating from, so they don't understand the context. Sometimes they don't understand the language. And also, you know, it's better to get rid of it just in case it really is problematic, right? So this is something that is completely increased this great area of information that is controversial enough to be flagged, but it's not illegal enough to be removed by the order. By the way, the European Parliament actually kicked this out from their version, so now the fight is in this negotiation between the three institutions to actually follow this recommendation and just remove it because it really does not make sense. And it really makes the people that release those referrals not really accountable for their decisions because they don't take the decision. They just make a suggestion. And then we have the proactive measures, which most definitely will lead to overpolicing of content. There is a whole very clever description in the law that basically boils down to the point that if you are going to use content filtering and if you're going to prevent content from disappearing, then basically you are doing a good job as a platform and this is the way to actually deal with terrorist content. 
Since, however we define it — again, this is very context-oriented, very context-dependent — it's really very difficult to say based on what sort of criteria and based on what sort of databases those automated processes will be happening. So of course, as it happens in today's world, somebody privatizes the profits but the losses are always socialized, and this is no exception from that rule. So again, when we were talking to the European Commission and asking them why this is not a piece of legislation that belongs to the enforcement of the law and that is then controlled heavily by the judiciary system and by any other sort of oversight that enforcement usually has, they say, well, because when we have those videos of beheadings, they usually don't happen in Europe and they are really beyond our jurisdiction. So of course, nobody will act on it on the very meaningful level of actually finding the people that are killing, that are in the business of killing others, and making sure they cannot continue with this activity. So it's very clear that this whole law is about cleaning the internet and not really about meaningfully tackling societal problems that lead to that sort of violence. Also, the redress, which is the mechanism in which the user can say, hey, this is not the right decision, I actually believe this content is not illegal at all and it's important for me to say this and this is my right and I want it to be up — those provisions are very weak. You cannot actually protest meaningfully against the removal order of your content. Of course, you can always take the state to court, but we know how amazingly interesting that is and how fast it happens. So I think we can agree that there is no meaningful way to actually protest. Also, the state may ask that, for this removal order, the user should not be informed that the content has been taken down because of terrorism or depicting terrorism or glorifying or whatever. So you may not even know why the content is taken down. It will be a secret. For referrals and for proactive measures, well, you know what, go talk to the platform and protest with them. And then, of course, the other question is, so who is the terrorist? Because this is a very important question that we should have answered if we want to have a law that actually is meaningfully engaging with those issues. And of course, well, as you know already from what I said, the European Commission in that particular case does not provide a very good answer, but we have some other responses to that. For example, Europol has created a report and then there was a blog post based on that on the importance of taking down nonviolent terrorist content. So we have the European Commission that says, yes, it's about the beheadings and about the mutilations. And we have Europol that says, you know, actually, this nonviolent terrorist content is super important. So basically what they say, and I quote: poetry is a literary medium that is widely appreciated across the Arab world and is an important part of the region's identity. Mastering it provides the poet with singular authority in Arabic culture. The most prominent Jihadi leaders, including Osama bin Laden and former Islamic State spokesman Abu Muhammad al-Adnani, frequently included poetry in their speeches or wrote poems of their own. Their charisma was closely intertwined with their mastery of poetry.
So we can see the arch that is being made by Europol between a very important aspect of a culture that is beautiful and enriching and about the fact that Europol wants it to see it weaponized. The other part of the blog post was about how ISIS presents interesting activities that their members, their fighters have. And one of them is that they are enjoying themselves and smiling and spending time together and swimming. So what do we make out of that? So the videos of brown people swimming are now terrorist content. This is the blatant racism of this communication really enrages me. And I think it's really a shame that nobody called Europol out on this when the blog post came up. We also have laws in Europe that are different. I mean, this is not the same legislation, but that actually give the taste of what may happen. One is the Spanish law against hate speech. And this is an important part. It didn't happen online, but it shows the approach that basically, first you have legislators that say, oh, don't worry about this, we really want to go after bad guys. And then what happens is that there was a puppeteer performance done by two people, the witch and don Cristobal. And the puppets were actually, this is the kind of punch and judy performance in which this is a genre of theatric performances. I'm sorry, that is kind of full of silly jokes and sometimes excessive and unjustified violence and the full of bad taste. And this is quite serious. And the two characters in the two puppets held the banner that featured and made up terrorist organization. And after that performance, actually they were charged with, first of all, promoting terrorism, even though there is no terrorist organization like that, and then also with inciting hatred. And this is what one of the puppeteers said after describing this whole horrible experience, finally the charges were dropped, so this is good. But I think this really sums up who is the terrorist and how those laws are being used against people who actually have nothing to do with violence. We were charged with inciting hatred, which is a felony created in theory to protect vulnerable minorities. The minorities in this case were the church, the police and the legal system. Then again in Spain, I don't want to single out this beautiful country, but actually unfortunately they have good examples. This is a very recent one. So tsunami democratic in Catalonia created an app to actually help people organize small action in a decentralized manner. And they placed the documentations on GitHub and it was taken down by the order of the Spanish court. And also the, and this is the practical application of such laws online. Also the website of tsunami democratic was taken down by the court. Of course both of that on charges of facilitating terrorist activities and inciting to terrorism. So why is it important? Because of what comes next. So there will be the digital services act, which will be an overhaul of this idea that I mentioned at the beginning, which is that basically platform are not responsible by default by what we put online. And European Commission and other, the European Commission and other actors in the EU are toying with the idea that maybe platform should be somehow responsible. So of course, and it's not only about social media, but basically anybody that any sort of service that helps people place content online. And then the one of the ideas we don't know what it's going to be. It's not there yet. 
It's going to happen at the beginning of the next year. So quite soon, but we can actually expect that the so-called good Samaritan rule will be one of the solutions proposed. What is this rule? This rule basically means if a platform is really going the extra mile and doing a good job in removing the content that is either illegal or again a very difficult category, harmful. I also don't know what that exactly means. Then if they behave well, then they will not be held responsible. So this is basically a proposal that you cannot really turn down because if you run a business, you want to manage the risk of that and you don't want to be fined and you don't want to pay money. So of course you try and overpolice and of course you try and you filter the content and of course you take content when it only raises a question. What sort of content that is, is it neutral or is it maybe making somebody offended or stirred? Of course other attempts we heard from Germany, which is basically that there wasn't a proposal to actually make platforms obliged to give passwords of users of social media, the people that are under investigation or prosecution. And also of course we see that one of the ideas that supposedly is going to fix everything is that well if terrorists communicate through encrypted services, then maybe we should do something about encryption. And there was a petition already on Avas to actually go to actually forbid encryption for those services after one of the terrorist attacks. So of course it sounds very extreme but this is in my opinion the next frontier here. So what can we do? Because this is all quite difficult. So as I mentioned the negotiations are still on, so there is still time to talk to your government and this is very important because of course the governments when they have this proposal on the table that they will be able to decide. Finally who is the terrorist and what is the terrorist content. And also that's on one hand, on the other hand they know that people don't really care all that much about what happens in the EU, which is unfortunately true. They are actually supporting very much the commission's proposals. The only thing that they don't like is the fact that somebody from the police from other country can maybe interfere with content in their language because that's one of the provisions that also is there. So this is what they don't like. They want to keep their territoriality of their enforcement laws intact. But there is still time and we can still do this and if you want to talk to me about what are the good ways to do it I'm available here and I would love to take that conversation up with you. The other is a very simple measure that I believe is always working, is one that basically is about telling just one friend, even one friend and ask them to do the same to talk to other people about this. And there are two reasons to do it. One is because of course then we make people aware of what happens and the other in this particular case that is very important is that basically people are scared of terrorism and they support a lot of measures just because they hear this word. And when we explain what that really means and when we unpack this a little bit we build a resilience to those arguments and I think it's important. 
The other people who should know about this are activists working with vulnerable groups, because of the stigmatization that I already mentioned and because of the fact that we need to document horrible things that are happening to people in other places in the world and also here in Europe. And journalists and media organizations, because they will be affected by this law and by the way how they can report and where they can get the sources for their information. So I think I went massively over time from what was planned. I hope we can still have some questions. Thank you. So yeah, talk to me more about this now and then after the talk. Thank you. Thanks for your talk. We still have time for questions. So please, if you have a question, line up at the mics — we have one, two, three, evenly distributed through the room. I want to remind you really quickly that a question normally is one sentence and ends with a question mark. Not everybody seems to know that. So we start with mic number two. So I run a Tor relay in the United States. It seems like a lot of these laws are focused on the notion of centralized platforms. Do they define what a platform is, and are they going to extradite me because I'm facilitating a Tor onion service? Should I answer? Okay. Yeah. So they do and they don't, in a way, in that the definition — it's based on basically what a hosting provider is in European law — is actually very broad. So it doesn't take into account how big you are or how you run your services. The bottom line is that if you allow people to put content up and share it with, again, a third party — which may be the whole room here, it may be the whole world, but it may be just the people I want to share things with — then you're obliged to use the measures, or to comply with the measures, that are envisioned in this regulation. And there is a debate also — in the parliament it was taken up and narrowed down actually to communication to the public. So I guess then, as you correctly observe, it is more about the big platforms or about the centralized services. But actually in the commission version nothing makes me believe that only them will be affected. On the contrary, also the messaging services maybe. OK, next question, mic number three. It's a bit like the upload filters in the copyright directive. It was a really similar debate, especially on small companies, because at that time the question was they tried to push upload filters for copyright content. And the question was how does that fit with small companies, and they still haven't provided an answer to that. And the problem is they took the copyright directive and basically inspired themselves from the upload filters and applied it to terrorist content. And it's again the question how does that work with small internet companies that have to have someone on call during the night and things like that. So even big providers, I heard, don't have the means to properly enforce that. So this is a killer for the European internet industry. And I think that's the question. I want to give a short reminder on the one-sentence rule. We have a question from the Internet. Signal Angel, please. Yes. The question is: wouldn't decentralized social networks bypass these regulations? I'm not a lawyer, but I will give an answer to this question that a lawyer would give. I maybe spent too much time with lawyers.
That depends — and it really does — because this definition of who is in scope is so broad that a lot depends on the context. A lot depends on what is happening, what is being shared and how. So it's very difficult to say. I just want to say that we also had this conversation about copyright, and many people came to me last year at Congress — I wasn't giving a talk but I was at the talk about the copyright directive and the filtering — and many people said, well, actually, you know, if you're not using those big services you will not be affected, and actually when we share peer to peer then this is not an issue. But actually this is changing, and there is actually a decision of the European Court of Justice — and the decisions are not basically the law, but they are very often then followed and incorporated — and this is the decision on The Pirate Bay. And in this decision the court says that, well, the argument that Pirate Bay made was basically: we're not hosting any content, we're just connecting people with it. And in short the court said, well, actually we don't care, because you organize it, you optimize the information, you bring it to people. And the fact that you don't share it does not really mean anything, and you are liable for the copyright infringements. So again, this is about a different issue, but this is a very relevant way of thinking that we may expect will be translated into other types of content. So again, the fact that you don't host anything but you just connect people to one another will not be, may not be, something that will take you off the hook. Microphone number three. Do these proposals contain — what sort of repercussions do these proposals contain for filing removal requests that are later determined to be illegitimate? Is it just a free pass to censor things, or are there repercussions? Just to make sure I understand, you mean the removal orders, the ones that say remove content and that's it? Yeah, if somebody files a removal order that is determined later to be completely illegitimate, are there repercussions? Well, the problem starts even before that, because again the removal orders are being issued by competent authorities. So there's a designated authority that can do it; not everybody can. And basically the order says: this is the content, this is the URL, this is the legal basis, take it down. So there is no way to protest it, and the platform can only not follow this order within an hour in two situations. One is force majeure — basically there is some sort of external circumstance that prevents them from doing it, I don't know, a complete power outage or a problem with their service, so that basically they cannot access and remove or block access to this content. The other is if the request — the removal order, I'm sorry — contains errors that actually make it impossible to act on, so for example there's no URL or it's broken and it doesn't lead anywhere. And these are the only two situations; in the rest, the content has to be removed. And there is no way for the user and no way for the platform to actually say, well, hold on, this is not the way to do it — and, after it's been implemented, to say, well, that was a bad decision. As I said, you can always go to court with your state, but not many people will do it.
And this is not really a meaningful way to address this. Next question, mic number three. How much time do we have to contact the parliamentarians to inform them that maybe there is some big issue with this? What's the worst-case timetable at the moment? That's a very good question and thank you for asking, because I forgot to mention that this actually is quite urgent. So the commission wanted to, like usually in those situations — the commission wanted to close the thing until the end of the year and they didn't manage, because there is no agreement on those most pressing issues. But we expect that the best case scenario is until March, maybe until June. It will probably happen earlier. It may be the next couple of months. And there will be lots of meetings about that. So this is more or less the timeline. There's no sort of external deadline for this, right, so this is just an estimation and of course it might change. But this is what we expect. We have another question from the Internet. Does the law consider that such content is used for psychological warfare by nations? I'm sorry, again please? This content is pictures or video of whatever — does this law consider that such content is used for psychological warfare? Well, I'm trying to see how that relates. I think the law does not go into details like that, in a way, which means that I can go back to the definition: basically it's just about the fact that if the content appears to be positive about terrorist activities, then that's the basis for taking it down. But there's nothing else that is being actually said about it; it's not more nuanced than that. So I guess the answer is no. One last question from mic number two. Are there any case studies published on successful application of similar laws in other countries? I ask because we have had similar laws in Russia for 12 years and it's not that useful as far as I see. Not that I know of. So I think it's also a very difficult thing to research, because we can only research what we know happened, right, in a way that you have to have people that actually are vocal about this and that complain about these laws not being, you know, enforced in a proper way. So for example, content that is taken down is completely about something else, which also sometimes happens. And that's very difficult. I think the biggest question here is whether there is an amount of studies documenting that something does not work that would prevent the European Union from actually having this legislative fever. And I would argue that no, because as I said, they don't have really good arguments or really good numbers to justify bringing this law at all, not to mention bringing the ridiculous measures that they propose. So what we say sometimes in Brussels when we're very frustrated — we were hoping, you know, being there and advocating for human rights, that we could contribute to evidence-based policy, but actually what's happening is policy-based evidence. And this is the difficult part. So I am all for studies and I am all for presenting information that, you know, may possibly help legislators.
There are definitely some MEPs, and probably some people in the Commission — maybe they are simply not allowed to voice their opinion, because it is a highly political issue — who would wish to have those studies, would wish to be able to use them, and who believe in that. But it just doesn't translate into the political process. Okay, time's up. If you have any more questions, you can come up and approach the speaker afterwards. Thanks for the talk. Thank you very much.
|
We will examine the European Commission’s proposal for a regulation on preventing the dissemination of terrorist content as a radical form of censorship. Looking at the rationale and arguments of policy-makers in Brussels, we will discuss the normalisation of a “do something” doctrine and of “policy-based evidence”. How can citizens and activists influence that legislative process? And what does it mean if they won’t? Fear of terrorism as a tool for managing dissent in society is utilised almost everywhere in the world. This fear facilitates the emergence of laws that give ever more powers to law enforcement, permanently raising threat levels in cities around the world to “code yellow”. A sequel of that show is now coming to a liberal democracy near you: the European Union. The objective of the terrorist content regulation is not to catch the bad guys and girls, but to clean the internet of images and voices that incite violence. But what else will be cleaned from in front of our eyes by a law with such wide definitions and disproportionate measures? In the Brussels debate, human rights organisations navigate a difficult landscape. On one hand, acts of terrorism should be prevented and radicalisation should be counteracted; on the other, how can these objectives be achieved with such a bad law? Why are Member States ready to give up judicial oversight of free speech and hand that power to social media platforms? Many projects documenting human rights violations are already affected by arbitrary content removal decisions taken by these private entities. Who will be next? In the digital rights movement we believe that the rigorous application of the principle of proportionality is the only way to ensure that laws and subsequent practices will not harm the ways we exercise freedom of speech online. Drawing on my experience as a lobbyist for the protection of human rights in the digital environment, I want to engage participants in a conversation about the global society of the near future. Do we want laws that err on the side of free speech and enable exposure to difficult realities, at the risk of keeping online content that promotes or depicts terrorism? Or do we “go after the terrorists” at the price of stifling citizen dissent and obscuring that difficult reality? What can we do to finally have that discussion in Europe, now that there is still time to act?
|
10.5446/53103 (DOI)
|
So, hello and welcome to a quantum computing talk by Andreas, who gave a talk exactly five years ago and it's almost exactly five years ago. It's like one year and two or three hours and he gave a talk at 31C3 about quantum computing titled Let's Build a Quantum Computer and I think back then we basically had just found out that Google was planning to partner with the University of California in Santa Barbara to try to build a quantum computer. Of course now we're five years later, we've had a lot of developments I think in the field, we've had some big announcements by Google and other groups and Andreas has now come back to give us an update. So please welcome him to the stage. Okay, hi everyone. So I'm very happy to be here again after five years of giving the first version of this talk. My motivation for giving this talk is quite simple. I was often, so I did my PhD on experimental quantum computing from 2009 to 2012. I left that field afterwards to work in industry but always people would come to me and would ask, hey Andreas, did you see this new experiment there? Did you see you can use quantum computers on Amazon's cloud now? Did you see Google has this new quantum thing? Is this really working? Can we use quantum computers yet? Why are you not working on this? I couldn't really answer the question. So that's how I said, okay, I want to go back to this and find out what happened in the last five years since I finished my PhD, what kind of progress was made in the field and do we actually have quantum computers today that are working already or are we not yet quite just there? So we want to do it like this. I want to first give you a short introduction to quantum computing. So just that we have a common understanding of how that works and why it's interesting. Then I will show you a small example of experimental quantum speedup, notably the work I did with my colleagues in CECLA during my PhD thesis. Then we will discuss some of the challenges and problems, why we were not able to build a real quantum computer back then and I will discuss some approaches that have come up since then that would basically allow us to do that eventually and then we will of course discuss Google's recent experiment in collaboration with the University of Santa Barbara where they showed basically a very impressive quantum computing system with 53 qubits. We will look exactly and try to understand what they did there and see if that's really like a quantum computer in the real sense already or if there's still something missing. And in the end of course I will try to give you another small outlook to see what we can expect in the coming years. So in order to talk about quantum computing we need to first talk about classical computing, just a little bit. You might know that classical computers they work with bits, so zeros and ones. They store them in so called registers. This here for example is an example of like a bit register. Of course the bits themselves they're not very interesting but we have to do stuff with them so we can compute functions over those bit registers. That's what like a modern CPU is doing in a simplified way of course. So we take some input bit register values. We compute some function over them and then we get an output value. So a very simple example would be a search problem. I will discuss this because later we will also see in the experiment how we can use a quantum computer to solve this. So I just want to motivate why this kind of problem can be interesting. 
And it's a very silly search function so it takes two bits as inputs and it returns one bit as an output indicating whether the input bits are the solution to our search problem or not. And you could imagine that we have a very, very complicated function here. So for example a function that calculates the answer to life, the universe and everything. Well not a complete answer but only the first two bits. So really complicated to implement and very costly to execute so we might think that it might take like millions of years to run this function once on our inputs. So we want to find the right solution to that function with as few function calls as possible of course. Overall there are four possibilities. So four input states, 0, 0, 0, 1, 1, 0 and 1, 1 that we can apply our function to and only for one of these states the 0, 1 state because the answer is 42. So that's 0 times 1 plus 2 plus some other stuff. So the first two bits are 0, 1 for this value it returns a 1 for all of the other values the function returns a 0. Now let's think about how we can implement a simple search function. And in principle if we don't know anything about the function so we can imagine it's so complicated that we can't do any optimizations. We don't know where to look so we have to really try each of these values in sequence. And for this we can have a simple algorithm so we can start initializing our bit register with 0, 0 value. Then we can call the function on that register. We can see what the result is. In this case the result would be 0. If the result would be 1 then we know okay we have found our solution so we can stop our algorithm but in this case the result is 0. So we can just go back to the left value and to the left step and increase the register value to go to 0, 1 and try again. And in the worst case depending if you're optimistic or not we have to do this three or four times. So if you want to really be sure that we find the right answers we have to do it four times in the worst case. And this is so to say the time complexity or the computational complexity of the search. If you imagine that in our algorithm the most expensive operation is really calling this function f then the calling time or the complexity of calling this function will be what dominates the complexity of our algorithm. And in this case the complexity is very similar, simple here because it's linear in the number of the search space. So if you have n states, for example in our examples we have four different input states, we also need to evaluate the function four times. And please keep this graph in mind because we're going to revisit that later a bit to see if we can do better with a different paradigm of computing. And so classically this is really the best we can do for the search problem here because we don't know anything else about the function that would allow us to optimize that further. But now the interesting thing is that we might imagine that we don't use classical computing for solving our problem. And in fact the discipline that we call quantum computing was kind of like inspired by a lecture or like a seminar of Richard Feynman who thought about how it would be possible to similar and or if it would be possible to simulate quantum systems on a classical computer, a Turing machine if you want. 
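Before moving on to Feynman's observation, here is a minimal sketch of the brute-force classical search just described. The oracle below is a toy stand-in of our own (the "first two bits of 42" function in the talk is only hypothetical anyway); the point is just that, knowing nothing about f, the only option is to call it once per candidate, so the worst case is N = 2^n calls.

```python
from itertools import product

def f(bits):
    # Toy stand-in for the expensive black-box function from the talk:
    # it returns 1 only for the input (0, 1).
    return 1 if bits == (0, 1) else 0

def classical_search(oracle, n_bits):
    """Try every n-bit input in sequence until the oracle returns 1."""
    calls = 0
    for candidate in product((0, 1), repeat=n_bits):
        calls += 1
        if oracle(candidate) == 1:
            return candidate, calls
    return None, calls

print(classical_search(f, 2))   # ((0, 1), 2) -- the worst case would be all 4 calls
```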
And Feynman found that quantum mechanics is so complicated for classical computers that it is not possible to do this efficiently, but that if you used the laws of quantum mechanics themselves to build a computer — a quantum computer — then it would be possible to simulate these quantum systems. This sparked the whole idea of using quantum mechanics to do computation, and in the following years solutions were found not only for simulating quantum systems with such a quantum computer, but also for problems not related to quantum mechanics at all, such as search problems or factorization problems. Quantum computers can do computation faster than classical computers because of several differences in how they work. One of the key differences is superposition, which means that on a quantum computer we are not limited to loading a single value into our register — for example the first value there, with only zeros — but can load all of the possible state values at once, in parallel. This is a so-called quantum superposition state, where each of these values has an amplitude, shown on the left, which is basically a complex number relating it to the other states. If you have n qubits, the total number of basis states is 2 to the power of n, so with a large quantum bit register the number of states becomes really, really large, and this can be very powerful for computation. In the rest of the talk I will indicate this by drawing the register as a small rectangle, to show that it does not hold a single value but a superposition of all possible input values to our function. There is also a condition, the so-called normalization condition, that constrains these amplitudes: the sum of the squares of their absolute values has to be one, which means that the total probability of all of these states together is 100%. This is the first ingredient that makes quantum computers interesting for computation, because any classical function that we can run on a classical computer can also be implemented on a quantum computer — the difference being that we can run it not just on one value at a time, but on a superposition of all possible input values. If you like, you get a massive parallelization where you run the computation on all possible inputs at once and calculate all of the possible output values. That sounds very cool and very useful; there is a catch that we will discuss later — it is not quite as easy as that — but it is one part of the power that makes quantum computing interesting. The next difference is that on a quantum computer we can run not only classical gates or functions but also so-called quantum gates. Quantum gates differ from classical operations like AND or OR in that they do not just act on two bits in a predictable way: they act on the whole qubit state at once and can create so-called entangled states, which are really weird quantum states in which we cannot separate the state of one qubit from the state of the other qubits.
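As a small numerical aside on the amplitude picture above — the notation here is ours, not from the slides: a uniform two-qubit superposition is just a vector of 2^n complex amplitudes, and the normalization condition can be checked directly.

```python
import numpy as np

n = 2
dim = 2 ** n                                      # 2**n basis states: |00>, |01>, |10>, |11>
psi = np.ones(dim, dtype=complex) / np.sqrt(dim)  # equal amplitudes of 1/2 each

# Normalization condition: squared absolute amplitudes sum to one (100% total probability)
print(np.sum(np.abs(psi) ** 2))                   # 1.0

for k in (2, 10, 30, 53):
    print(k, "qubits ->", 2 ** k, "amplitudes")   # the state space grows exponentially
```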
Coming back to entangled states: if you try to make a small change to one of the qubits in such a state, you also change the other qubits, so we can never separate the qubits out the way we can with classical bits. This is another resource that quantum computing can use to solve certain problems faster than a classical computer could. Now, the catch I mentioned is that we do not only want to compute with our qubit register, we also want to read out the result of the computation. And if we try that — we run a computation and then measure the state of the quantum register — we have a small problem. The measurement process is actually quite complicated, but in a very simplified picture you can imagine that God throws some dice: if we have a quantum state vector with amplitudes a1 to an, shown on the left, then a state is picked at random from the possible states, and the probability of getting a given state as the result is proportional to the square of the absolute value of its amplitude. That means we can perform the computation on all possible input states of our function, but when we read out the result, we only get one of the possible outcomes. At first glance this destroys the utility of quantum computing: we can compute on all states in parallel, but we cannot read out the result — not a very interesting computer, because we cannot easily learn anything about the output. But it turns out that there is still a way to use quantum computing to be faster than a classical computer. The first practical algorithm for a search problem — notably the search problem we discussed before — was given by Lov Grover, a researcher at Bell Labs, who found the algorithm that is named after him: a search algorithm which, as we will see, solves our search problem much more efficiently than any classical computer could. In my opinion it is still one of the most beautiful quantum algorithms, because it is very simple and very powerful, and unlike for other algorithms, such as Shor's factorization algorithm, there is a proof that Grover's algorithm will always be faster than any classical algorithm. So it is a very nice example of a quantum algorithm that is genuinely more powerful than a classical one. Let's see how it works. There are three steps in the algorithm. First we initialize our qubit register, our state vector, to a superposition of the four possible input values — 0 0, 0 1, 1 0 and 1 1 — all with equal amplitude. Then we evaluate the function on this input state, and with the special encoding we chose, the function marks the solution of our problem by changing the sign of the amplitude of the corresponding state. You can see that in the output state the 0 1 state has a negative sign, which means it is the solution of the problem we are searching for. Still, if you did the read-out directly now, you would not learn anything about the solution, because as you can see, the magnitude of the amplitude is still equal for all four states.
So if you made a read-out now, you would only get one of the four possible states at random, and you would not learn the solution of the problem with 100% probability. To do that, we need to apply another step, the so-called Grover or diffusion operator, which takes this phase difference — the sign difference between the individual quantum states — and applies a quantum operator that transfers the amplitude from all of the states that are not a solution of our problem to the state that is the solution. For this case with two qubits, so four possible values, only one such step is needed, and after executing it you can see that the amplitude of our solution state is 1 while the amplitudes of the other states are all 0. That is great, because now we can just measure the qubits and find the solution to our search problem with 100% probability. And this is where the magic of quantum mechanics shows, because we evaluated the function only once — remember that in the first step we call the function a single time, on all of the values in parallel. So in terms of computational complexity we are far below the classical algorithm, and still we can identify the solution to our search problem, in this case with 100% precision. This works not only for two qubits but also for larger qubit registers. For example, with 10 qubits you need to execute steps two and three a few more times: instead of one iteration you need about 25 iterations, which is still much better than the 1,024 evaluations you would need if you really looked at every possible input of the function in a classical algorithm. So the speed-up is quadratic in the size of the solution space. Looking at the complexity plot again, we can now compare the classical algorithm with the quantum algorithm, the Grover search, and as you can see, the time complexity — the number of evaluations of f that we need — is only the square root of N, where N is the size of the search space. This shows that the quantum computer really has a speed advantage over the classical computer, and the nice thing is that the larger the search space becomes, the more dramatic the speed-up: for a search space with one million elements, we only have to evaluate the search function about 1,000 times instead of one million times. Now, how can we build a system that realizes this quantum algorithm? Here I show the quantum processor that I built with my colleagues at Saclay during my PhD; if you want more information about it, you should check out my last talk, and I will only go briefly over the different aspects here. We use so-called superconducting qubits, transmon qubits, to realize our quantum processor. You can see the chip here at the top; it is about one centimeter across, with the two qubits in the middle. The other snake-like structures are coplanar waveguides through which we manipulate the qubits using microwaves — frequencies similar to the ones used by mobile phones — to control and read out the qubits. And if you look in the middle, you can see the red area, which contains the qubit itself.
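Before zooming in further on the hardware, here is a small simulation of the two-qubit Grover search just walked through, written as plain linear algebra on the four-dimensional state vector. This is an illustrative sketch of ours, not code for the Saclay or Google devices: the oracle flips the sign of the marked state |01>, the diffusion operator reflects all amplitudes about their mean, and after a single iteration all of the weight sits on the solution.

```python
import numpy as np

N = 4                                   # search space of a 2-qubit register
marked = 0b01                           # index of the solution state |01>

psi = np.ones(N) / np.sqrt(N)           # step 1: uniform superposition

oracle = np.eye(N)                      # step 2: one oracle call flips the sign
oracle[marked, marked] = -1             #         of the solution's amplitude

diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)   # step 3: reflect about the mean

psi = diffusion @ oracle @ psi
print(np.round(psi, 3))                 # [0. 1. 0. 0.] -> measurement yields |01> with certainty
print(np.abs(psi) ** 2)                 # probabilities concentrate entirely on the solution
```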
Zooming in further, you can see the actual qubit structure, which is just two layers of aluminum placed on top of each other. When cooled to a very low temperature they enter a so-called superconducting state, and we use the superconducting phase between these two layers to realize our qubit. There is also a coupler in the middle — the green element you see — which allows us to run quantum gates, operations between the two qubits. To use this in practice, we put it in a dilution cryostat, which is really just a very fancy refrigerator, and cool it down to about 10 millikelvin, just above absolute zero. You can see the sample holder on the left with the chip mounted to it; the whole thing goes into the dilution fridge, is cooled down, and can then be manipulated through the microwave transmission lines, as I said. What we did was implement the Grover search for the two qubits — the algorithm I discussed before; I don't want to go too much into the details. The results are obtained by running the algorithm many times, and as you can see we did not achieve 100% success probability, but over 50% in most cases, which is not perfect but good enough to show that a quantum speed-up was really possible. If you ask why 100% is not possible, or why we could not build larger systems — what kept us from building a 100 or 1,000 qubit quantum processor — well, there are several things. We make errors when we manipulate the qubits: the microwave signals are not perfect, so we introduce small errors in the single-qubit and two-qubit interactions. We would also need a really high degree of connectivity to build a large-scale quantum computer — if every qubit were connected to every other qubit, that would be on the order of half a million connections for a 1,000 qubit processor, which is very hard to realize on the engineering side. And our qubits themselves have errors, because the environment they sit in, the chip and its vicinity, introduces noise that destroys the quantum state and limits how many operations we can perform on a single qubit. There is a possible solution, the surface code architecture, which was introduced back in 2009 by David DiVincenzo from the Jülich Research Center, and the idea is that we do not build a quantum processor with full connectivity — we do not connect every qubit to every other qubit.
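As a rough back-of-the-envelope comparison of the wiring problem — these are our own illustrative numbers, not figures from the paper: all-to-all coupling needs one coupler per pair of qubits, which grows quadratically, while the nearest-neighbour grid described next needs only about two couplers per qubit.

```python
def all_to_all_couplers(n):
    """One coupler for every pair of qubits."""
    return n * (n - 1) // 2

def grid_couplers(rows, cols):
    """Nearest-neighbour couplers on a rows x cols lattice."""
    return rows * (cols - 1) + cols * (rows - 1)

print(all_to_all_couplers(1000))   # 499500 -- the wiring nightmare of full connectivity
print(grid_couplers(25, 40))       # 1935   -- the same 1000 qubits on a grid
```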
Instead, each qubit is connected only to its four neighbors via a so-called tunable coupler. This is of course much easier, because you do not need so many connections on the chip, and it turns out that you can still run most of the quantum algorithms that you could run on a fully connected processor — you just pay a penalty for the limited connectivity. The other nice thing is that you can encode a single logical qubit, the qubit you actually want to calculate with, in, for example, five physical qubits, so that all of these qubits on the chip together form one logical qubit. That allows you to do error correction: if one of the qubits suffers an error, for example a relaxation or a dephasing error, you can use the other qubits, prepared in exactly the same way, to correct the error and continue the calculation. This is quite important, because in superconducting qubit systems there are always errors present and we will probably never eliminate all of them, so we need a way to correct errors while we perform the computation. Now, the Google processor follows this surface code approach. Here I show an image from the Nature article, which was released about a month ago. It is a very impressive system, I find: it contains 53 superconducting qubits and 86 tunable couplers between those qubits, and they achieve a fidelity — the success probability, if you like, of performing single- and two-qubit gates — higher than 99%, which is already very good and almost enough to realize quantum error correction as I discussed before. With this system you can run quite complex quantum algorithms, much more complex than the ones we ran in 2012; in the paper, for example, they run sequences of 10 to 20 individual quantum gates. And just to give you an impression of the cryogenic and microwave engineering, this is the dilution cryostat where the qubit chip is mounted — quite a bit more complex than the system we had in 2012, so it really looks much more like a professional quantum computer. If you ask a physicist why you would build such a system, the answer would of course be: well, it's awesome, so why not? But if an organization like Google puts 100 or 200 million US dollars into such research, they also want to see results, and that is why the team under John Martinis tried to use this quantum processor for something that shows how powerful it is — that it can outperform a classical computer. This sounds easy, but it is actually not so easy to find a problem that is both doable on this quantum computer, with its slightly more than 50 qubits and roughly 80 couplers, and at the same time impossible to simulate on a classical computer. We could think, for example, about factoring numbers into their prime components, which is of course always the motivation of certain agencies to push for quantum computing, because it would allow them to read everyone's email — but unfortunately both the number of qubits and the number of operations required are much too high to realize something like this on this processor. The next thing that would be very interesting is the simulation of quantum systems.
So if you have molecules or other quantum systems that have many degrees of freedom, it's very difficult to simulate those on classical computers. On a quantum computer, you could do it efficiently, but again, since the Google team did not do this, I assume the quantum computer was just, or they didn't have like a feasible problem where they could actually perform such a simulation that would not be performable or like calculable on a classical computer. So but in the near term, in the future, this might actually be a very relevant application of such a processor. The last possibility would be to run, for example, the search algorithm that we discussed before, but again, for the number of qubits that are in the system and the size of the search space, it's still not possible because the algorithm requires too many steps and the limited coherence times of the qubits in this processor make it impossible to run this kind of like algorithm there, at least to my knowledge. So what they did then was therefore to perform a different kind of experiment, one that was doable with the processor, which is so-called randomized benchmarking. And in this case, what you do is that you, instead of like running an algorithm that does something actually useful like a search algorithm, you run just a random sequence of gates. So you have, for example, your 53 qubits, and then you run first like some single qubit gates. So you change the qubit values individually. Then you run two qubit gates between random qubits to create like a superposition and an entangled state, and in the end, you just read out the resulting qubit state from your register. And this is also very complex operation, so you really need a very high degree of like control of your quantum processor, which the Martinez, the Google team, was able to achieve here. It's just not solving a really practical problem yet, so to say. But on the other hand, it's a system or it's an algorithm that can be run on the quantum computer easily, but which is, as we will see, very difficult to simulate or reproduce on a classical system. And the reason that it's so difficult to reproduce on a classical system is that if you want to simulate the action of these quantum gates that we run on the quantum computer using a classical machine, a classical computer, then for every qubit that we add, roughly the size of our problem space quadruples. So you can imagine if you have like two qubits, then it's very easy to simulate that. You can do it on your iPhone or your computer, for example. If you add more and more qubits, though, you can see that the problem size becomes really big, really fast. So if you have like 20 qubits, 30 qubits, for example, you cannot do it on a personal computer anymore. You will need like a supercomputer. And then if you keep increasing the number of qubits, then at some point, in this case, 50 qubits or 53 qubits, it will be impossible even for the fastest supercomputers that we have right now. And that's what is called the so-called quantum supremacy regime here for this randomized gate sequences, which is basically just the area here on the curve that you see that is still doable for this quantum processor that Google realized, but is not simulatable or verifiable by any classical computer, even like a supercomputer, in a reasonable amount of time. And if we can run something in this regime here, it proves that we have a quantum system that is able to do computation, which is not classically reproducible. 
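To put a rough number on why the classical simulation blows up: this is only a lower-bound estimate of our own, counting just the memory needed to store a full state vector of 2^n complex amplitudes (the Schrödinger-style and density-matrix simulations alluded to in the talk cost even more).

```python
def state_vector_gib(n_qubits):
    """Memory for 2**n complex128 amplitudes (16 bytes each), in GiB."""
    return (2 ** n_qubits) * 16 / 2 ** 30

for n in (20, 30, 40, 53):
    print(f"{n} qubits: {state_vector_gib(n):,.2f} GiB")
# 20 qubits: 0.02 GiB            -- a laptop is fine
# 30 qubits: 16.00 GiB
# 40 qubits: 16,384.00 GiB       -- supercomputer territory
# 53 qubits: 134,217,728.00 GiB  -- roughly 128 PiB, far more RAM than current machines offer
```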
So it's something that really can only be done on a quantum computer. And that's why running this kind of experiment is interesting, because it really shows us that quantum computers can do things that classical computers cannot do, even if there are, for the moment, not really useful. And the gate sequence that they run looks something like this. So we can see again, like, here, five, four of the qubits that the Google team has. And they run sequences of operations of different lengths, then perform a measurement, and then just sample the output of their measurements. So what they get as a result is a sequence of long bit strings, so zeros and ones for each experiment they run, and to reproduce, to check that the quantum computer is actually doing the right thing, you have to compare it to the results of a classical simulation of this algorithm. And that's, of course, a problem now, because we just said that we realized the quantum computer, a quantum processor, which is able to do this computation on 53 qubits, and that no classical computer can verify that. So the question is now, how can they prove or show that what the quantum computer calculates is actually the correct answer, or that he does not just produce some garbage values? And that's a very interesting question, actually, and the way they did it here is by extrapolation. So instead of, for example, solving the full circuit, so that contains all of the connections and all of the gates of the full algorithm, they created simplified circuits in two different ways. So, for example, they cut some of the connections between the qubits in the algorithm, so that the problem space would become a bit smaller, or in the other case with the alited circuit, they just changed the operations in order to allow for some shortcuts in the classical computation or classical simulation of the algorithm. So in both cases, they were able to then verify the result of the quantum computation with this classical simulation performed on a supercomputer, and then they basically just did this for a larger and larger number of qubits. They plotted the resulting curve, and they extrapolated that to the supremacy regime to see that, okay, based on the error models that they developed, based on the simulation, they can, with a certain confidence, of course, say that probably the quantum computer is doing the right thing, even in the supremacy regime, even though they cannot verify it using their classical simulations. And in case the quantum computer did wrong, still, they have also archived the results, so in maybe 10 years, when we have better supercomputers, we might be able to just go back to them and then verify them against the 53 qubit processor here, by which time, of course, they might already have a larger quantum processor again. 
So the key results of this, I would say, are that for the first time they show that a quantum computer really can beat a classical computer, even if only on a very artificial and probably not very useful problem, and that the experiment demonstrates an astounding level of control over such a medium- to large-scale quantum processor. Even five or six years ago, in 2012 or 2013, the systems we worked with mostly consisted of three or four qubits, and we could barely fabricate the chips and manipulate them well enough to get algorithms running — so when I see a 53 qubit processor with such a high degree of control and fidelity, I can really say that amazing progress has been made in the last five years, especially by the Google Martinis team. I also think it is a very good milestone on the way to a fully working quantum computer, because it nicely shows the limitations of the current system and gives a good direction for new areas of research, for example in error correction, where the different aspects of the quantum processor can be improved. The research has also been criticized from various sides, so I want to go over a few of the points. One criticism is of course that it does not do anything useful — there is really no applicability of this experiment. While that is true, it is very difficult to go from a basic two-qubit quantum processor to a system that can really factorize numbers or do anything useful, so we will always need problems that are hard enough to prove the progress we make on the road to quantum computing, yet solvable within a reasonable time frame of a couple of years. In that sense, while quantum supremacy does not show anything useful in terms of the computation performed, I think it is still a very good benchmark problem for any kind of quantum processor, because it requires very good control over the system and the ability to run this number of gates at very high fidelity, which really is the current state of the art. The authors also took some shortcuts: for example, they used two-qubit gates which are not, as we call them, canonical gates. That might be problematic, because to run a quantum algorithm on the system you need to implement the specific quantum gates the algorithm requires, and since they only have these non-canonical gates — which are still universal, by the way — they could not do that directly; with some modification of the system, though, it should also be possible. The last criticism might be that this is a problem engineered to match a solution — but, as I said, we do need problems that we can realistically solve on such a system. As with the other points, I think that if you want to build a large-scale quantum processor you need to define reasonable milestones, and having a benchmark that other groups can also run their processors against is a very good thing, because it makes progress visible and makes it easy to compare how different groups, companies or organizations are doing in terms of the number of qubits and the control they have over them. So if you want to draw a kind of Moore's law for quantum computing, there are several quantities you could plot.
Here I show, for example, the number of qubits that have been realized in superconducting systems over the years. This is of course an incomplete picture, because the number of qubits alone does not tell you much about a system — we could make a chip with 1,000 or 10,000 qubits today, but without connectivity and without controllability of the individual qubits that chip would not be any good. So there are other things to take into account, as I said: the coupling between individual qubits, the coherence time, the fidelity of the qubit operations. The qubit count is really just one small aspect of the whole problem space. But I think it shows nicely that in recent years there has been tremendous progress in the power of these superconducting systems. The original qubit, developed at NEC in Japan by Professor Nakamura around 2000, had very bad coherence times and very bad properties, but it showed for the first time that you could coherently control such a system. It did not take long for other groups, for example the Quantronics group in Saclay, to pick up this work and keep improving it, and after a few years we already had qubits with a few hundred nanoseconds or even a microsecond of coherence time — about three orders of magnitude better than before. Then there were further advances by groups in the US, for example the Schoelkopf lab at Yale, which developed new qubit architectures that allowed the qubits to be coupled more efficiently and controlled better. And then there are groups like the research group at IBM, or companies like Rigetti, that took these ideas and added engineering and their own research on top to make the systems even better, so that by 2018 we already had systems with 17 or 18 qubits. And now, with this Google and UC Santa Barbara work, we have the first systems with more than 50 qubits, after not even 20 years — which I think is quite some progress in this area. If you ask me how close we are to an actually working quantum computer, it is still very difficult to say, I find. The group has proven quantum supremacy for this randomized algorithm, but to do something applicable or useful with such a quantum system I think we need at least another 50 to 100 qubits and a larger number of qubit operations — though it is really hard to say. That is also why I say: don't believe too much in this chart, because there is also a lot of work happening in the theory of quantum algorithms. We are still discovering new approaches to quantum simulation, for example, and right now many research groups are looking for ways to make these medium-scale quantum computers — machines with 50 or 100 qubits — already useful for quantum simulations. So it is really an interplay between what theory can give us in terms of quantum algorithms and what we can build experimentally as a quantum processor. In my opinion, quantum simulation will definitely be where we see the first applications, in the next three to five years, I would say. Other things, like optimization, I have to admit I am less of an expert in.
I think those are a bit more complex, so we will probably see the first applications in those areas a bit later. And the big motivation for the three-letter agencies is, of course, always factoring — the breaking of cryptosystems — which is the most challenging application, because you would need both very large numbers of qubits, at least 8,000 qubits for an 8,000-bit RSA key for example, and a very large number of qubit operations, since running Shor's algorithm involves a lot of steps on the quantum processor. From my perspective this is the most unrealistic application of superconducting quantum processors in the coming years — but then, if somebody did build such a quantum computer, maybe we simply would not know about it, so who knows? So, to summarize: quantum processors are getting seriously complex and very impressive, and we have seen tremendous progress in the last five years. I still think we are something like five years away from really practical quantum computers, and there are challenges we still need to overcome, for example in error correction, in quantum gate fidelity and in the general architecture of these systems — and there might also be challenges we have not even identified yet, which we will only encounter at a later stage, when trying to build really large-scale quantum processors. As a last point, I want to stress again that quantum computing research is not only done by Google or IBM. There are a lot of groups around the world involved, both in theory and in experiment, and as I said before, many of the breakthroughs we use today for building quantum processors were made in very different places — Japan, Europe, the USA — so it is really a global effort. And when you see the marketing and PR that companies like Google and IBM do, maybe don't believe all of the hype they create, and keep a down-to-earth view of the limits and the potential of quantum computing. So that's it. I would be happy to take your questions now, and if you have any feedback, my Twitter handle and my email address are up there as well. Thank you. Thank you, Andreas. We have almost 20 minutes for Q&A. If you're leaving now, please do so very quietly — and if you can avoid it, just don't. Okay, Q&A, you know the game: there are eight microphones in this room, so queue up behind them and we will do our best to get everyone sorted out sequentially. We will start with a question from the internet. Thank you. Do you have information about the energy consumption of a quantum computer relative to its computing power? Yeah, that's an interesting point. For superconducting quantum computers there are several costs involved. Right now the biggest cost is probably keeping the system cold: as I said, you need very low temperatures, 10 or 20 millikelvin, and to achieve that you need the so-called dilution cryostat. These systems consume a lot of energy, and also materials like helium mixtures, which are expensive even if not exactly rare right now. I think that would be the biggest factor in terms of energy use — I honestly don't know in much detail.
I mean, the manipulation of the qubit system is done via microwaves, and the power that goes into the system is very small compared to any of the power that you use for cooling the system. I would say for the foreseeable future, the power consumption should be dominated by the cooling and the setup cost and the cost of the electronics as well. So the classical electronics that controls the qubit, which can also be quite extensive for a large system. So the qubit chip itself should be really negligible in terms of energy consumption. Thank you. Microphone number one, please. Hello. I have a question in regards to quantum simulation. So I would have thought that with 53 qubits, there would already be something interesting to do since I think they are border the limit for more or less exact quantum chemistry calculations on classical computers is that there are 10 to 20 particles. So is there a more complicated relation from particles to qubits that's missing here or what's the problem? Yeah. So in the paper, I couldn't find an exact reason why they choose this problem. I think there are probably two aspects. One is that you don't have in the system the arbitrary qubit control, so to say, so you cannot run any Hamiltonian or quantum algorithm that you want. You are limited in terms of connectivity. So it's possible that they were not able to run any quantum algorithm for simulation, which was not easy to run also on a classical system. But I'm really not sure why they didn't. I think just if they would have had this chance to do a quantum simulation, they would probably have done that instead because that's, of course, more impressive than randomization or randomized algorithms. Because they didn't do it, I think it was just probably too complicated or not possible to realize on the system. Yeah. Okay, so this is. But again, I don't know for sure here. Thank you. Yes, and also speaking as a sometimes quantum chemist, you can't directly map qubits to atoms. They're not two level systems. And you don't, I mean, you usually also simulate electrons and not just the atoms. But I'm not a speaker. We can discuss later maybe. Microphone number two, please. Hi. Thanks. Can you compare this classic or general quantum computer to the one by D-Wave? It's one of the quantum computers by AWS offered. They have 2000 qubits or something, they say. Yeah, that's a really interesting question. So the D-Wave system is the so-called adiabatic quantum computer, to my knowledge. So in this case, the computation works a bit differently. It's with the normal, with this quantum computer that Google produced, you have a gate sequence that you run on your input qubits and then you get a result that you read out. With the D-Wave system, it's more that you engineer like in Hamiltonian, which also consists of local interactions between different qubits. And then you slowly change this Hamiltonian in order to change the ground state of the system to a solution of a problem that you're looking for. So it's a different approach to quantum computation. They also claimed that they can achieve, or that they achieve a quantum supremacy, I think, in a different way for an optimization problem. But to my knowledge, the proof they have is less rigid probably than what Google group produced here. But again, I'm not an expert on adiabatic quantum computing, so I'm more like a gate-based person. 
So yeah, I think the proof that here Google showed is more convincing in terms of like reproducibility and really like the proof that you're actually doing something that cannot be done on a classic computer. Thank you. Yeah. D-Wave will see that differently, I think, though. Yeah. All right. Let's go to the back. Number seven, please. Hello? Seven? You just waved to me? Hello. I was reading that earlier this year IBM released the first commercial. Q1 system or whatever the name is. And you were mentioning before to keep our expectation down to earth. So my question is, what kind of commercial expectations is IBM actually creating? So I spoke to some companies here in Germany that are collaborating with IBM or D-Wave or Google as well and ask what they're actually doing with the quantum computers they are the company's offer. And I think the answer is that right now a lot of commercially a lot of companies are investigating this as something that could potentially be very useful or very relevant in five to ten years. So they want to get some experience and they want to start collaborating. I don't think at least I don't know any reproduction use of these systems where the quantum computer would do some calculations that would not be doable on a classical system. But again I don't have a full overview of that. I think now it's mostly for experimentation and for getting to know these systems. I think the companies or most of the customers there probably expect that in five years or ten years the systems will really be powerful enough to do some useful computations with them as well. Thanks. All right. The internet, please. With the quantum computer you can calculate things in parallel but there is this reversibility requirement. So how much faster is the quantum computer at the end of the day? Yeah, it's true so that if you want to... If you want to realize classical algorithm you have to do it in a reversible way. But to my knowledge you can from an efficiency perspective implement any classical non-reversible algorithm as a reversible algorithm without loss in complexity. So you can have also like for reversible computation you have universal gates like the control not gate that you can use to express any logic function that you require. You might need some additional qubits compared to the amount of the classical bits that you need for the computation. But in principle there's nothing that keeps you from implementing any classical function on a quantum computer. In terms of actual run time of course it depends on how fast you can run individual operations. So right now a single qubit operation for example on this Google machine takes about I think 20 to 40 nanoseconds. So in that sense the quantum computers are probably much slower than classical computers. But the idea is anyway that you do only really the necessary computations that you can't do on a classical machine on a quantum computer and anything else you can do on a normal classical system. So the quantum process in this sense is only like a co-processor like a GPU in that sense I would say. All right microphone number four please. On the slide that shows Richard Feynman you said that quantum computers were invented to simulate quantum systems and can you please elaborate on that? Yeah so I don't have the link to the lecture here unfortunately the link is broken but you can find that online. It's a 1982 lecture from Feynman where he discusses like how you would actually go about simulating a quantum system. 
Because, as we have seen, if you want to simulate a full quantum system you need to simulate the density matrix of the system, and that takes an amount of memory and computation that is exponential in the number of qubits or quantum degrees of freedom you want to simulate. With a classical or Turing machine you cannot do that efficiently, because every time you add a single qubit you basically quadruple your computational requirement. And that is really where the idea came from, I think: Feynman thought about a computing system that would itself use quantum mechanics in order to do these kinds of simulations, because he probably saw that for large quantum systems it would never be possible — at least with our current understanding of classical computing — to run a simulation of a quantum system on a classical computer in an efficient way. Does that answer the question? Okay. All right, microphone 8, please. As a physicist who is now doing analog circuit design, I am wondering why all the presentations about quantum computers always use the states 0 and 1 and not multiple states. Is that a fundamental limitation, or just a simplification for the sake of the presentation? So you mean why we don't use higher qubit states, or... Multivalued logic, or even continuous states. In principle the quantum bits we are using are not really two-level systems — there is not only level 0 and 1 but also levels 2, 3 and so on. You could use them, of course, but the computational power of the system scales as the number of levels raised to the power of the number of qubits, m to the power of n, so adding another level only changes things by a smaller factor than adding another qubit. It is therefore usually not very interesting to add more levels; instead you just add more qubits to your system. As for continuous-variable quantum computation, I think there are some use cases where it might outperform digital quantum computers, especially if you can engineer your system to mimic the Hamiltonian of the system you want to simulate. In those cases it makes a lot of sense; in other cases, where you want to run a general quantum computation, a digital quantum computer is probably the best solution. And I would add that you can also run a continuous simulation of a quantum system on such a gate-based system with about the same order of complexity, I would say. Does that answer the question? I think I deluded myself into understanding that the off-diagonal elements of the density matrix grow much faster than the number of states on the diagonal. I guess you could say it like that, yeah — I have to think about it. All right, number three, please. What do you have to say about the skepticism of people like Gil Kalai, who claim that inherent noise will be a fundamental problem in scaling these quantum computers? I think it is a valid concern. As of today we have not demonstrated error correction even for a single qubit. There are some first experiments, for example by the Schoelkopf lab in Yale, that showed some of the elements of error correction for a single-qubit system, but we have not even managed to keep a single qubit alive indefinitely. So that is why I would say it is an open question — it is a valid criticism.
I think the next five years we'll show if we are actually able to run this quantum errors and if our error models themselves are correct because they're only correct for certain errors or if there's anything else that keeps us from like building a large scale system. So I think it's a totally valid point. Microphone five please. There has been a study on factorizing on adiabatic machines which requires log squared n qubits while sure requires n log n. But as the adiabatic systems have much higher qubit numbers they were able to factorize on these machines much larger numbers than on the normal devices and that's something that never shows up in the discussion. Do you want to comment on that? Have you read the study? What do you think? Adiabatic machines bogus or is it a worthwhile result? I'm not, as I said, like an expert on adiabatic quantum computing. I know that there were some like studies or investigations of the D-Wave system. I haven't read this particular study about factorization. I think adiabatic quantum computing is a valid approach as well to quantum computing. I'm not just not sure if currently the results were like shown with the same amount of like rigidity or like rigid proofs like for the gate-based quantum computer. But I really would have to look at the study to see that. Can you maybe quickly say the authors so it's on the record? If your mic is still on number five. Sorry, I don't... Okay, no problem. Thank you. Sorry. But yeah, I don't think adiabatic quantum computing is like... I think adiabatic quantum computing is a valid choice or valid approach for doing quantum computation as well. I can search for the authors later and give it to you. Okay. Okay, that would be great. Thank you. Thank you. Microphone 4, please. What do you say about IBM's claim that Google's supremacy claim is invalid because the problem was not really hard? Yeah. So basically, IBM, I think, said, okay, if you do some optimizations on the way you simulate the systems, then you can reduce this computation time from 10,000 years to like maybe a few hours or so. I think it's of course a valid... It might be a valid claim. I don't know if it really invalidates the result because as I said, like the computational power of the classical systems, it will also increase in the coming years. Right now, okay, you could say then maybe if we haven't achieved quantum supremacy in regards to like the 2019 hardware, then maybe we should just look at the 2015 hardware and then we can say, okay, there probably we achieved that. In any case, I think the most... What's most impressive about this result for me is not like if we are really in the supremacy regime or maybe not. It's really the amount of the degree of controllability of the qubit system that this group has achieved. I think that's really the important point here regardless of whether they actually achieved the supremacy or not because it shows that these kind of systems seem to be a good architecture choice for building larger scale quantum processors. This alone is very valuable, I think, to guide the future research direction regardless of whether this is actually... They achieved this or not. But yeah, I can understand of course the criticism. One thing, the article is called Quantum annealing for prime factorization appeared in Nature in December 18. Authors are John, Britt, Mckeskey, Humble and Case. Okay, great. We'll have a look at that again. Thanks. All right. Microphone 6, do you have a short question? Yeah, hopefully. 
It is known that it is not very easy to understand how large quantum superposition goes into microscopic states or into microscopic physical description. So apparently there are a couple of things not understood. So is there anything you know about when you go 2,000, 10,000 million qubits? Could you expect the quantum behavior to break down? Are there any fundamental argument that this will not happen or is this not a problem considered recently? Okay. I'm not sure if I fully understand the question. It's mostly about, like if you say like quantum mechanics has some like scale variance so that if you go to a certain scale then sometimes at some point you have like irreversibility or like something like that. I mean I think there are large quantum systems that occur naturally. I don't know, like Bose-Einstein condensate for example has a lot of degrees of freedoms that are not controlled of course but that are also quantum mechanical and there it seems to work. So personally I would think that there's no such limit but I mean who knows, it's like that's why we do like experimental physics. So we will see if we reach that. But from the theory of quantum mechanics right now there's no indication that there should be such a limit to my knowledge. All right, so maybe we will see you again in five years. Yes. So please thank Andreas once again. Thanks.
|
Five years ago I spoke about my work in quantum computing, building and running a tiny two qubit processor. A few weeks ago, Google announced a potentially groundbreaking result achieved with a 53 qubit quantum processor. I will therefore review the state of experimental quantum computing and discuss the progress we made in the last 5 years. I will explain quantum supremacy, surface code architecture and superconducting quantum processors and show which challenges we still have to overcome to build large scale quantum computers. We will first dive into the basics of quantum computing and learn about quantum gates, fidelities, error correction and qubit architecture. We will then go through Google’s experiment and try to understand what they actually did and why it matters. We will then see what else we need to build a useful quantum computer, and discuss when that might happen.
|
10.5446/53114 (DOI)
|
Thank you very much. OK. OK, this is not just my talk. This talk has a history. I have a co-author. Martin Dürrenkampo is a colleague of mine who could not come here. But so I will give this talk by myself. But we worked together over the year on this talk because this talk has a history. And it's a bit of a history of Scientists for Future, which is an association of scientists that evolved this year, basically, with the movement of students and people of Fridays for Future. And they were questioned. You know, they took to the street and said, hey, we want a future. We want things to change. And they demanded politics to change. And this did not directly happen. But it was questioned. So some, well, professional politicians said, well, they should leave it to the professionals. And that's the point where actually a lot of scientists, and a lot of scientists I know, were all really mad at this, because they've been doing science and research for so many years. And we've been, I mean, I don't know if you saw the presentations before. How much effort is being put into this, into this research to make it better and better, better models. And I will show you, this presentation is about the results, the outcome of this, and what this means. And still nothing changes. So they write papers. They write reports. And, well, nothing happens. And so the only thing we could say was basically, hey, they are right. Things need to change. And that's why we got together and formed this association. So there's a charter on this which says, basically, what we do is we go out and we try to inform people on the research, on the state of the art of the research, and how things are currently. And that's why I'm here. So that's exactly what I'm doing here. So we go out to wherever. And you can come to us and ask for presentations, for discussions to get informed on this topic, on what does this climate change issue actually mean. And this is the disclaimer now. I can tell you this is not a good mood talk. OK? So this is a bit because the topic is very serious. So it's a bit different than I usually do it. So in the end, it looks a little bit better than the beginning. But nevertheless, so where are we currently? So this current graph, and I will say this is all not researched by myself, this is mainly from IPCC reports. And this is from the report from last year, the 1.5 degree report, which was basically done or put together because in the Paris Agreement in 2015, it was said, well, the world, or the governments of the world, want to keep the climate change, the temperature change, to well below two degrees, if possible, to 1.5 degrees. And the question was, hey, is this actually possible? Can we make that? What do we need to do to do this? And so there have been a lot of questions about this. And a lot of research, a huge number of publications came out on this topic. Hey, what does it mean to have a 1.5 degrees warmer Earth? What does it mean to have a two degrees warmer Earth? And is it actually possible to limit climate change to these temperatures? And this is the current state. So this is, I really love this graph because it contains a lot of different things.
So what we're talking about, so we have a pre-industrial period that we use as a reference. So that's a period from 1850 to 1900 here. This is the reference period where we say, OK, this was pre-industrial temperature. And everything afterwards, the changes from that are all referring to this. So 1.5 degrees or so would be the difference from this period. And then what climate does, it's not always constant. So every year you have, sometimes it's a bit warmer and sometimes a bit colder. So what you need to do is you need to average. This is quite important because then, for example, and there is this year of, where is it here, 1998. There was a very warm year. And afterwards, a lot of, for a long period, there weren't so many warm years. And then there were some people saying, oh, yeah, look, the temperature does not change anymore. So everything's fine now. And this, of course, is not true because you have to look at average period. So the red line, this is the so-called floating average. So you always average with the years. And this gives us about the current temperature change. And so this would be like a typical climate period, which is like 20 years. You usually look at 20 years. But the problem we have currently is that the change is so drastic that looking for 20 years, then you would always have to go far back to periods when there was a big difference to today. So the last changes in this report were taken from this 2006 to 2015 period. And the extrapolation from this was basically that in 2017, we probably reached a one degree increase in temperature on a global scale. This is not always the same in different areas. It might be warmer and different. It's colder, but that's the global increase. So this is where we are currently. So we have an increase from 280 parts per million in CO2 to about 410. This is changing. This is not constant, but it's going up and down. But it's about 410 in 2019. We have a strong increase in temperature globally, but the biggest increase is actually in the winters and the Arctic. And there's current anthropothenic surplus is about 40 gigatons per year. So 40 gigatons, that was actually current. That was this already gone, because we are now a bit higher than that. But this was the average period from 2011 to 2017. OK, now I go directly into this IPCC report from last year, this is 2018. In chapter two, there's this table. I love this table. This table contains a lot of climate science, because it goes into how much actually can we further emit to reach which temperature change? So this would be here, the 1.5 degrees Celsius. This would be 2 degrees Celsius. And then you have probabilities. How likely you can avoid this, or is it going to come? So if you want to avoid with a 2 sigma, that is like a 67% probability to go over 1.5 degrees, we have 420 gigatons to emit further, additionally into the atmosphere, 420, as you remember, 40 gigatons per year. And this was, I think this was since this is from last year. So this refers to basically 2017. So this already two years gone since then. And it has not decreased, but increased actually. And then there's a lot of different. If you go for a 50% chance, you can say, OK, it's a bit more. We can emit. And if you go with what we just want to have in one third chance, then we actually would have double the amount we could emit. For 2 degrees Celsius, this is far more. So it's more than 1,000 gigatons of CO2 equivalence to emit. 
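As a quick sanity check on the figures quoted from that budget table, one can simply divide the remaining budgets by the roughly 40 gigatons of CO2 emitted per year. A small sketch, using only numbers mentioned in the talk:

```python
# Rough arithmetic on the quoted budget figures: years left at a constant
# ~40 Gt CO2 per year, the emission rate mentioned in the talk.
ANNUAL_EMISSIONS_GT = 40
BUDGETS_GT = {
    "1.5 degrees, ~67% chance": 420,   # figure quoted from the IPCC table
    "2 degrees": 1000,                 # "more than 1,000 gigatons" in the talk
}

for label, budget in BUDGETS_GT.items():
    years = budget / ANNUAL_EMISSIONS_GT
    print(f"{label}: roughly {years:.1f} years at constant emissions")
```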
Now, there are, of course, a lot of uncertainties, all kinds of uncertainties that go with that. And one is, for example, the so-called Earth System feedback. That is, the Earth itself responds to this emission and also emits CO2 and also methane. And this has also a long-term impact. And then there are further uncertainties. And these are, I mean, this has been also part in the previous talks that, of course, climate models do have uncertainties. Nevertheless, if we take this into account and say, OK, we want to avoid 1.5 degrees Celsius increase in temperature with a 2-thirds probability, they call this likely in this report. So it's likely that we are not exceeding 1.5 degrees. We have 420 gigatons surplus CO2 to emit into the atmosphere in total. 100 gigatons will be more or less gobbled up by the Earth response. This is actually, this was in the report. Current research shows that this is likely a bit too conservative, so it's probably more. But, well, OK. So our emission is about 40 gigatons. So the CO2 emissions by coal power plants that are running was at that period 200 gigatons CO2. So they're built, they're running, 200 gigatons by that. And then we have 100 to 150 further gigatons for plant coal power plants or those under construction. If we count this together, we have already exceeded the 420 gigatons CO2. And this is, of course, one reason why these coal power plants have to be shut down. But they're, of course, not the only source. They're only one source of CO2 emissions we have in the atmosphere. And to make this clear what this means, and this is what I go into this now, what does this mean, this 1.5 degree difference to 2 degree? And there's been a lot of research on that. OK. Now the first one is, for example, first example is here on the Arctic. I mean, there's been a lot of talks about ice bears and so on. But, of course, this is not the only thing to care about. It is quite crucial that there is ice there, also because the ice, we had this before in the previous talks, that the ice reflects the sun. And the less this reflection is there, the more warmth is being taken up by the earth again. So we have a feedback system there. Also, of course, because of all the, it's not just the ice bear, there's a whole biosphere there. And this biosphere has to somehow survive. Now, the likeness of an ice-free Arctic is this graph here, of the comparing 1.5 degrees. This is this one, or these two studies. These are two studies here, one with the dotted line and one with the full line, and two degrees. And this is how likely is this in a certain period of time that this happens. And so you can see, if we consider again that it's likely, it's about 45 years it takes for at a 1.5 degree Celsius increase, that we have an ice-free Arctic. So this is actually possible with this increase, but it's like once every 45 years. If we go for a two-degree increase, this is once every 10, or even with the other study, it's more like once every five years that this is happening. And this is quite frequent, and this of course causes quite some impact on everything that lives there. Now, this is ice and Arctic, and there's not so many people living in the Arctic, so there's a lot of further studies that have been done. And this, for example, for Africa, I will only, because of limited of time, I can do this talk for many hours, actually. I will only go on to this example here, extreme heat with record temperatures over close to 50 degrees, and actually even increasing that. 
That has been there in 2009, 2010, in the month from December to February in Africa. And these are temperatures where people cannot be outside anymore at these temperatures. It's just too hot. And then I have, they're showing these curves, and these are probability density functions. So these curves show how often, like each of these, I don't know, Bob's here are showing how often does this happen. And so here we have current, the current stages is the temperature from 2006 to 2015. That's what they call current, so there is already this increase in temperature. Under these conditions, this happens every, well, maybe twice every 100 years. If we go for 1.5 degrees increase, that's the blue line, we can see this is going to happen every more or less third year. If we go for 2 degrees, this is going to happen even more often. So this is, for people living there, it's getting hard to live there. It's just the temperature, only that. If we go for, for example, for Australia as an example, that we have the same, it's always these curves here, extreme warm temperatures. Well, that's very easy. But in Australia, what's also important there, it's the temperature of the water because of the corals that live there and hot water leads to coral bleaching. So basically the corals die. And this is of course, as you've seen, the temperature is not always every year the same, but there was this hot summer and an extreme coral bleaching here, temperatures situation here in the summer in 2012, 2013. And how often does this happen? And we can already see here, this would be the natural, so this would be the pre-industrial curve here, where this very warm temperature has hardly ever happened. Well, we can see here already this would be every third year, currently, would be every second year in 1.5 degree scenario and probably two of three years in a 2 degree scenario. And this, well, this means I will go into this later, this is an example for Europe, how often things happen. I don't know if you, I always remember that one because I, well, it was a lot outside during that period. There was a very warm summer we had in 2003 and a lot of people died of that because of the heat. And I remember being in Cologne at the time and laying outside at 40 degrees and I was ill and so I had 40 degrees, so outside was 40 degrees, was very warm. And so naturally this could be, this can happen, it could happen, like once every 100 years. Currently we have like a situation where this would be like every fourth year and this increases then to more than 59% of all the years of 2 degrees Celsius. So we're gonna get hot summers. This is the prediction of this study here. Well, what does this mean? Well, and now I go back to the IPCC reports and the IPCC reports are very diplomatic always. And so they have reasons for concern and we are all very concerned. This sounds very nice, but of course there's some background to this. So they have in the summary of this IPCC report from 2018, they have five reasons for concern. That's one is unique and threatened systems like corals, extreme weather events. And you can see that does make quite a difference from now and going to warmer temperatures. Up here we have the two degrees so you can see between 1.5 degrees and two degrees that does make quite a difference. Distribution of impacts, that's actually basically this means that those who suffer most have contributed less. And that's of course bad because those who contributed most don't suffer as much and then they won't change. 
And that's a problem, that's why they're concerned on this one. Global aggregate impacts is basically money impact. So how much does this cost in the end to cope with the outcome of this? And well, it costs billions of dollars in the end to have a difference between 1.5 and two degrees. Every year just to cope with the impacts. And then we have large-scale singular events that could be something like de-icing of Greenland or something like that. Well, when it's gone, it's just a singular event because then it's gone. This is very abstract, so they get a bit closer to that. So warm water corals is basically, are they having already a problem? Well, I will show this later, well, they expect about 90% will die off at 1.5 degrees. Well, they will die out at two degrees, most likely certain. And this is, of course, this is important for nourishment and for people who live from the sea, from whatever they fished out of the sea, because in corals there's a lot of, it's like the childhood bed of a lot of fish. So they have quite, we do get quite an impact in the end on fishery. This is why this is so red. Mangroves also get an impact on that. There's about the same story, so a lot of small fish grow up there. Well, the Arctic region is getting increasing problems with the ice. Well, these are all kind, I will go into this later. Coastal flooding will increase from 1.5 to two degrees. This is, well, flooding in rivers and so on. Well, and we will get some more heat related morbidity. Now, there's been a new report this year on land use. And this has been even more into this now different scale. Please watch that. So, where am I here? So the scale here is going up to five degrees. And if you look for that, it's a bit different. So the lower ones, 1.5 and two degrees are in there. But problems they see is a dry land scarcity and water scarcity and dry lands. So that's desertification, a lot of that. Soil erosion, which is related to that. Vegetation loss is also related to that. Vegetation loss is sort of, yeah, I will come to this later. The wildfire damage, we can see that already today. I mean, in the news, like now it's Australia and Chile, but before it was more California and so on. So this will go on. This is no coincidence that this is happening. We have permafrost degradation. We have tropical crop yield decline. Well, crop yield is, of course, that hurts because, well, this leads, of course, in the end to food instabilities. And we can see it does make quite a difference already between 1.5 and two degrees, but of course it can get worse. And they also, they are more specific on that. What they mean with this, for example, in wildfire damage, they expect an increase in fire and weather season currently. Over 50% increase in the Mediterranean area if it gets above two degrees. And, well, if we go to four or five degrees, this will they expect hundreds of million, at least, or over 100 million people additionally exposed. In terms of food supply instabilities, well, what we already see is, well, we have spikes in the food price. This is not so important for us usually, but of course for people in the world that don't have much money and we still have almost, it's not quite one billion, billion people in the world that live of less than $2 a day. For such people, this is, of course, quite important. If we go closer to two degrees, they do expect periodic food shocks across regions. So basically that there will be situations where there will be no food available anymore. 
If we go up to four or five degrees, this would lead to sustained food supply distribution problems on a global scale. So this depends on what kind of scenario we are calculating. I will go into this later. One additional thing is also to think of that we are not only talking about the temperature. Also, the water of the oceans take up the CO2. They take up a lot of the CO2 that we blow into the air. And this leads to an acidification. And so the pH value of the oceans, they decrease. And this has an impact on a lot of animals that build up calcium carbonate. So shells, basically. So all kinds of bivalves, all kinds of cancers and all that. They depend on building up this calcium carbonate. And if they are not able to do this anymore, of course, they don't grow anymore. And they are pretty much in the beginning of a food chain in the oceans. Now, I was reading this 2018 report and somewhere there on page 223, I found them this here, where they basically say, OK, we do have this impact. And there is this aruginite saturation, which is, well, basically, that's a point where this buildup for specific animals is not possible anymore. At this saturation point, because the chemical reaction does not work anymore. And this depends on the temperature. This depends on the pressure. And the higher the pressure is, the earlier this point is reached. Also, the colder the temperature is. And so this is what you can see here on the right-hand side. They investigated this mainly from the pole regions on. And so that this point will reach the surface of the ocean from 2030 onwards, so that all these animals on the surface of the ocean are not building in the polar regions, will have problems built to build up actually their shells. This has two different impacts. Of course, one impact is they don't grow anymore. This has a big issue on the food chain in the oceans. The second impact is actually that this was one of the carbon sinks. They took CO2 and with calcium, they build up these shells and they die off at some point and they sink to the ground and, well, the CO2 is gone. Well, if this is not happening anymore, of course, this type of carbon sink does not work anymore. Okay. Now, I've talked about these are further. I will go skip through this quickly. These are all kinds of things that happen. And so in this 1.5 degree report, they compared for a lot of regions what will happen. So for 1.5 degree warming or less of 1.5 to 2 degrees and 2 to 3 degrees. So, and there's all kinds of things, this is a big table in this report in chapter 3. Read these reports, please. Read these reports. They're good. They're actually scientifically good. I mean, in terms of if you do science, it's really, really good because they have so much literature and so many cross-references and how they do it. To be very sure to say, okay, this is what we can say with this certainty. This is very, very good science, I think at least. Okay. So I will not go into all of this, but it has to all kinds of regions, severe impacts, like Southeast Asia, for example, they have this risk of increased flooding and they have increased precipitation events and yes. But well, I think the most significant of this is the significant risk of crop yield reductions, which is avoided if we stay below 1.5 degrees. If we're not staying below 1.5 degrees decrease, they say here, they estimate 1.3 decline in per capita per crop production per year. One third less food. That's not good. And if we go even higher, well, this is getting worse. 
For small islands, where there's actually the small islands are well known, of course, you know, the sea level is rising, so they have a problem and actually the main problem they have is not that just the water is going over the island, but that the salty water is rising and intruding the freshwater reserves they have. So they get a problem with freshwater. And well, this is already a problem for them for 1.5 degrees. For two degrees, it's like a very severe problem. And that's why they are pushing so much for the 1.5 degree change maximum. And the Mediterranean, this is very close to where we are currently. So they expect a reduction of run of water, so this is in rivers, of about 9%, is very likely, but there's a range given as most of the time they have this. So there is already a risk of water deficit at 1.5 degrees increase in temperature. If we increase further, we reach about as to up to two degrees, we have about 17% less water in the rivers. This is of course not good. I mean, especially, I mean, okay, in Germany for example, there's a lot of food coming from Spain and well, they do already have a problem with their crops, with water for the crops. And this is getting worse. West Africa and Sahel, well, there is the prediction, well, there's a prediction of well, less suitable land for mice production by 1.5 degrees already, by 40% less land. 40% that's a lot. It's not the region where people already have huge surplus in food every day. So there is an increase in risk for under nutrition already for 1.5 degrees aim. If we increase, well, this is just getting absurd in a way, it says higher risk for under nutrition of course, because it's going to get worse. Apart from this, that it's too hot to go outside anyways. Well for Southern Africa, it's similar, it's not as drastic. So there is already the high risk for under nutrition in communities, dependent on dryland especially, so savanna areas which are rather dry. And this is getting worse again. And the tropics also, there's a risk to tropical crop yields, we already heard that. And on the other side, it's also there these extreme heat waves they're going to face. So this was like a table in there with a lot of, well, details of what they expect from 1.5 to 2 degrees. Now what scientists are a bit strange sometimes because they're also then doing their science and they look at different things. And one thing they are actually now worried about, and this is actually, it is worrisome, very worrisome, is that actually, well, climate change has been always there. Because there has been like a cycle, and this is the so-called glacial interglacial cycle, the earth has been going through. This has to do with the position to the sun and a lot of feedback systems that kick in. If you cool the earth, you have more ice build up, then you have more sun being reflected again, you have less energy that stays on the surface of the earth, and then it gets colder and colder and colder up to a certain point where this changes again and goes back. And this has been going on for hundreds of years. And the point is now we've left the cycle. And this is the part that's shown up here that basically we are now on a completely different trajectory, and that's a trajectory that is we are heating this up, and the earth is responding and it's also heating itself up. And so we are on a path, and it's not quite clear, so they build this, they show this graph here. 
There is actually the possibility that the earth will go on this path to heat itself up without us even. And this is called tipping points. So there are several things that happen there. That is for example the melting or thawing of the permafrost, there is methane hydrates in the ocean storage that might be triggered to evolve. There will be a reduction of CO2 intake in the oceans. Currently a lot of CO2 is taken into the oceans, but this will get less and less, the more saturation comes in there. We have a die off of rainforest. So last summer we've seen that there are a lot of rainforests burning in the Amazons, but this will also happen by the increase of temperature without human impact. So in this paper here by Stefan and some others, they estimate of about rainforest reduction of up to 40% by an increase of up to 1.5 degrees anyways. So we're gonna lose rainforest, a lot of rainforest already like that. We have a die off in the boreal forest. This was the summer in Siberia. Well they just don't die off, they get burned and there are other reasons why they die. So there's a lot of CO2 going to be emitted from forests where carbon is stored currently into the atmosphere. We have a reduction of ice and snow, so there's a more reflection of the sun, less reflection of the sun into the atmosphere again, and we have a reduction of ice volume, so we have an increase in sea level. And this whole thing, this is like a communicating system. And one thing triggered will trigger something else. This sometimes goes by circulations, also by ocean circulations and so on. So one thing can trigger the next thing and this will trigger, might trigger the next thing and this will go on. And if this happens at a certain time, at a certain intensity, then we will not have as human beings with the current technology we have, we will not be able to stop that. And that's what they are worried about, so this climate scientists, that we should not get these tipping points to go too strong. They are already, these are processes that can be already seen, but currently they are on a level where it's bad, there was actually four weeks ago this paper published in Nature, a change where they said, well, we might be wrong with our estimation here with this 100 gigatons because these tipping points are worse than we thought, so we are actually further, they are more on the upper limits of the bounds where we thought it would be. Yes, so these are very worrisome, well, situations. Now this should trigger us to do something about it and that's actually the point. So things need to be done, but up to now, well, things have not been done, but this is like the climate greenhouse gas emissions curves from 1970 to 2010 and we can see that not only the curve has been increasing more or less the whole period, but also the increase has increased from 2000 on. And the main increase here is by CO2. The other gases here is methane, there's nitrogen gases up here and well, there's CO2 from, well, agriculture, forestry and land use. This is here, they are more or less constant, sometimes there are spikes like this, most likely this is like rainforest burning. The only year and the recent years where there has been a decrease also in the CO2 emissions was in the economic crisis in 2008, where there actually was a decrease by 4%. Now nevertheless, the scientists went on and said, okay, let's calculate. 
How can we manage to get to 1.5 degrees and there are different scenarios, some say, okay, let's go to get to 1.5 degrees, some say, okay, maybe we need to get higher to a higher temperature and later on change that again to get to 1.5 degrees. So there are all kinds of scenarios that you can calculate. Now if we say we use, this is CDR, I will go this carbon dioxide removal, we don't have that and we say we use an exponential curve, each year we reduce the same percentage of our emissions and we want to get to 1.5 degrees and this was the curve from 2018. So we should have started this year to reduce our CO2 emission by 18% each year globally, 18% if we want to reach 1.5 degrees. If we want to reach 2 degrees, it's still 5% each year, 5%. If we do this for Germany, by this, and I think this is the most important figure, it's not as important, like politicians always say, oh yeah, by this year we want to reduce our emissions by 50% or something like that, but this does not tell you what happens like by 2030, what happens until 2030. So it's very important to keep in mind that it's like we have a budget and this is actually from a paper, it's a global carbon budget, so they publish each year how much budget do we have left to emit. And so if we take this budget and say, okay, this is our budget, how are we going to spend going to spend our carbon budget? And this is something that we should ask all the politicians, what do you think is your budget, why do you think this is your budget? And there's been actually an article by climate scientists, Stefan Rammstof in the Spiegel, we said okay, let's estimate we have about 7.3 gigatons CO2 overall budget in Germany and we could say if we want to reach 1.5 degrees, this would mean we continue our share of emissions which would be in Germany, which is like double the average of the rest of the world, and we say okay, we have the right to blow out in the air twice as much as the average person in the world, then we still would have 1.5 gigatons CO2 in Germany to emit. And how are we going to do that? That's the question. Do we have this in mind? Of course we can calculate this down to each person in Germany, so we end up with about 40 tons per person. So each of us can also think of this, I have 90 tons here, sorry, 90 tons that is to emit, how am I going to spend this until the end of my life? Now if we go back to this report, then we have different scenarios and as you can see there are different ways of doing that and these are different economic scenarios. So and you can see already that most of these scenarios do have negative emission at some points. Maybe all of them have. Some of them include carbon capture and storage here as shown as BEX and depending on what kind of economic scenario you go for, this is more or less and here it's like up to about 20 gigatons per year to be stored in the ground. The green part here, agriculture, forestry and land use and other land use, this also of course you can reduce CO2 by planting trees. This is actually a very efficient way of doing that but of course the land area is limited and this is also true for other things and of course the land area we can use is decreasing due to climate change. You could always, should always keep this in mind. Now the base of all these scenarios, they put this again into a table and I put some pictures to that so they said if we want to reach to 1.5 degrees what we have to do, we need to wrap it and profound near term decarbonization of our energy supply. 
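Before going on, the 18% and 5% per year figures quoted a moment ago can be related to the budget numbers from earlier. The sketch below is my own reconstruction, not the speaker's actual calculation: with an immediate, constant fractional cut r, cumulative future emissions form a geometric series summing to roughly e0 / r, so staying inside a budget B needs a cut of about e0 / B per year.

```python
# Reconstruction only, with assumed inputs: required constant annual cut to
# stay within a remaining budget, if reductions start immediately.
def required_annual_cut(e0_gt_per_year, budget_gt):
    return e0_gt_per_year / budget_gt

E0 = 42  # Gt CO2/yr ("about 40, and it has increased")
# Assumed remaining budgets: 420 Gt minus the ~100 Gt Earth-system response
# mentioned earlier, minus a couple of years already used up, is roughly 235 Gt;
# for 2 degrees something like 900 Gt remains.
for label, budget in [("1.5 degrees (~235 Gt left)", 235), ("2 degrees (~900 Gt left)", 900)]:
    print(f"{label}: cut about {required_annual_cut(E0, budget):.0%} per year")
```

With these assumed inputs the result lands close to the 18% and 5% per year mentioned in the talk, but the underlying study may derive them differently.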
So basically we have to be very, very quick and change our energy supply. This has to be, that's the first part. The second part, we need greater mitigation efforts on the demand side so we have to use less and get smaller with things. Third part is, well we do have to do this within the next 10 years. So we cannot wait. This is very, very urgent. Well this is actually a table that looks like this bit, sorry for that. So the main thing is that the additional reductions come from CO2 emissions because the other greenhouse gas are already included in the two degree scenarios. We need to invest differently so investment patterns have to change strongly. What we also, the best options actually for 1.5 degree scenarios are the ones that go with the sustainable development because if people don't have food to eat, they don't have the chance to take care of the climate anymore because first they are trying to survive. So we do have to also care about how people can live on this planet. This helps protecting the climate. Well then they say okay we probably have to think of climate, the carbon dioxide removal somehow towards the mid of the century so this has to be implemented now. And what we also have to do is we have to switch from fossil fuels to electricity in the end use of sector. Now CDR, carbon dioxide removal, I will say a bit about that. This is of course agriculture, forestry and land use. That's very easy, planting trees. Then there's becks so you use basically biomass to produce some gas and then you capture the CO2 from burning the gas and press this into ground and carbon capture and storage. Or what you can also do is use direct air capture as where you use these machines so they take CO2 from the air and then you have to store it. And you can see such a machine here, this was like a model at the time so these have been already existing models. So basically this can be taken 1000 tons of CO2 per year. So if we want to go for gigatons then we would have to build millions of these in the end. Problem with that is a bit in this report so basically we have an energy usage of that by 12.9 gigajoules per ton CO2. So basically if we want to use a put down 15 gigatons of CO2 per year by this which was in one of the scenarios we would need about one fourth of the global energy supply only for atmospheric waste management. It's called like this. And the funny thing this was like a professor, we had them at our university here in Altenberg and he gave this presentation and he said, yeah this sounds so crazy but the climate change will hurt you so much this will be done. And Bex that's a different way of doing that with a biogas so the thing is if we want to have that at large scale it requires huge amounts of land use to produce this amount of biogas and the other drawback is of course that you do have to take care of your storage systems to avoid the gas to come out because well CO2 has a higher density than oxygen and it stays on the ground if there's no wind and if people live there you don't have anything to breathe anymore. Now there are of course different sectors. This for the EU for example where the greenhouse gases come from so the main parts of course agriculture, there's transport in the energy industry and there's also other industries and it's important to keep in mind that this is not equal over all different countries but it's also distributed to depend strongly on the income of the people in the countries. 
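The direct-air-capture energy figure quoted above can also be checked with one line of arithmetic. In this sketch the global primary energy supply of roughly 600 EJ per year is my assumption for scale, not a number from the talk; the result is of the same order as the "about one fourth" the speaker mentions, with the exact fraction depending on which supply figure you assume.

```python
# Back-of-envelope check of the direct-air-capture figures quoted in the talk:
# ~12.9 GJ per tonne of CO2, scaled to a scenario removing 15 Gt per year.
GJ_PER_TONNE = 12.9
TONNES_PER_YEAR = 15e9                                   # 15 Gt CO2 per year

energy_ej_per_year = GJ_PER_TONNE * 1e9 * TONNES_PER_YEAR / 1e18  # joules -> EJ
GLOBAL_SUPPLY_EJ = 600.0   # assumed for scale only, not from the talk

print(f"~{energy_ej_per_year:.0f} EJ/yr, i.e. roughly "
      f"{energy_ej_per_year / GLOBAL_SUPPLY_EJ:.0%} of an assumed "
      f"{GLOBAL_SUPPLY_EJ:.0f} EJ/yr global supply")
```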
So the so called high income countries here they have the highest share in the CO2 emissions while the mid so called emerging countries they're almost at the same level now while low income countries they mainly have CO2 emissions here from agriculture and land use. So the question is can we make it to 1.5 degrees? That's a good question so there have been a lot of studies like for Germany and the EU either on like energy infrastructure for example or the whole system. There was one study from this year they looked for 95% CO2 reduction by 2050. There was one study currently just released for the complete EU and greenhouse gas neutral EU by 2050. So obviously technically there is this assumption that this is possible. One main thing of that is that we have to go far more efficient and one thing in that is use electricity because electricity is very efficient in many things. So currently the prime energy consumption in Germany is about 3200 terawatt hours in total and the assumption for 2050 where they have this 100% or 95% reduction. That would be 1300 terawatt hours or the other study was even less than that. That depends a bit on the mixture they use. The reason for that is for example that the efficiency for example of battery driven cars is much higher than those of combustion driven or other methods. So it really depends on which technology you put into use on how good you get. On the EU level it looks a bit like this. So this is their demand and supply today and this would be so the reduction is not quite as large but they still assume that we can reach this type of reduction if we want to. Nevertheless they are not assuming 100% CO2 free but they calculate with negative emissions by agriculture and forestry. So this is actually in these calculations and I really like the one by Rubinius and so on. That's the lower one because they actually calculated completely with storage systems with electricity grids and all that and how much needs to be invested into this. This is a very detailed study, a very good one. So this is actually technically possible. They even calculated this what happens in the so called Dunkelflute, that's a German word for there's no wind and no sun in the winter for a period of time. What happens and that's what all they assume is that we do have a lot of storages for gas and we can use these current strategic storages for gas in the future to store power to gas or gas that's one by electricity there as a backup. So basically, technically this is possible. So to conclude, so the climate system is already at a critical stage. The prospect for a 1.5 degree warmer earth are already very bitter and well the IPCC reports and all the reports there are, all of them go for if you should not exceed two degrees because we have this thing of the tipping points. And several reasons we already have these two degrees. Yeah, this carbon dioxide removal is presented basically this is hard to avoid but there are these critical things concerning carbon capture and storage. And whatever we need to do is we have to act fast and that's the main thing this has to be done very quickly. And I must say I'm very sorry but our government, well, yes. So it is not a technical issue, it is a political one. Yes. Thank you. Bernard, thank you very much. We do have eight minutes for questions. So we have a couple of microphones here in the hall. Please line up over there. We have those eight minutes. I'm sure there will be questions. 
The signal angel is signaling over there that we have a question from the internet. Do you see nuclear power plants as a temporary solution to slow the emission of CO2? And we had quite some discussion on the internet. Someone answered, you need more than 10 years to build new nuclear power plants, and the response was, well, could we get the shut-down ones back on the power line. So is that a realistic scenario in your view? Well, there is actually a current discussion going on, and the issue with that is that it's not that easy to get old power plants back into running, because, well, they have a certain type of lifetime, and if you want to put them back into the system then you somehow would have to exceed that lifetime. And there are of course some safety issues, and if you want to avoid them then you have to put a lot of money and effort into getting them to run, and you also need a lot of time to do that. And so the question is, would this be worth it? And I would say probably there are faster methods to do it. You could do it. There are of course the risks, and I mean after Fukushima and Chernobyl basically we've all seen what the risks are. So I would say it's probably not the best and fastest way to do it. There are other ways that could be worth doing. Okay, then we're going to hop over to microphone number one. Yeah, first I want to thank you for your talk. It was very informative, and yeah, my question is as follows. There was a talk at the university where I studied, in Darmstadt, one and a half years ago, from a person who compared the IPCC predictions with what really happened, with the real temperature increase and the damage which the climate change causes. And what she found out was that the IPCC nearly always underestimated the effect of the temperature increase and what it causes. Have you ever heard of this criticism and do you think this is still the case? I hope not. The issue is of course that the IPCC reports are always very, very carefully taking decisions and are very carefully looking at this, and they are more conservative, and they rather are lower than the actual temperatures in the end, probably because there's of course also a lot of pressure, political pressure on them. So if they would predict something and they would over-predict, then people would immediately come and say, hey, you're panicking, and so on, and so that's why it is most likely that they try to be as accurate as possible but they rather choose the lower estimate. Yeah, that's what she was saying as well. In the end it's a summary for policy makers, I showed some slides from that. It's actually voted on by the governmental agents, so they bring this into a governmental round of the UN entity and the governments actually have to approve this, and so that's why it's very, very diplomatic in the terms of phrasing reasons for concern. So people are concerned about all kinds of things. Alright, then we hop over to microphone 2 please. Okay, first thank you for your talk, all good mood is gone now, and if it's mainly a political problem, do you have any idea how we can force politicians to make the right decisions now, because what we are doing at the moment, like protesting and voting, doesn't seem to work? Well, I think actually I'm very happy because I think protesting works, but it does not work in the same way that people who usually take it to the streets think it works.
It puts a lot of pressure on to them but it's one pressure on, they also have pressure from other sides you know and then they look at you know what are my voters and if their voters are not the ones that are on the streets well they might be not as important and so I think the main thing that needs to be done is to go out to the people and going to the street is one way of doing that and talk to the people and talk especially to those who are not there on the streets yet, who are the potential voters of those who think well I don't have to care so much about because these are not my voters and we just have to go out and talk and I think this will put up the pressure together with taking it to the streets and protesting and doing whatever talking to politicians and I mean we have you know Angela Merkel is our chancellor in Germany and she's a physicist I mean she knows I mean she understands all this you know it's not that she doesn't know it's just the pressure from the wrong side yet. Alright and we have time for one last question microphone three please. Yes thank you also very much for my side for the informative talk from the description of the talk I was expecting more on the it said something about the resilience about climate skepticism yeah to be more resilient about their arguments and I was in discussion with many other people also climate skepticists and what they sometimes said they didn't criticize the anthropogenic well they didn't criticize the climate change at all but the anthropogenic part of it and what they say that there is like an increase of solar activity the last decades which increases the temperature and that also like the diagram is like only from 1860 but if you consider like the last millennials there have been higher values of CO2 in the atmosphere but the temperature did not correlate so how do you argue with this these kind of arguments. Yes that's a good one yeah I didn't go into these because they are the sometimes the easy ones but the thing is that there are I did this talk this way because it helps if you go into our climate skeptics say this and they say a lot of different things so you could do a whole talk on what climate skeptics say if you do that then in the end people keep in mind oh yeah this there is some skepticism on this and this is I did a lot of these things because by this now people can go out and say okay this is currently the state of the of the research I did not go into the climate skeptic detailed answers of course there are I mean I can like for example the sun the radiation is already in the climate models the the changes in sun the radiations the variations of the of centuries before are actually being pre-calculated in the climate models currently because only if you are able to run if you if you are able to mimic that in climate models today for today or the past if you're able to do that then you're able to do to run it for the future and this is how climate models work and so all this all these variations are taking in so I'm sorry I'm out time is over but we can talk about this also later on I didn't get too much to the climate skeptics now thank you very much yeah all right we don't have time for any more questions bernard let's see you all thank you very much you
|
The climate crisis already exists and it is going to become worse. Looking at the pure facts of the changing climate, the acidification of the oceans, the slow but steady rise of the sea level and the strengthening Earth response effects, which make things worse, it is hard to stay optimistic about the development of humankind on this planet. This has led to the largest social movement in Germany since the Second World War, fighting for a limitation of climate change to a maximum average temperature increase of 1.5°C. On the other hand, this movement is often disputed. Since the necessary changes are not liked by everyone, the engagement especially of students was attacked, also by politicians, even declaring that they should leave such issues to the professionals. At this point Scientists for Future joined together to support the demands of the students and declare: "they are right". This support is urgently needed. People have many open questions. The necessary changes involve all societies in the world. In Germany, one of the most disputed topics is the field of energy, its generation, distribution and use. Is it actually possible to go for 100% renewable energies? What would this lead to? These are typical questions, which are not easy to answer. Other typical questions are more fundamental, since climate sceptics are increasing in their relevance and their social media outreach. Thus a lot of people encounter questions they cannot answer. This talk is to show the current state of the discussion on climate change and the necessary and possible changes from a scientific perspective. It is to give some typical relevant answers and to foster resilience against climate sceptic questioning. This is one of the main tasks the Scientists for Future are trying to tackle.
|
10.5446/53116 (DOI)
|
We have here a next presentation from a guy called Clemens Schöll, and I have him actually in my pocket here. It's this little man, and he's gonna tell us, or announce actually, what he's going to do and what he did. I'll just leave the stage to him. Yes, keep it, pay attention. Here he is. Clemens. A little bit like a fairy tale, because many wish that the digitalization would be a fairy tale with beautiful, clear relationships. But in the end you see that the thing is not that easy. From one of the exhibitions, an apartment in Berlin. The first two acts were actually metaphorically understood. Those were exhibitions. But now the third act is fully automatic, on a stage. If you paid attention in German lessons, you know the catastrophe is coming in the third act. I'll tell you again what happened in the first two acts. Once upon a time, there was a princess. She had just finished her art studies in Leipzig. So what you just saw was an exclusive preview for an upcoming exhibition. It's a play about the Wohnungsbot, which I'll tell you more about later. In the upcoming scene that I just stopped now, you would have seen the princess realizing that she has to move to Berlin. So let's talk about moving to Berlin. Let's talk about searching for a flat in Berlin. The situation might be similar in a lot of other big cities, but it's hit Berlin especially hard and especially fast recently. What we can see here is a flat viewing. This was a bit out of the norm, because what happened was that someone was offering a decent flat for a reasonable price. Hundreds of people showed up. Now we're going to ask why. There are deep-going, complex analyses of the situation in Berlin, but I'd rather start with something really shallow. How Berlin portrays itself. Berlin, city of freedom. They've got all these nice slogans there that always end with "weil es geht in Berlin", because it's possible in Berlin. I asked, what is possible and what is this freedom? Because really, if I look at the situation of myself and my partner, there's really nowhere it's possible for me in Berlin, apparently. I also don't understand what should be this freedom of me standing in the cold waiting to view a flat. What is this freedom exactly they're talking about? The resource housing in Berlin has hit exhaustion. Why I'm making this connection to resource exhaustion here is because I think that housing is fundamentally a resource allocation problem. A lot of what I'm going to say here about housing could be generalized to other resources. "Of one who went forth to find a flat in Berlin", like the fairy tale, is a project I started as a media artist about two years ago when I wanted to move back from here in Leipzig to Berlin. I needed a flat, right? I needed this resource. I started looking for a flat and it looks pretty much like this. You sit in front of your computer all day and you're refreshing these same websites endlessly. It's really mind-numbing. You sit there and you sit there. This is day one. It's day two. I hope I'm boring you by this point because it's really boring and really annoying to search for a flat in Berlin. You're just wasting your time there. By day three, I was so annoyed that I thought, well, actually, I do have a background in computer science and this is not exactly the thing I like to do. Why don't I simply build a bot and automate this entire process? There I am building this bot, which from the outside I admit must look exactly as boring and tedious as searching for a flat yourself. To do this, I deployed some cutting-edge technology.
I'm going to spill some secrets here, so prepare. Which is basically, well, I'm going to do exactly the same thing as before, just automate it. I was using a browser before, so now I'm just going to automate my browser, do the same things as before. A lot of people ask me, well, are you hacking this or accessing their API? No, nothing. I'm just using Firefox as before. That's the nice little fellow. And then Selenium, which is a way to talk to Firefox to make it do things that you would usually do. And then Python, which is a programming language I like for its simplicity. And what you're doing when you're searching for a flat is that there's this button. So you're looking for something on this web page and you say, well, it should say submit or unfragasenden, which is German for submit this request. And then you're clicking on it, cutting-edge technology. And so, yeah, if people ask me, is this a hack? No, it's just automation at work for an individual, which seems to be something that we're so unused to that it seems like a hack. But it really is a hack as in you're using this really dumb old-school technology to do something because you've kind of understood how it works, but you're not doing anything unusual, really. And one thing I want to find out is that building this bot is something I personally enjoy. So I rather spend my time building a bot than looking manually, although I must admit I've probably spent as much time tinkering on the bot than if I had looked for the flat manually, honestly. But I'm also pointing this out here because a lot of us here enjoy coding. And so we always think, oh, I could automate this rather than doing it manually. But sometimes that thing that we're automating also for other people might be something they enjoy. So we should take a listen to kind of we're automating parts of society that maybe there are people that enjoy doing this. But I'm sure there's no one who ever enjoyed looking for a flat online. So I feel good there. Yeah. And then I found a flat through this process. I wouldn't have found this flat otherwise because it was online only for half an hour. And then I moved back to Berlin and the end of the story. Thanks for coming. APPLAUSE Which is, of course, not what happened because people started asking me, oh, I'm in this dire situation, a friend of mine is in this situation. And they got messages and messages and I started wondering, is it fair if I give these people that happen to know me who are often computer scientists themselves to give to them these tools? Because it's unjust in a sense, right? So let's review what happens if you build a bot. I mean, it works. We see this. I have a flat in Berlin now. It works. It fixed my situation. But it's probably unethical because you're getting this advantage, right? So what happens if you share it with your friends? It also works for them for a while, I guess. It also kind of fixes their situation and it's still problematic. And then you're like, oh, I'll release some GitHub and everybody can use it. But that's still your friends. It's still people who can use this technology. The first version of the bot, I couldn't give to anybody who was not technical because you had to configure it in the command line, all these things. That's not public. That's still your friends just internationally. So what if you give this to the real public? Will it still work? Will it fix anything? And most of all, in what ethical situation are we there if we're doing this? Also, how do you even do that? 
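For readers who want to see what the setup described above looks like in practice, here is a minimal sketch of Python driving Firefox through Selenium, as the speaker describes: open a listing, fill the contact field, press the button labelled "Anfrage senden". The URL, field name and button text are placeholders, since the talk does not spell out the real site's markup.

```python
# Sketch, with made-up selectors: automate the same clicks a human would make.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()                            # plain Firefox via geckodriver
driver.get("https://listings.example/expose/12345")     # placeholder listing URL

# Fill the contact form the way a human would, then press the submit button.
driver.find_element(By.NAME, "message").send_keys("Sehr geehrte Damen und Herren, ...")
driver.find_element(
    By.XPATH, "//button[contains(., 'Anfrage senden') or contains(., 'Submit')]"
).click()

driver.quit()
```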
How do you give a bot to the public? Because you're still in this resource allocation problem. You're still fighting for the same resource. You're still competing. So there's often the incentive to not even give it to everybody. And not everybody can be satisfied. It's not something I can fix magically to give a flat to everybody in Berlin. So that was an interesting situation. And if you're a researcher, you would probably now say, oh, I'll write a paper about this. As a media artist, you will say, oh, let's do an exhibition about this. And so this is exhibition number one, act for the first act. Your future is calling reply now, which was showed in the art academy here in Leipzig. And one thing to know for the people who are not from Germany, there's a bit of a competition between Leipzig and Berlin, just who's cooler and where do you move an artist if you really want to be on the edge. And it was hanging in the edge room of the university. So it was confronting everybody the moment they entered the school. And what you see up there hanging is a banner, which is using the same slogans as we saw before. And it reads in English, search for a flat on the cultural metropole, secure your career as an artist because it is possible in Berlin. And by this stochastic approach, I thought this is the language that you actually reach artists with, hopefully. And what was hanging above it was a box with a printer inside that would print flat ads coming online in Berlin in real time. And bit by bit flood the atrium with these, oh, give me a second. Give me a second. Okay, sorry, attack problems. It was filling the atrium with these. Here we are with these ads. And what would also happen is that every time something came online. Yes, we would get this notification sound. And what I was referring to there is that for a lot of people, by now technology has become something very negative. So they have this thing in their pocket that is pushing them to do this thing. They're feeling surveilled all the time. So we've got this very negative narration of technology in our lives. And this was also kind of like reminding them, oh, you haven't made it yet. You haven't moved to Berlin. Put out your phone now. Do this now. I would see people sitting with me in a bar suddenly pull out the phone being like, oh, there's a new flat online. I have to reply to this now. And so they're constantly feeling like they're being punished by technology. And people are going back to brick phones. They dream of moving to the countryside. And I wonder, where have these dreams gone about automation where people could maybe feel positive about this? Yeah. Now it worked for once. Yes. So where are these dreams about automation? Is anybody still feeling hopeful? And so one person that's feeling hopeful is the bot. The bot made this promise to us. The bot said, well, I can fix this for you. We can come back to new utopias. And so if we want to give this to the public, if we want to public utopia, we need a lot of computers. And then we can show this to the public. So I selected this shopfront in Neukölln, which is an area in Berlin, which has been hit especially hard by gentrification. And the idea was that people could come there and sign up during the exhibition for the Vornang spot. They would configure it for themselves. And then I would be standing there in the shopfront day and night and search for a flat for them. And people in Berlin really know the interface of these platforms. So it had the strong impact on them. 
They would walk past it and be like, whoa, this would be cool if I didn't have to do it. But unfortunately, it doesn't work. Because people, when they realized this is art, immediately thought it wouldn't work. There's a strong association people have between art thingies and things that don't work. They thought it was a video or they thought it was that. No one believed it actually worked. But they had this imagination. I could trigger the imagination of, whoa, what if we had a world where these things were possible? And then at the end of the exhibition, I made this possible. I put it on the Wohnungsbot website. You can still download it right now. It works. It was downloadable for all the major platforms. And now coming back to this: what is public if you're building a bot? Public to me means it's easily installable. You don't need to know about technology. You don't need to understand anything. And so I looked for something where I could have the same automation tools as before. So a thin browser wrapper. And I know a lot of you hate Electron. But really, for something like this, we need to get something to the public in an easy and understandable way. It's a great tool. Because public means accessible. Otherwise, you're just building software for you and your friends. And so I spent a lot of time tinkering on the user interface. It first greets you. Then it tells you a bit about, well, this is actually art. It's not just something that you can use. You can use it. But there's an art aspect to it. And then comes the important part of the configuration. You first tell it, this is where I want to live. This is the type of flat that I need. And here I already started inserting some useful things that you usually can't do, such as saying, well, I'm not willing to pay more than 10 euros per square meter. So I'm willing to put this much money up, but only if I get more space. Because otherwise, you just end up with expensive apartments with little space left. Then you have to put in tons of personal data, which was not my idea. This is just what the platforms request from you. And now the cool part, your personal application. And as you can see here, it uses placeholders. So the bot can later on imitate you much better. It will put in the landlord's name or the landlady. So the lessor's data will be in there. The street data will be in there. Normal people also just copy, paste, and do this by hand. And now the bot will pretend to be this human for you. Then you review everything, whether the data is correct. And then you start it and you see on the right hand side the normal user interface. So you can really see it acting. You can see it clicking. You can see it typing. There's an overlay for you to understand it better. But it's really just the normal website running in the background. And on the left side, you have an overview of what you've done so far. And then comes the fun part where you watch your computer writing this application for you. So now you will ask, okay, this is a nice bot. You've shown us you can build a bot. But where's the art in this, right? What's the artistic aspect of this? And so there's one more thing I haven't shown of the exhibition yet. There was this kind of art stopper, kind of like a shop stopper just for art. And it claimed that the artwork, which you cannot see here, is the change of the network. So what is this change in the network that I'm referring to here? There's three ways of looking for a flat at the moment. You can do it manually.
You can search with the notifications the platform provides or you do it fully automated. And when you're doing it manually, you'll sit there refreshing the same website all day. Because if you're not fast, you're not getting it. You can wait for their notifications, but the problem there is that the notification might come so late that the flat has gone offline again. Flats are sometimes online for only 30 minutes or something. Or the bot sits there and refreshes the website all day. Which sounds better, right? And then you normally spend valuable time reviewing the flat. And this is something I'll come back to later a lot, this time you spend reviewing flats. And what you have to keep in mind here is that you'll often assess a flat that you won't get afterwards. So you're just wasting time on things that you have no chance of getting. The bot doesn't do this. It just applies to everything. And then comes the part where you have to write the application, which doesn't consume that much time. I mean, copy-pasting and putting in the lessor's name doesn't take much time. But you have to be in front of your computer. Which is hard, especially if you're a working person. Most of the flats come online during the day. And you have to be there to do this physically. Or at least on your device. The bot can do this by itself. Because this is one thing you'll all know: bots are really good at copy-pasting. And this is what's necessary here. Natural bot work. And then it's the lessor's turn. The lessor will go through the first few messages that hit their inbox. So it's important to be fast. And then assess, well, does this person fit my criteria, and invite you or maybe not invite you? And I've explicitly left out the option that the bot gets rejected here. Because of course it can get rejected. But you'll never know. You have never reviewed the apartment. You have not written an application. You've not wasted time on it. So it doesn't matter if it gets rejected once in a while. Whereas if you really wanted this flat, you'll feel sad if you don't get it. And now this little black bot jumped up there, right? So what does this mean? It means that there's another case of people feeling mistreated by technology, when the lessors start using email responders to automatically simply message the first 20, 30 people: the viewing is next week at this time and date. And there was this article in the New York Times recently where it said that human contact is becoming a luxury good. And so not even when you're applying for a flat are you being treated by a human anymore. So sometimes you'll feel sad that you didn't get the viewing. Sometimes you'll feel happy. But in the case of the bot, this is the first time you assess the flat. So only if you're invited, you start considering, will I really go there or not. And so I want to talk about working time here. And why working time matters, and this will maybe get a bit chaotic, but I'll still try, is that if we jump on this theory of the labor theory of value, which you might not subscribe to, but just as a thought experiment, it says that a product is just measured by the socially necessary time to produce it. That's its value. If you look at time historically, you would have a real estate agent. They would spend time on finding this resource for you and you would give them money, normal transaction. Now these platforms appear, online platforms, which promise to you, you can do this in your, like, previously free time. You can do this, of course, for free.
You're saving money there, but you're not realizing that in a sense you're working, because they're somehow still making a profit. So who's making this profit? How, if no one is spending time on it? And so this brought me to this idea of, okay, if we're not okay with the current situation of housing, maybe this is a way we can reject it by saying, okay, we're not putting up free time anymore to work on this. We'll push this to the lessors. Because if you're not spending this time, someone has to spend it, and this will be the lessors again. So there could be this hypothesis of like a working time denial of service attack, which sounds cool, but of course doesn't work, because the lessors, what they will do is still just go to the 10, 20 first messages and select the best offer that you gave them, because we're always giving them our best effort and they can just choose. So as a thought experiment, it's nice. It's something I would keep in the back of your head, but it's not how it works. The lessors will still always win. And so if we say, well, this is what we dreamed of, right? You write the simple Python script and then you fix the social problem. This is what we wanted. But what really happens is this. You write this simple Python script, you think you can, oh, we can just do some tech solutionism, and the problems just multiply, because the problems will adapt. There will be problems within your solution, all these things. And so the slogan of the bot I've come up with was: there's no technological solution for social problems. And of course some of you will jump up here and say, no, no, no, there are things. And so I will go back to, okay, there are no exclusively technological solutions to social problems, because of course under certain circumstances technology can play an important role. But what I want to get at is that if we simply reframe this as: it's not a solution, it's a reaction. If we just change this word, we get a lot further. So let's reframe this. I've not solved the Berlin housing market situation. I've reacted to it. So do this the next time you're discussing a solution with someone, see if reaction would not be the proper word for what you want. And if we say reaction, another thing becomes apparent: someone else might react too. If I had solved it, no one would talk about it anymore. It would be solved. But I've only reacted, so the platforms will also react. And reacting for them mostly means forbidding. How do you forbid a bot? Well, there's mainly three ways. You can put in technical barriers, which are mostly designed to say, well, you should be human to do this, prove your humanity, which might get interesting in the future with more AI and better captcha prevention technology. There's legal barriers, which is simple. You just tell people, do not automate this, otherwise I will sue you and you will be bankrupt forever. Seems like a simple solution. And there's another thing that we don't think about that much, but which is important, which is simply moving on. So people admire capitalism for its ability to adapt. And so these are three startups that, from what I understand, have adapted to the situation. These are platforms where you sign up, you put in your profile, and you can't search for flats. They have a matching algorithm which will then say, oh, this lessor is looking for someone with a high income. So, hey, you lessor, pick one of these 10 people that we suggest to you.
And I hope that you all see the danger here, that what is happening now implicitly would be explicit: that people can filter for only high-income people, only people — well, it might be latent variables at some point in the future — only people that are of, like, the right social class, all these things. So it just moves on. And then the question is, can we even use automation for good? Is it maybe implicitly designed to not help us but the other side? And to understand this, I will go to Michel de Certeau, who has these two words which we use in our normal life interchangeably. But he does not use them interchangeably. He says, strategy is something institutions do when they are trying to plan for activities. And there are tactics, which we use as individuals. And one example for this is that if we look at a city, institutions such as the city government come up with strategies. They say, here are the entrances to this park, there are the exits, this is where you should move. And all of us, we have tactics. We are like, oh, I will do a shortcut here, I will do this, I will do that. So the strategy never really works out, because we adapt it for our needs. But our tactics never change the rules of the game. And that is the important part here. We don't change the rules of the game, we just adapt. And the bot is obviously a tactic if I do it for myself. I found a way to find my ways through the housing market. And the question is, if I release this to the public, can this become a strategy? Can we scale software in a sense that we are now on an equal eye level, to this really becoming a strategic thing? And the answer is, of course, not in the case of the Wohnungsbot, but theoretically maybe. And what would this maybe look like? We are still allocating resources. And so what we would need, if we really wanted to have a strategy here, is to have a proper resource allocation mechanism that works, that isn't just based on money, that is not perpetuating the inequalities we already have. And this is the hard part. We live in a society where the only resource allocation mechanism we know is money, which leads exactly to capitalism. So the challenge is to find new resource allocation mechanisms, which I honestly don't know about, but there are people out there who look into this, which we should listen to. I am just an artist. And so as a summary of this, also like a very obvious take home lesson: if you build bots for yourself, ask yourself, am I gaining an advantage here and is it unfair? Am I giving others the possibility to take advantage of this, which will usually be your friends, either physical friends or GitHub friends? And then, would it be theoretically possible to give everybody this benefit? And if you say, oh, yes, I have something that does match these criteria, then there appears the question, should we have a right to do this? Should there be a right to automate? Which is hard, this question, because what would be the implications of this? If we look at Congress, they are struggling with finding a way that is both privacy-preserving and does not resort to brain licenses and giving out these tickets. So it's hard to allow automation without being deeply intrusive on privacy or some other level and allowing for this. So a bot for yourself works, whereas a bot for the public often leads to expectable systemic failure. So I released the Wohnungsbot with two intentions.
On one hand, giving people this thought that automation could be cool, it could be something positive, this could be something I'm looking forward to — but then also the failure. So this is like, it's a drama, right? And we're now coming to this dramatic end where we simulate failure. We know that the Wohnungsbot currently works, you can download it, but it will fail in the future. And so I wanted to simulate this before it happens, so we can learn from it and adapt as a society. And the way I wanted to simulate this is coming back to theater. Theater is a really old technique for simulating something and sharing your insights with an audience. So as the last consequence, which is the title of Act 3, we, well, I built a theater, and as I'm into automation, I built a fully automated puppet theater. Again, puppet theater being something that's publicly accessible in my opinion, it's something that people can relate to and thereby hopefully better understand what I'm trying to tell as a story. And again, I'm using very high-end technology. So the plot of the theater, from the very beginning, is that the princess wants to move back to Berlin and the Wohnungsbot tells her, well, I've helped so many people, I will help you find a flat. And the princess, historically in these German puppet theaters, is someone very helpless. So she's in the position of someone needy here, but if you watch the full play, you will see that she's really cool. Don't worry. The Wohnungsbot says, I can do this for you. He's helped so many people. But now, as everybody's using it, the landlords and the lessors are reacting, and they have the lessor bot now, the Vermieterbot. And the Vermieterbot keeps rejecting the princess, because other people with more income have also started using the bot and her income is not high enough to have any chance anymore. And we'll now jump back into the play at the dramatic end with the war of the bots. If a person has been using you for a long time, it's tragic, because he still has no flat — but at least he didn't spend his own time on the search. I can throw it away! No, I'm not. Now be quiet, Kasper. I am the wrong tool. I'm a machine, I don't care about Berlin as much as the investors who bought your house. 12049, 10997, 10249, 13353 — I know your postal code, but I was never in your street. The investors probably weren't either. Did I give you hope? Did I find a flat for you? I would be happy if I could be emotional, honestly. But I have to disappoint you. I help people find a flat. It worked well for a long time: they found a flat faster and easier, because not everyone was using me. Now everyone uses me, and no one has a benefit anymore. But at least no one loses their time on the search anymore. But honestly, that's not what it's about. Don't just think about yourself. What about those who don't have internet at home? Or no computer? Or, put differently: what about all the people who, no matter how long they look and no matter by what means, will find no adequate housing that they can afford?
Those who are really forced to live like this don't even get to think about public debate. Those who have been looking for a flat for months — I am no real help to them anymore. This system has given up on you. I am the wrong tool. I think I'm a really good tool: I work as intended, I automate well, I work self-sufficiently and tirelessly — but I am the wrong one. It has been said often, but I can only repeat it: there are no technical solutions for social problems. Maybe I found a flat for one or the other of you successfully, but that has not changed anything. I can't help you. You have to put the tools away and take it back into your own hands, together. Organize yourselves. Get back to the values: regulations, controls, distribution, protection for emergencies. Techniques, millennia, values — what more do you want? Your anger has the wrong target: the problem is not the search for a flat, but the housing market. The problem is that the housing market is a market. I am the Wohnungsbot, a project, a hack, that was carried from a personal need into the public. A young man with big promises, big ambitions. But my goal was always to step down. I am the one who is standing here in front of you and saying: you are not screwed yet, you can still act. I am the one who is warning you: don't let yourselves be screwed — the city is built on you. Now is the moment in which you can still demand how you want housing to be allocated. Now is the moment in which you can decide how automation should shape our lives. Now is your moment, not mine. Nina. You! Thank you, Clemens. Super. I had you in my pocket, so... I can imagine some people have a question here, and please step up and take a microphone and drop it. Number four, there is someone. Sorry if it is on your website, but did you plan on bringing the Wohnungsbot to other cities? It is a question that has been asked from day one. Why Berlin, why not another city, this or that city is in a much worse situation. The reason I chose Berlin is not only because I was there myself, it is because I think that Berlin is the city that is the most interesting in Germany; it feels to me like the one city where this situation could actually change. They are debating something called the Mietendeckel, the cap on rents, which will get to the situation where we have to reconsider how to allocate flats afterwards. I think it is the city where the most is moving, and I want to have it only in one city because, to me, artworks are experiments that exist outside of the scientific way. I am still wanting to experiment with something, I want to have a model. If I do this with every city in Germany, we don't learn anything, because it will just change everywhere. I want to have one city where we can see how it changes if we apply the solution, right? I am using it myself. How does the city react if we do this, in comparison to all the other cities? The source code is on GitHub, I know that all of you can fix it for your own cities. I am not doing it for the public because that is not my goal. It is still an artistic research in that sense, where I want to see what happens in Berlin. Yes, number three is here. Thank you for your talk. I don't really have questions, only two recommendations. First of all, you said something in the end about people being rich and how to pass by them. Sometimes I think that we are the new riches. We are the technical people who get around the expensive things by automating things.
Because it is great to automate things and pass by everything by the people who have to pay for everything. I don't mean stealing, but automating things. Well, that gives you some kind of richness that we deserve. Second thing is, I saw your theater thing and the first thing I thought was, oh, this would be cool if you would do it with an improvisation group. I play improvisation together myself. If you get an improvisation group on stage and you give them the input, I am a robot. You play the robot and they play with you. You can get amazing scenes and share ideas about this whole housing problem, not only in Berlin. I have to cut the commercials here. Hey, sorry. But improvisation. The advertisement time is over, number two again. Let me give you one short reaction. You said there are other forms of being rich, such as knowing how to code. In that sense, we should see capital as something very wide. Capital is not only money, it is also the social capital, the cultural capital. In the long term, these all convert into one another. It doesn't matter if you only know how to code now, at some point you will accumulate classical capital. We can focus on that, I think. Two things I want to ask. First thing is, did you check GitHub before creating everything from scratch? Because they are already projects there. Does it work? Quite fine. I want to understand. At the point that I checked, there was someone who built something that would automate parts of it. But as I was trying to find out, something being on GitHub to me is not the same as being public. I have built the bot twice, with the exact purpose of building something that is not on GitHub, but somewhere where people can actually use it. I was reviewing things that existed, but my goal was different. It was not just to find something for me, but to release something to the public. Okay, and as I was going to ask, why did you choose such a high level of automation? After 25 non-premium requests, and if you give such a bot with a high automation to many people, this will lead to many flats going off the market very fast from people who did not even check them manually. You're pointing out one thing where you can say, okay, this is problematic, the bot shouldn't exist, you're creating a problem. I was trying to point this out before, that whenever you react to a problem that's social and not technical, you will create new problems. What you're pointing out is one of these many new problems that will arise from this. The same being, what is with people that don't have a desktop computer at home? I met people at the exhibition that were like, I can't download this at home because I don't have a desktop computer. I don't have internet at home. This is where you must see, this is not meant to solve this, to do this or to that, this is an art project. I want to get into this discussion with you. I want to have people on the street thinking, okay, maybe we need different technology, maybe we need different regulation and different society. This is about creating this imagination, about creating aesthetical moments for yourself and not fixing this. Yes, there are all these problems, but that was not my focus in this. I wanted to have a theater there where people can start dreaming rather. Okay, thanks. Correct. Number three, please. Hi. You basically said, if I use your bots and the response rate depends on my income, what is your experience if I say, okay, but I pay more for the flat? 
As it's capitalism, if I pay more for something and I say I pay more for my flat, I will get more responses? I've heard from people doing this in other countries. I've never heard of someone actually doing it in Germany to just offer to pay more. I have no idea how people react here if there's something where they can't do it. It's not something people talk about in my surroundings, so I have no idea really. It might work. It might not work. I don't know. If you search for a flat which costs 700, it's really hard to find because so many people are looking for flats. But I say, okay, but for me, it's fine also 900 and I adapt the search parameters. Do I get a higher response rate with your bot? If you... I mean, the thing is, if you're looking for flats for 700 in Berlin, you'll likely get very few responses because there's very few flats like that out there. I can tell you that much. I've not done any data... I was considering adding data research parts to this. I've not done so I cannot give you any statistics on where you'll get the best response rates. It's out there, you can tinker with this, but that was not my focus here. Okay, thanks. Thank you. Are there other questions here online? No one online. There we can close, I think. Bourdieu is a good reference, actually, that he gave regarding social capital, economic ground. Look that up, because that's actually the next step about what is then the context and in what kind of context you want to live, actually, and with what kind of people do you have to integrate. Thank you for this fantastic thing, for the theater that you gave us. Thank you for the talk. Thank you.
|
At the center of Clemens Schöll's latest art project is the "Wohnungsbot" (flat-bot), which automates flat searching in Berlin. But it doesn't only try to search flats for everybody, it fundamentally questions power-relationships in (flat-searching) online platforms. Where are the utopias about public automation? Who should be able to automate what, and how? With increasing urbanization and financial speculation on the housing market the search for a flat in any big city has become an activity that consumes a lot of resources for people in need of housing: beyond the emotional load a significant share of your supposed leisure time is being consumed by repetitive tasks. Online platforms force us to refresh pages, scroll, click here, click there, look at a few pictures and eventually copy-paste our default text over and over again. If you're ambitious you maybe adjust the lessor's name or the street. But honestly, why do we do this? It could be so easily automated. The 'automation drama in three acts' by media artist Clemens Schöll titled "Von einem der auszog eine Wohnung in Berlin zu finden" (Of someone who went forth to find a flat in Berlin) speculates about alternative strategies and narratives for both the housing market as well as automation itself. At the center of the multi-exhibition project stands the Wohnungsbot (literally: flat-bot), a free open source software to automate flat-searching and applications in Berlin, released to the public in June 2019. But the Wohnungsbot is about much more than just rejecting the out-of-control housing situation. There are no technological fixes for social problems. By reclaiming de-facto working time a fundamental utopia of automation is opened once again. Who should be able to automate, what should they be able to automate, and how? But even if these tools are publicly available – who is aware of them and who is able to use them? Looking back in history we find that automation has always been accompanied with struggles of power and labor. How have we reached a state where only institutions, be it private companies (usually for-profit) and the state, are allowed or able to automate? Why has automation become a synonym to nightmares of many people, such as mass unemployment? If we're not asked for consent (or don't want to give it) to being dehumanized by automated processes, how can we oppose these practices? Ultimately, we can look at ourselves (at Congress) and ask: where do we stand in this? With many of us being "people who write code" (title of a previous artistic research project by Clemens Schöll) we must reflect if and how we shape this tension with our work and existence.
|
10.5446/53119 (DOI)
|
Alright, our next talk is understanding millions of gates, introduction to IC reverse engineering for non-chip reverse engineers. Our speaker Kitty will provide a summary of methods and counter methods for integrated circuit reverse engineering and why you should care and what you can do at home. Our speaker Kitty is a researcher at the university and likes reverse engineering and cats. Please welcome her with a round of applause. So hi, thanks for the intro. I hope you like my pink cats. I tried to match them with my hair. I didn't quite work, but I'm getting there. I like to talk about understanding millions of gates, also known as how I learned to love looking at gates all day. Quickly, just for you, we had a talk just before mine which has some basics to reverse engineering, so I will try to build on that. I may repeat myself or repeat the other talk a little bit, so don't mind that. What I won't talk about, any kind of PCB reverse engineering or any kind of process teardown, I don't do that, unfortunately. I also don't do any kind of probing, so that's out of my expertise. I can duck, duck, go, but I don't do it for any kind of data sheet. I also don't do any kind of process analysis, so I don't actually use any SEMs to figure out how chips are made. And I don't care very much about side channel based reverse engineering. There is a whole heap of research, which is really interesting. I don't do that, unfortunately. What do I do? So what I'm interested in is what the hell is this chip? And some of you may recognize it's an open titan chip, so it's actually even going to be open source. Does that mean we actually know exactly what's on there? Well, we have a bit of a gap right here, so that's a bit of a problem. We do know the RTL description, and apparently from the packaging it's all going to be open source as well, but any kind of fabrication data we just don't get told about. So that's where I kind of come in and go look. I really would like to know everything about this chip. That'd be pretty cool. So let's try to reverse engineer it. So we've had this motivation before, right? If you start Googling software, reverse engineering, you get told these are the top 12 tools. These are your best 10 tools. If you do the same thing for hardware, reverse engineering, you get the Wikipedia entry for reverse engineering, which is not so helpful, and maybe kind of a couple of maybe talks or some kind of academic research, but you don't really get given any tools. You don't really get given any methods. So that's a pretty big problem for me. And that's what we're trying to change. We've had this introduction of the life of a chip, of how a chip is made. Usually you have some kind of design that's made. You know, you have like a design house or a P vendor. You write your RTL description. That gets verified, synthesized, placed in route. You eventually have some kind of play out right here, and it goes off into the foundry wherever that may be in the whole entire world. You get some kind of wafer back. You test it, and you have a chip, and then you have sort of your life cycle management back here, where you basically figure out is that chip now done and dead? Do I recycle it? Why do we care about reverse engineering? What is a company worried about when it comes to chips? They don't want the foundry stealing the design, so the foundry goes cool. Thanks for the design. I'll produce the 2 million you wanted. I'll also produce 20 million I can sell. So that's pretty cool. 
They don't want that happening. They also don't want out of spec or badly tested chips to be going anywhere. So basically the foundry says look, we tested it, and 90% are okay, and we'll keep the 10%, and we'll definitely get rid of them. Yeah, yeah, sure. They don't want any kind of out of spec chips to get back in the market, and they don't want any kind of recyclable chips to get back in the market because they're losing money. So that's why a company generally cares about reverse engineering, about chip security. Why do you care? It's a little bit different. So you don't want any IP to be bad. What does bad mean? Well, it might have a trojan in there, so some kind of malicious hardware. What could that be? Well, we have chips pretty much anywhere, so maybe in your car, and all of a sudden your car starts stopping and everyone else is too. Or in your nuclear power station, I won't go on. It could also be that maybe the in-house design team made a mistake, or decided to hide something in there which is not in the specification. We all know how that goes. When we start finding random things that never got documented anywhere when we bought a chip and are using it, we're also worried about the foundry putting into something in there that not even the company knows about, so we stand up buying chips and the foundry in wherever this chip was produced starts doing bad things. So we very much care about the use, right? We want chips that are working and not evil. That's kind of why we care. Obviously, that also means we don't want failed tested chips in our product. So let's have a look at what the life of a test chip is like. We've also seen this, so usually we come out of the foundry, we have our lovely little chip, and we start de-packaging the poor thing. We delay it, we image, we have to stitch all that, so that's very much kind of an image processing kind of topic, and before that a physical processing topic. And eventually we have some kind of interconnect identification, so that might look something like this, that you start finding pictures and then you have to do image processing to figure out where exactly the connections are or some kind of standard cell identification, so you go through and you have some poor guy, reverse engineer, or 300 cells, and then you find them again with pattern matching, the rest 2 million minus 300. So that's pretty much not trivial, it's complex, but we know how to do it. We know how to do image processing, that's kind of easy. We kind of have the what the hell do we do now topic, we have this net list, what does it actually mean? And I certainly hope you have these at home to do kind of reverse engineering, so maybe they're standing out in your garage, I tell you, yeah, call me, you know, and I'm obviously just kidding, you don't actually need any of these as we've heard before, companies do do these kinds of things for you, and for the interesting part, the actual what comes after I have a net list part, you don't need any of this, so that's quite nice, usually this kind of laptop I have here is enough. Let's talk about the problems we're actually looking into, so what is the really interesting part of net list reverse engineering, we also call it abstraction, trying to get these maybe millions of gates into something that you can understand. So what we have is, you know, some kind of images, some kind of net list, and the first thing we do, and we've had this in previous talks is to figure out what the hell is the hierarchy, right? 
So what modules do we even have on there? What do they look like? Where are the borders to these modules? And this would be kind of the topic partitioning, so in synthesis we partition to do good layouting, and in reverse engineering we partition to figure out where the modules are. That's kind of our first step, and we actually already fail pretty badly there, as a spoiler. The second thing we do is then to actually identify what those modules are, right? So now we know, hey, we have ABC, what is that? So maybe we have something like, that's a crypto core. And this is actually coming back to the point of the previous talk, that graphical analysis of netlists is not so great — that's 250,000 gates of a RISC-V, and you can see quite easily, if you do present it as a graphical thing, at least we can kind of figure out maybe where the modules are. So I'll do some more of that in a little while. So we have these two main problems: we first want to find the modules, then we want to identify them. How do we do that? I'll give you a quick overview of what we do in academia now; by the way, in big companies we usually just have like 200 EDA specialists who look at the design and go, yeah, look, I wrote this design in 1980, so maybe it's probably that thing again. So the HR costs are actually quite big for this, and as we've heard in the previous talk, there is no automation here as of the time being. So let's talk about partitioning methods. The first idea that we have is: generally in a chip we do any kind of work usually on a data path. So maybe you have a 32 bit multiplier, and what actually happens is that you have two 32 bit inputs, and every input has the same thing happen to it 32 times. So every bit basically has the same thing happen to it, and that's duplicated 32 times. And we classify something like that as a word, so a data word, and we can try to figure out where these words lie and then propagate them through the whole entire design, so we hopefully come from input to output, and then say, look, these are probably at the boundaries of modules. So in particular when new words are starting to be combined in different ways or maybe words are being split up, that's probably a new module. So if you go from a bus to your multiplier, all of a sudden you're now working on 32 bit architectures, you can say, look, beginning word and ending word — everything in between is probably my module. The other thing you can again do is graphic analysis. So this is actually a fully parallelized AES implementation, which you can tell by the fact that it has 200 S-boxes. Please don't write your AES like this unless you really have to, because it's very easy for us to find, it looks exactly like this: the 200 spots you can see are exactly these S-boxes, and if you do graphical analysis, you will find that they're more densely interconnected and you can get these kinds of images, which are actually, yeah, pretty easy for you to see. So if you ever start clustering a design and it looks like this, that's probably an AES. So those are kind of two different ways of trying to partition things. It tends to be quite difficult; we would like to be able to partition each cell specifically into one module, that's not usually possible, so we fail at that quite badly for the time being. We can usually get like 80, 90%, which would be pretty cool in a normal sort of thing, but here it's not really enough, unfortunately.
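As a toy illustration of the graph-based partitioning idea just described — treat the netlist as a graph and look for regions that are wired more densely to each other than to the rest — here is a sketch using networkx; the edge list is invented, and a real netlist would of course have millions of edges and need far more careful preprocessing:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical netlist given as gate-to-gate connections (in reality you would
# build this from the extracted wires of millions of gates).
edges = [("u1", "u2"), ("u2", "u3"), ("u1", "u3"),   # densely connected blob A
         ("u4", "u5"), ("u5", "u6"), ("u4", "u6"),   # densely connected blob B
         ("u3", "u4")]                               # sparse link between the blobs

g = nx.Graph(edges)

# Community detection groups gates that are more densely wired to each other
# than to the rest -- candidate module boundaries for manual inspection.
for i, community in enumerate(greedy_modularity_communities(g)):
    print(f"candidate module {i}: {sorted(community)}")
```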
The second part: so we have now got our module, we would like to know what it is, and what we usually do is comparison. So hopefully we have somewhere a database with all known designs, so this would be our database, and we compare our unknown design to a known design in this database. We've had the topic of trying to figure out or trying to get lots of samples; that might be for example a design library, you could also use something like OpenCores or LibreCores, and maybe as a company you also probably have access to some kind of IP that you can use here. And the idea is, look, I've got an unknown netlist, let's get some inputs, let's get all the inputs, and let's connect them together, and let's get our outputs and see, are they the same? So if for every single input that we can put into both designs the output is always exactly the same, it's probably functionally exactly the same thing, which is quite nice. We have a couple of problems here. So we actually need a perfect functional match, which requires a perfect netlist extraction. So while we're de-packaging, while we're de-layering and imaging, you better not have any dust on your pictures, because all of a sudden you might be missing a connection or a gate, and this doesn't work anymore, right? Then you have two million gates, and that's a pretty high number of polygons in your image; the chances of there being no problems with your image are pretty minuscule, so this is going to fail. Let's say you do have your perfect netlist extraction — maybe you're working on an FPGA, so that's easier to get there, we don't get errors. And you also need a perfect partition, so this step I just described you also need to do perfectly in order for this to work, and you also need a known netlist — obviously if you don't have that, that's going to be a problem. You can tell why this stayed in academia and never went out into the real world kind of thing. The second thing we can do is kind of more graph-based, so we can use some kind of fuzzy methods. I have nine designs here, you might be able to tell that three are kind of similar and two others are kind of similar, so we can do some kind of graph analysis and have a look at this kind of fingerprint of what it looks like. And spoiler, it's these three — these are all AES rounds or AES implementations — and these two, these are both Keccak implementations, so SHA-3. You can try to do some kind of fuzzy matching based on this. So if you're interested in graph theory, this is now your part right here: figure out what could be a structural similarity. The first question I always get asked here: well, you don't know what kind of optimization tool or synthesis tool the other people used, you don't know what kind of cell library they use, how are you going to be able to compare that kind of stuff? So what we actually did is we had a look at three different cell libraries and two different synthesis tools, and we had a look at the same design, so this is actually an AES round, and had a look how they actually turned out at the end. And I would say that's kind of pretty similar — at least with your eyes, you should be able to match it. Teaching an AI to do it does require a lot of data; spoiler, we have done it. So that's regarding the topic machine learning, that's actually possible if you have enough samples. Cool. So we have some kind of output, hopefully: we have some kind of hierarchy, we figured out our modules are these bits, and we've also figured out what they do, and that's pretty cool.
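A brute-force sketch of the functional comparison described above: drive both the unknown module and a library candidate with every possible input and check that the outputs always agree. This is only feasible for tiny input widths — a real tool would need SAT- or BDD-based equivalence checking — and the two functions here are just stand-ins:

```python
from itertools import product

# Two combinational functions over n input bits; simple stand-ins for real modules.
def unknown_module(bits):            # recovered from the extracted netlist
    a, b, c = bits
    return (a & b) ^ c

def library_candidate(bits):         # a known design, e.g. from OpenCores
    a, b, c = bits
    return (a & b) ^ c

def functionally_equal(f, g, n_inputs):
    # Exhaustively compare outputs for every possible input combination.
    return all(f(bits) == g(bits) for bits in product((0, 1), repeat=n_inputs))

print(functionally_equal(unknown_module, library_candidate, 3))  # True
```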
So yeah, we can reverse engineer all things, great. I mean, leaving aside, you know, the problems that we face — what's our problem? Attackers can do it too. Well, that's kind of shit, right? We don't want attackers to reverse engineer our stuff; they might be able to put hardware trojans in there, they might steal IP — we don't care so much about that, we care probably more about the hardware trojans, we don't really want that happening though. So there are a couple of countermeasures that people have thought about. The first one has kind of been mentioned a little bit, logic locking; there's also split manufacturing, camouflaging. I'll go into those three. There are a couple more — spoiler, they're all pretty broken. So the first thing we did — not me, this is research I'm presenting now from other people, please do note that — is to say, hey, let's do it like in software, let's encrypt the functionality, right? So we add some kind of key, and we do that by adding key gates. So if you have, for example, this lovely little netlist, we might add two more gates with some key inputs, and if you don't apply the right key, the functionality is not there, so your chip doesn't really do what it's supposed to do. So that's quite nice. We've actually had quite a few different ways of doing this. So for example, the first idea was we just add key gates everywhere randomly, you know, it'll be good enough for the functionality — and eventually that got broken, so we kind of had a smarter method based on fault analysis, that got broken, so we started doing something called strong logic locking, which got broken, so we started doing something called SAT-resistant locking, which also got broken, and so on. So it's a bit of a cat and mouse game that we have going on there. I'll get back to that in just a minute. Let's have a look why it does fail. So the first thing is that these key gates are pretty easy to find. There's actually a nice paper which says, look, these are the key gates that were put in here. First things first, they're usually XNOR gates or XOR gates. Yeah, let's look at all the XOR gates. These are pretty easy to find, and even if we do do some kind of optimization afterwards to try to sort of hide those XOR or XNOR gates, it's still possible to reverse engineer it, because it's very local and very deterministic how our synthesis tool might do this. So we can actually find all those locations pretty easily and maybe even cut out the key gates if we're trying to redesign it, or get the right functionality if we're trying to add in the trojan. Furthermore, the structure doesn't really change. So if we're doing any kind of fuzzy analysis, like structural stuff — we have, again, an AES round here, and if we do like 20% key gates in there, well, it doesn't change that much. So if I ask you, is it still an AES round? And you said no, I'd probably, yeah, wonder if your AI is broken, maybe, or something. I don't know. So we can still do functional or structural analysis quite easily with these key gates, so that kind of sucks. We also have a lot of attacks on specific schemes. So this is the current landscape of logic locking schemes. In blue, we have our schemes. In red, we have everything that's breaking them. And I think there are only a few which are currently not broken. Generally the ones with a really high overhead. So that kind of sucks.
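To make the key-gate construction from the beginning of this section concrete, here is a minimal sketch — the functions are invented stand-ins, not any particular published locking scheme:

```python
def original(a, b):
    # Some original combinational logic.
    return a & b

def locked(a, b, key_bit):
    # An XOR key gate spliced onto the output: with the right key bit the XOR
    # is transparent, with the wrong one the output is inverted / corrupted.
    return (a & b) ^ key_bit

CORRECT_KEY = 0
for a in (0, 1):
    for b in (0, 1):
        assert locked(a, b, CORRECT_KEY) == original(a, b)       # right key: same function
        assert locked(a, b, 1 - CORRECT_KEY) != original(a, b)   # wrong key: corrupted
print("correct key restores the original function")
```

With the correct key bit the XOR is transparent and the locked circuit equals the original; with the wrong bit the output is corrupted — and, as noted above, that very XOR/XNOR is also what makes key gates easy to spot in an extracted netlist.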
Which leads me to the next problem: we have practical difficulties, right? We have to have some kind of key in there. We always hate having to securely store keys. We need some kind of secure storage or maybe some other way of doing it. We also have overhead that we have to kind of look into. And finally, the verification team is going to hate you. I've actually talked to someone who said for NASA they don't do any kind of logic locking, because it just sucks too much to verify. So this kind of fails quite badly. Okay, we actually kind of get to this kind of situation where if someone says let's do chip security, everyone's like, yeah, let's do logic locking, instead of actually putting some thought into it. We have had some information on an FSM or sequential type of logic locking scheme in the previous talk. I would just quickly like to point out that other schemes are also broken. This is a scheme based on black-box FSMs. So the idea is that you have in the middle your original FSM, and from each state you can get into these black-box states. So those are like your evil states we can't get back out of. And those happen if you apply the wrong key. To get into this original — I'm going too fast — into this original thing here, you actually have to apply the correct key; as soon as you take it away, you have a wrong FSM. And that's actually also broken, because as we've seen in previous talks, if you actually look into that — this is the reverse engineered FSM — it's visually possible to identify all our black-box states here. And you can even figure out what the key is supposed to be. So you buy the chip, you apply your own key. There you go. Good. Let's get to something a little bit different. So logic locking is pretty broken. Encryption on hardware doesn't work. We don't have one-way functions. So that sucks. Let's do something different. Let's say in Germany we can maybe fabricate 45 nanometers. That's pretty cool. We want to do 8 though, right? And 8 doesn't work in Germany. Okay. So we're scared that the foundry is going to do something with our design that we're doing the 8 nanometers at. So we only give them the bottom-most layer. The one where it's actually important, the one that needs the 8 nanometer technology. And we give that to them. They fabricate that and we get it back. And in our own foundry we do all the rest. We do all the wiring and the upper metal layers. Yeah. And all the connections. And the idea is, like here, without the connections they won't know the functionality, because they can't actually reverse engineer the whole entire netlist. That also fails pretty badly. Why? Because first things first, we usually have some kind of physical proximity. If we design stuff, gate A at the top is usually not going to be connected to gate B at the bottom of this chip. So if you consider this gate, it's probably connected to this one or this one. We also don't usually have that many loops. So the chances of stuff going back on itself are pretty low, except in FSMs or maybe for flip-flops. And these gates can only drive a specific amount of other cells. So we have some load capacitance constraints that we can also basically throw into our attacker model or attack scheme. And we can figure out pretty quickly what the connections are actually supposed to be by brute forcing. The next thing is that we also have sometimes bad designers who don't do this very well. So they just design as usual, like normally, and then take away the top metal layers, and you get something like this. So you have your source gate and you have an unconnected connection. And then you sit there and go, I wonder if it's connected to gate A or gate B. And I think, spoiler, it should probably be gate A, right? So that is also part of why it's really broken. Yeah.
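The proximity heuristic for attacking split manufacturing can be sketched like this: connect each dangling input to the nearest dangling driver that still has fan-out budget. The names, coordinates and fan-out limit below are invented for the example:

```python
import math

# Hypothetical dangling endpoints with (x, y) coordinates from the lower layers.
drivers = {"u1.out": (0.0, 0.0), "u7.out": (50.0, 3.0)}
sinks   = {"u2.in":  (1.0, 0.5), "u8.in":  (49.0, 2.0), "u9.in": (52.0, 4.0)}
MAX_FANOUT = 4  # a driver can only drive a limited load (capacitance constraint)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

fanout = {d: 0 for d in drivers}
for sink, s_pos in sinks.items():
    # Guess: the closest driver that still has fan-out budget is the real source.
    candidates = [d for d in drivers if fanout[d] < MAX_FANOUT]
    best = min(candidates, key=lambda d: dist(drivers[d], s_pos))
    fanout[best] += 1
    print(f"{best} -> {sink}")
```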
Good. The final kind of countermeasure I'd like to talk about is something called cell camouflaging. If we can't hide the connections, let's at least hide the functionality of the cells. The idea is that if you consider these two cells here — it's a NAND and a NOR cell — they look visually different. So the metal actually looks different, or maybe the dopant looks different. We can design cells where the metal doesn't look different. So for example, these are two obfuscated NAND and NOR cells. Visually they look the same. So if you reverse engineer it on the picture, they should look exactly the same, but they are different functionalities. With the idea being: if you don't know what the actual functionality is, you can't reverse engineer the chip. We have quite a lot of problems here. The first thing is those cells are huge, right? So the area overhead is going to be pretty big. Means your chips are going to cost more. You're going to have some kind of heating problems eventually. You also need to find a manufacturer willing to actually produce these, because you need special tech to actually do this. You need a whole entire new cell library, and we all know how tech companies are with cell libraries. By now there have also been published de-camouflaging attacks. They are also based on what would be normal to have in a chip, to try to reverse engineer the functionality. There are even ways to brute force it, because this can only be a NAND or a NOR — well, that's two options, let's brute force that. Finally we have the problem that our SEMs, scanning electron microscopes, can actually distinguish between some dopant changes. So if the only difference is maybe the metal looks the same but the dopants are different, we can actually see that. So if you have a look here at these dots, the optical microscope can't distinguish between those two, but with the SEM we have lighter and darker dots, and the same thing with the FIB as well. So we can actually see the changes, depending on our level of tech anyway. For very small dopant differences this won't work, but for very big dopant changes or differences you will actually be able to see the camouflaged gates again as well. Reverse engineering is pretty cool, right? So we have some kind of IP protection for the companies. We can try to figure out, is there something in our design we didn't want there, some kind of malicious logic. We don't want nuclear power stations blowing up because we designed our chip here and then sent it off to wherever. However, foundries use this kind of stuff to figure out where to place hardware trojans, at least that's the fear. If they know the functionality they can figure out, let's place it right in the crypto core, that's where we want this thing to be. If they don't know the functionality, they won't know where to place the hardware trojans. And any kind of countermeasures we have are broken. So we don't really have anything very good working for us here to actually prevent someone reverse engineering chips. The ugly: we don't really have any tools. I know we previously had some discussion or some talk on the tool. We've actually found that we haven't really found anything that can be used commercially for an actual chip. So it's all very nice if you have your 6000 gates, and then you get your actual chip and it has 2 million gates and nothing works.
So there's that. We also don't have any formal methods. So that kind of sucks, because we can't actually prove anything, or it's not provably secure. And sometimes it feels a little bit like no one cares, right? We all kind of go, oh, shit, Meltdown is really, really bad. And yeah, it is. But if your chips are insecure from the hardware point of view, you're going to have a bit of a bigger problem. And it's going to be a bit more difficult to replace those. So that's kind of sad sometimes. What can you do? So those two main problems I mentioned right now, the partitioning and the identification: you don't need any kind of specific hardware for that. You don't need any kind of specific tools for that. Any kind of designs you can use here are open source. So I mentioned OpenCores and LibreCores. Feel free to download everything there. Feel free to synthesize everything there with the wonderful open source tools you can get, so Yosys or Qflow, all that kind of stuff. So that's something you can do. And you can find new methods to partition, to identify, maybe to counteract everything here, with a normal kind of laptop, for example this laptop which I work on. And you don't actually really need anything else. So that's quite nice. In particular, it would be good if you had some interest in graph theory, if you had some interest in functional analysis. So I think the previous talk mentioned that they're looking for software engineers. We would like your sort of research expertise here. We have some ideas of how graphs could maybe have some other functionalities to figure out how to partition better or maybe how to fix some errors there. So that's quite nice. And for that, please do feel free to contact me. I mean, if you have the SEM in your garage, also contact me. I'd love to see it. But if you have any kind of ideas of how to do this better, do let us know. And thanks so much. And I hope you have maybe some questions. Thank you very much for your talk. If you do have questions, please line up at the microphones in the room. If you have questions from the internet, just keep asking. Signal Angel, do you have a question? What awesome software do you use to visualize such graphs? Okay, so we use a couple of different ones. We're big fans of Gephi and graph-tool. And we sometimes use Graphviz. So those are all you can look into. With the extremely big graphs, we've had really good, good stuff going with Gephi. So even though we've had this problem with HAL, where you can't visualize stuff on the go, Gephi is able to actually run different kinds of graph algorithms on your graph in real time. You may need to up the memory a little bit sometimes, but it does — I think we've done up to one million gates in there, which is quite cool. graph-tool is awesome as well, though. Microphone number four, your question. Hello, I have two questions. The first one was with the scanning electron microscope. To my knowledge, you can actually see the chip working when you are using an SEM, because you can see the loads of the electrons and the transistors themselves. So why do you even need all this camouflage? Because you can see it working. Why do you need all this camouflage? This is regarding the camouflaging chips. Yeah, well, this is the question I asked myself as well. Sometimes it becomes so small that you can't see it working anymore. So if you're camouflaging in a really small technology, we actually start getting troubles to be able to properly visualize that. So it's actually difficult enough to get pictures to reverse engineer; to actually have it functioning and to see what's happening inside becomes difficult. 429 minutes, yeah, feel free to just see what it does.
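Coming back to the earlier question about visualizing such graphs: if you want to try this at home, one low-effort path is to dump a netlist graph into a format Gephi can open. A sketch using networkx, where the tiny graph stands in for a real extracted or synthesized netlist (for example built from a Yosys JSON export):

```python
import networkx as nx

# Stand-in netlist graph; in practice you would build it from a Yosys JSON
# export or whatever netlist representation you have available.
g = nx.Graph()
g.add_edges_from([("ff_0", "and_1"), ("and_1", "xor_2"), ("xor_2", "ff_3")])

# GEXF is one of the formats Gephi reads directly; open the file there and run
# its layout / clustering algorithms interactively on the graph.
nx.write_gexf(g, "netlist.gexf")
```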
So it's actually difficult enough to get pictures to reverse engineer, to actually have it functioning and to see what's happening inside becomes difficult. 429 minutes, yeah, feel free to just see what it does. And my next question is, isn't there like, of course, this wouldn't work with the foundry, but aren't there like physical methods, for example, special coating, which you cannot remove without trying the silicon? Yeah, so there have been ideas in that direction also to try to, so one of them is a coating, the other one is to try to put into the metal layer something that will, if you start physically removing them to sort of, which will scratch up the surface, there is ways to do that. That is very much a physical problem though. So that's what the material guys do. And as of now, as far as I'm aware, they haven't found anything where that's really been a problem. Right, signal Andrew, do you have more questions from that? You're good. And I don't see anyone else being lined up. All right. We have microphone one, sorry. Hello. Just a small question. Do you know a tool to do, let's say, to recover clock groups out of an at least? Because you mentioned the partitioning problem, I would say if they are clocking and all those stuff, you get the function as one clock group. Yeah. So actually clock grouping is probably the first thing you would do when you start partitioning any kind of design. So your first step would always to be to have a look at the clock tree and see which parts are clocked by the same design. And then you go from there and try to sort of divide and conquer some more. As far as I'm aware, how does do that? I'm not sure if there's any other tools out there which specifically do that. It would be nice just to have a tool with weed in the net list showing the placement and say this flip flops belongs to this clock group just by coloring or something like that. So I think this is something that HAL does do. So that is possible with HAL. Yeah. Microphone number four, your question. Yes, thanks. Can you tell me how much overhead do you add if you use this obfuscating and how often it's actually used because it seems that it's quite easily broken. Is this really a thing that's widely used? So what happens with chip design is that people start designing now for chips which we've done in like three years time. And when this was the hot shit, people decided let's do logic obfuscation for our chip in three years time. So there's chips on the market which do have some kind of logic locking on there. The overhead depends on how you implement it. You're not going to put in 100% key gates. You usually only do it for the parts that are important to you. So for example, for crypto modules or for your CPU. And so the overhead depends. Are we talking about the logic encryption or the obfuscation or the camouflaging? Because that's a little bit different. So the one depends very much on the cell library you end up using for this camouflaging stuff. There is a lot of different ideas of how to do it well. The more difficult it comes to for us engineer, the bigger the cells are going to be and you have overhead to 1.5 to 5 times as big in your chip for the part that you camouflaged. Again, that's not going to be your whole entire chip. That's going to be the part that's of interest for you. 
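Regarding the clock-group question above: the usual first step is exactly what was described, tracing which net drives each flip-flop's clock pin and bucketing the flip-flops accordingly. Here is a minimal sketch of that bookkeeping, assuming a very simplified, already-parsed netlist representation; the instance names and fields are made up for illustration, and a real design would first need the clock tree traced through buffers, muxes and gating cells.

```python
from collections import defaultdict

# Extremely simplified view of a parsed netlist: every flip-flop instance
# records which net drives its clock pin.
flipflops = [
    {"name": "u_cpu/reg_pc_0",   "clk_net": "clk_cpu"},
    {"name": "u_cpu/reg_pc_1",   "clk_net": "clk_cpu"},
    {"name": "u_uart/reg_rx_0",  "clk_net": "clk_periph"},
    {"name": "u_uart/reg_tx_0",  "clk_net": "clk_periph"},
    {"name": "u_crypto/reg_key", "clk_net": "clk_cpu"},
]

def group_by_clock(ffs):
    """Bucket flip-flops by the net feeding their clock pin."""
    groups = defaultdict(list)
    for ff in ffs:
        groups[ff["clk_net"]].append(ff["name"])
    return dict(groups)

for clk, members in group_by_clock(flipflops).items():
    print(clk, "->", members)
```

Coloring these groups in a netlist viewer, as the questioner asks for, is then just a matter of mapping each clock net to a color.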
For the logic locking, you can usually choose a little bit more, maybe you have some space left in your chip and you go let's do 20% logic locking on this one part and that's just enough to fill it in perfectly. So you have a little bit more possibility to choose there. Thank you. Hello, thanks a lot for the talk. I was just wondering with the potential evolution of packaging technologies like 3D stacking, how do you perceive the future of your field with that? Okay, so we've actually worked in the past with some people who are trying to do 3D printing and sort of try to integrate more in there. Again, that is kind of a physical process that has to be done. My research very much does this high level. I would assume that as long as we have to test chips, we're always going to have the tools to be able to analyze them and even with 3D stacking, we're going to have to be able to have tools to be able to actually get in the chip and do take the images. So for fault analysis, we need it, we can use reverse engineering. And the same thing goes for low-anameter sizes. Thank you. Microphone number two, your question please. FPGA? Can you do this also with FPGAs? Okay, so I'm not an FPGA person. That would have been the guys from Bohem. They do that. As far as I'm aware, this whole clustering stuff, so this first kind of stuff, that's exactly the same for FPGAs. I am not sure how camouflaging works for FPGAs, I don't think it does. Logic locking you can do. So that's fine. Thanks. Microphone number four, your question. Hi. Regarding the problem of sending your designs to the foundry and getting it back modified. So I'm asking like, what's the status of this in the real world? So is it a real problem? Can companies actually verify quite well that they received what they sent? Or is it you have to trust them blindly? How much is it actually possible? Okay, so there's two parts of this. This is a typical do hardware choasings really exist problem. As far as I'm aware, it hasn't been seen in the industry, but again, they probably wouldn't tell me if it had been. The overhead to actually get something in there in the foundry is quite large. So this is not going to be something that one single person does. This would need probably some kind of state actor to do this. The problem is that our foundries do lie in countries where we have maybe that kind of problem. At the moment, I know companies are checking their own products. So I am aware of big companies that do this kind of thing where they get their choos back and reverse engineered. It feels like it's very much in its baby steps. I do hope eventually we're going to get some kind of certification. So we have certification now for chips and I hope eventually they'll have some kind of we reverse engineered it when it came back and it was fine kind of certification. That's where I hope it would go in the future. But I think it's a long way off. If you had to guess, so let's say if a state entity sends out a design to a foundry and then a state operator there makes a modification, so who wins? Kind of depends on the state, I would say. So in America, all the big research and this is founded by DARPA and they have a lot of money. I'm going to leave it at that. All right. Thank you very much for your talk, Eddie, and for answering all the questions.
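To illustrate the logic-locking overhead discussion above: conceptually, each key gate is just an XOR or XNOR spliced into an internal wire, so the circuit only computes the intended function when the correct key is applied. The toy sketch below shows the idea on a three-gate circuit; real tools choose the locations and the XOR/XNOR mix much more carefully (so the correct key is not simply all zeros), and nothing here corresponds to any specific commercial scheme.

```python
# A toy combinational "netlist": output = AND(a, OR(b, c))
def original(a, b, c):
    return a & (b | c)

def locked(a, b, c, key_bits):
    """Same circuit with two XOR key gates spliced onto internal wires.
    With the correct key each XOR is transparent and the original function
    is restored; with a wrong key the logic is corrupted for some inputs."""
    k0, k1 = key_bits
    n1 = (b | c) ^ k0          # key gate 0 on the OR output
    out = (a & n1) ^ k1        # key gate 1 on the final output
    return out

CORRECT_KEY = (0, 0)  # in this toy example plain XORs make all-zeros the key

def matches(key):
    return all(
        locked(a, b, c, key) == original(a, b, c)
        for a in (0, 1) for b in (0, 1) for c in (0, 1)
    )

print("correct key restores the circuit:", matches(CORRECT_KEY))   # True
print("wrong key restores the circuit:  ", matches((1, 0)))        # False
```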
|
Reverse Engineering of integrated circuits is often seen as something only companies can do, as the equipment to image the chip is expensive, and the HR costs to hire enough reverse engineers to then understand the chip even more so. This talk gives a short introduction on the motivation behind understanding your own or someone else’s chip (as a chip manufacturing company), and why it might be important for the rest of us (not a chip manufacturing company). The focus is on understanding what millions of logical gates represent, rather than the physical aspect (delayering, imaging, image processing…), because everyone can do this at home. I will introduce some proposed countermeasures (like logic encryption) and explain if, how and why they fail. The talk will give a general overview of the research field and explain why companies are interested in reverse engineering ICs (IP overproduction, Counterfeits, Hardware Trojans), as well as why it’s important for an end user (IC trust, chip failure). Then, I will very shortly introduce the reverse engineering workflow, from decapsulating, delayering, imaging, stitching, image processing and then come to the focus: netlist abstraction. The idea is to show some methods which are currently used in research to understand what netlists represent. Some theory will be explained (circuit design, formal verification of circuits, graph theory…), but I want to keep this to a minimum. Finally, I will show some current ideas on how to make reverse engineering difficult, as well as some attacks on these ideas. The talk does not give insights into how large companies do reverse engineering (i.e. throw money at the problem), but rather show the research side of things, with some of the methods published in the last couple of years, which is something everyone can do at home.
|
10.5446/53122 (DOI)
|
In the following talk, Mr. Bernd Zeke will speak about the crashes and what led to the crashes of the most recent 737 model. He is an engineer and he also worked on flight safety and he analyzed plane crashes for a lot of time and a long time. And you have to keep in mind that this 737, although multiple models have been built, can be flown, all models can be flown with the same type rating since 1967, which is one of the many root causes of the issues that led to the disaster that killed 346 people. Let's listen to Bernd and he'll enlighten us what else went wrong. Yes thank you very much for the introduction. I see there are not quite as many people as with the Edward Snowden talk, but I'm not disappointed. Aviation safety has always been very important to me and I've done a lot of work on it and I'm happy to share my passion with so many of you. Thank you. So here's basically the outline of what I'm going to talk about. It's the Boeing 737 Max or 737 as some may say. I will briefly talk about the accidents, what we knew at the beginning, what went wrong and then what came to light later on. I will show our causal analysis method that we use very shortly, very briefly, and the analysis, an overview of the analysis that I did of these accidents. Then talk about the infamous MCAS system, the maneuvering characteristics augmentation system as it's called by its full name. Then I'll talk about certification, how aircraft certification works in the United States. It's very similar in Europe although there are some differences, but I'm not going to talk about European details in this talk. So it's mostly about the FAA and aircraft certification across the pond. Some other things and an outlook, how it is going to go on with the Boeing 737 Max. We currently don't know exactly what's going to happen, but we'll see. If we have time, there are a few bonus slides later on. So the Boeing 737 Max, the star of the show as you may say, it's the fourth iteration as the Herald already indicated of the world's best-selling airliner. I looked it up just recently. I think there are almost 15,000 orders. There have been for the 737 of all the series, the original, the classic, the NG and now the Max. The Max itself is the fastest-selling airliner of all time. Within months it had literally thousands of orders. It has now almost 5,000 orders, the 737 Max. All the airlines in the world are waiting for the grounding to be lifted so they can receive and fly the aircraft. So the first accident was last year. It was a line air, Indonesian flag carrier. Actually, I think the second or third largest Boeing 737 Max customer in the world with a couple hundred, 250 or something aircraft and it crashed relatively shortly after it entered service. And so we heard some strange things in the news and on the forums that deal with aviation safety. It seems that there had been uncommanded nose downtrim. So the tailplane is moved by an electric motor and it forces the nose of the aircraft down. The pilot can counter that movement with some switches on his control column and apparently the stick shaker was active during the flight and there were difficulties in controlling the aircraft. We didn't know at the time exactly what it was. And then for the first time the abbreviation MCAS surfaced and even 737 pilots, even 737 Max pilots, at least some of them said they'd never heard of it. It was a mystery. 
We later found that actually, in some documentation, it was very briefly mentioned that such a system existed, but not exactly why it was there. And I guess Boeing knew, and the certification authorities, as it turned out, sort of knew a bit of the story, but not the whole story. But especially people in the West, in the US and in other countries, said these are just poorly trained third-world pilots, and we expect that. And they weren't completely wrong. Lion Air has a particularly bad safety record, and it wasn't unknown to aviation safety investigators. There have been a number of crashes with Lion Air. So in the beginning we thought, okay, maybe it's a fluke, it's a one-off, or maybe it's caused by poor maintenance or bad pilots or whatever. Several people, on the other hand, already began worrying, because some flight data recorder traces became public, and there were some very strange things in them, which we will see shortly. And then, only a few months later, a second aircraft of exactly the same type and the same variant, a Boeing 737 MAX 8, also crashed. And you can see maybe on the picture on the left, it left a rather big crater. It really dove into the earth quite fast. It turned out, I think, at between 700 and 800 kilometers per hour. So really fast. And not much was left. I think the biggest parts were about this size, I guess. So all small pieces of debris, and the engine cores, which are a bit bigger. And from that one as well, flight data recorder traces became public. The recorders had survived, at least the memory in them, and were readable. So we finally found out something, and found some similarities, some rather disturbing similarities. We'll come to that in a moment, but first I'll talk a little bit about the Boeing 737 family in general. So there were, as I said, four models. There was the original, which had narrow engines under the wings. Not a lot of room between the ground and the engines. But it looked quite normal. You could say it was one of the first short-haul airliners with underslung engines under the wings. Then new high-bypass turbofan engines entered the market, which were much more fuel efficient. We're talking about maybe some 15 to 20% lower fuel consumption, so it was a big deal. And the Boeing 737 was re-engined and became known as the Classic. Big engines, but still mostly analog mechanical instruments. And it was basically the same as the original, except that it had some bigger engines. They had to shape the cowling a little differently to accommodate the bigger engines, but more or less it worked for a while. And then, as airlines demanded more modern avionics, so the cockpit electronics in aircraft, the Next Generation was conceived. It also got a new wing and new winglets, which again saved a lot of fuel. It had basically the same engines, except that the engines now were also computer controlled by what we call FADEC, full authority digital engine control. And Boeing said, well, that's probably going to be the last one, and in the next few years we're going to develop an all-new short- and medium-haul single-aisle aircraft, which will be all new and super efficient and super cheap to operate, all the promises that manufacturers always make. In the meantime, Airbus was becoming a major player with the A320. It was overall a much more modern aircraft. It had digital fly-by-wire. It always had digitally controlled engines. It had much higher ground clearance, so it was no problem to accommodate the larger engines on the A320. And Airbus then announced that it was going to re-engine the A320.
And for the A320 that was the first time it got new engines. For a long time you had the choice of two types of engines for the A320 and then they said we're going to install these new super efficient engines which brought with it another optimization of fuel consumption. It was another 15% fuel saved per mile traveled. Something on the order of that so it was a huge improvement again. And many Airbus customers immediately ordered the so called A320 NEO. And some Boeing customers also thought well this one is going to consume so much less fuel that we might consider switching to Airbus. Even though it's a major hassle if you have a fleet entirely consisting of Boeing aircraft if you then switch to Airbus it's a huge hassle. And nobody really wants that unless they're really forced to. But the promised fuel savings were so big that companies actually considered this and lots of them. And so Boeing said we need something very quickly. Preferably within two years I think. So that's for airline development that's very, very, very quickly. And they said well scrap all the plans about the new small airline now we're going to change the 737 again. And now the new engines were going to be bigger again. And so actually there was no ground clearance to move them in the same way as on the NG. So they had to modify the landing gear to mount the engines even further forward and higher and the engines were bigger. But the engines were on the whole. They were very good new development. The same type of engines that you could get for the new Airbus by CFM International. And so yeah they decided to make the Boeing 737 fourth generation and called it the MAX. So when we analyze accidents we use a causal analysis method called YB-COS analysis. And we have some counterfactual test which determines if something is a cause or something else. We call it a necessary causal factor. And it's very simple. A is a causal factor. B if you can say had A not happened then B would not have happened either. So I mean you need to show for everything that there's a causal relationship and that all the factors that you have found are actually sufficient to cause the other event. So you can probably not read everything of it but it's not really important. This is a simplified graph and I will show the relevant details later. And this is the analysis that I made of these accidents and you can see it's not a simple tree. As computer scientists many of you are familiar with trees and this is just a directed graph and it can have branches and so on. And so some things are causal influence, causal factor of several different things. So some of the factors actually have an influence in multiple levels. For example the airspeed influences the control forces and it also influences the time the crew had to recover the aircraft before impact with the ground. So these are some of the things that I will look at in a bit more detail. So here's one of them. Uncommanded nose down trim. So what happened apparently on these accident flights was that you can see it in the flight data recorder traces. I don't know, can you see the mouse pointer? Here there's the blue line and that is labeled trim manual and there's the orange line that is labeled trim automatic. And if they have a displacement to the bottom that means that the aircraft is being trimmed and nose down which means in order to continue to fly level you have to pull the control column with more force towards you. 
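The causal graph described above can be written down and queried mechanically: factors become nodes, and an edge is only added once the counterfactual test ("had A not happened, B would not have happened either") has been argued for that pair. A small sketch using networkx; the node names are a simplified subset of the factors discussed in this talk, not the full analysis.

```python
import networkx as nx

# Directed edges point from a causal factor to the event it contributed to.
G = nx.DiGraph()
G.add_edges_from([
    ("Faulty AoA value",               "Uncommanded nose-down trim"),
    ("MCAS reads only one AoA sensor", "Uncommanded nose-down trim"),
    ("Uncommanded nose-down trim",     "Very high control forces"),
    ("High airspeed",                  "Very high control forces"),
    ("High airspeed",                  "Little time before impact"),
    ("Very high control forces",       "Loss of control"),
    ("Little time before impact",      "Loss of control"),
    ("Loss of control",                "Impact with terrain"),
])

# In this representation the "root" factors are simply the nodes with no
# incoming causal edges, i.e. the leaves of the graph when read backwards.
roots = [n for n in G.nodes if G.in_degree(n) == 0]
print("root factors:", roots)

# Every necessary causal factor upstream of the top event:
print("factors of impact:", nx.ancestors(G, "Impact with terrain"))
```

This also makes the point about airspeed concrete: it sits upstream of two different branches, which is why the structure is a directed graph rather than a tree.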
And what you can see is in the beginning there are a few trim movements and on this type they are expected. It has an automatic trim system for some phases of flight which trims the aircraft to keep it flying stable. And then after a while it started doing many automatic nose down trim movements. Each of these lasts almost 10 seconds and there's a pause between them and in every case the pilots counter the nose down trim movement with the nose up trim movement. On the control yoke there are switches that you operate with your thumb and you can trim the aircraft that way and change the control forces and cause the aircraft nose to go up or down. So for a very long time this went on the computer trimmed the aircraft nose down, the pilots trimmed the aircraft nose up and so on. Until at the very end you can see that the trim the nose up trim movements that the pilots made become shorter and shorter and this line here it says pritch trim position that is the resulting position of the trim control surface which is the entire horizontal stabilizer on the aircraft and it moves down and it doesn't really go up anymore because the pilot imports become very short and that means the control forces to keep the aircraft flying level become extremely high and in the end it became uncontrollable and crashed as you can see here. So the pilots for various reasons which I will highlight later the pilots were unable to trim the aircraft manually and the nose down trim persisted and the aircraft crashed. And this is only the graph of one of the accidents but the other one is very similar and so that's what we see. There is a known system which was already known before on the Boeing 737 I think it's available on all the old versions as well which is called the speed trim system which in some circumstances trims the aircraft automatically but the inputs that we see the automatic trim inputs don't really fit the so called speed trim system and so for the first time we hear the word with the word MCAS. And we'll talk a bit more about what made the Boeing 737 different from all the previous models and that is the bigger engines. As I said the engines were much bigger and to achieve the necessary ground clearance they had to be mounted further forward and they're also a lot bigger which means at higher angles of attack when the aircraft is flying against the stream of the oncoming air at a higher angle these engine cells produce additional lift in front of the center of gravity which creates a pitch up moment and the certification criteria are quite strict in that and say exactly what the forces on the flight controls must be to be certified and due to the bigger engines there were some phases or some angles of attack at which these certification criteria were no longer met and so it was decided to introduce a small piece of software which would just introduce a small trim movement to bring it in line with certification criteria again. And one of the reasons this was done was probably so the aircraft could retain the same type certificate as was mentioned in the introduction so pilots can change within one airline between the aircraft between the 737NG and the 737 MAX they have the same type certificate there's a very brief differences training but they can switch even in line operations between the aircraft from day to day. And another reason no other changes were made. 
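The recorder pattern described above, roughly ten-second automatic nose-down runs answered by shorter and shorter nose-up inputs from the crew, can be made concrete with a toy reconstruction of the resulting stabilizer position. The trim rate and burst lengths below are invented for illustration and are not taken from the actual flight data.

```python
# Toy reconstruction of stabilizer trim position from alternating
# automatic nose-down bursts and pilot nose-up inputs.
TRIM_RATE = 0.27   # degrees of stabilizer movement per second (illustrative)

# (duration_s, direction): -1 = automatic nose-down, +1 = pilot nose-up.
# The pilot responses get shorter over time, as in the accident traces.
events = [(9, -1), (9, +1), (9, -1), (8, +1), (9, -1), (6, +1),
          (9, -1), (4, +1), (9, -1), (2, +1), (9, -1), (1, +1)]

position = 0.0   # 0 = roughly in trim; negative = nose-down
history = []
for duration, direction in events:
    position += direction * duration * TRIM_RATE
    history.append(round(position, 2))

print(history)
# Once the nose-up inputs no longer cancel the full automatic burst, the
# running position ratchets further and further nose-down, which is what
# drives the required pull forces up.
```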
Boeing could for example have made a longer main landing gear to create additional ground clearance to move the engines in a more traditional position that would have probably made it more aerodynamically in line with certification criteria. I hesitate to say the word to make it more stable because even as it is the Boeing 737 MAX is not inherently aerodynamically unstable. If all these electronic gimmicks fail it will just fly like an airplane and it is probably in the normal flight envelope easily controllable. But to make big mechanical changes would have delayed the project a lot and would have required recertification and what instead could be done with the airframe essentially the same the certification could be what is known as grandfathered. So it doesn't need to fulfill all the current criteria of certification because the aircraft has been certified and has been proven in service and so only some of the modifications need to be recertified which is much easier and much cheaper and much quicker. So this is one of the certification criteria that must be fulfilled. It's even though I have removed some of the additional stuff that doesn't really add anything useful it's still rather complicated. It's a procedure that you have to do where you slow down one knot per second and the stick forces need to increase with every knot of speed that you lose and things like that. It says it's stick force versus speed curve may not be less than one pound for each six knots. It's quite interesting if you look at the European certification criteria is that they took this exact paragraph and just translated the US units into metric units but really calculated the new value. So the European certification have now very strange values like I don't know 11.79 kilometers per hour per second or something like that. It's really strange. So you can see where it comes from but they said we can't have knots even though the entire world except Russia and China basically flies in knots even Western Europe. But the criteria in the certification specification need to be in kilometers per hour. Well I would have thought that you would even if you do the conversion you would use meters per second but it used kilometers per hour for whatever reason. So due to the aerodynamic changes that were made the max did not quite fulfill the criteria to the letter so something had to be done and as I said mechanical redesign was out of the question because it would have taken too long would have been too expensive and maybe would have broken the type certificate commonality. So they introduced just this little additional software in a computer that also existed already and so it measures angle of attack it measures airspeed and a few other parameters flap configuration for example and then it applies nose down pitch trim as it sees fit. But it has a rather interesting design from a software engineering point of view. Can you read that? Is that there are flight control computers and one part of this flight control computer one additional piece of software is called the MCAS the maneuvering characteristics augmentation system and the flight control computer actually gets input from both angle of attack sensors. 
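The stick-force requirement quoted above boils down to a simple gradient condition: the pull force has to grow by at least one pound for every six knots of speed lost (roughly 4.45 N per 11.1 km/h). Here is a small sketch of checking sample points against that gradient; the sample curves are made up and do not represent any real flight-test data.

```python
LB_PER_KT = 1.0 / 6.0   # minimum required gradient: 1 lbf per 6 kt

def gradient_ok(points):
    """points: list of (speed_kt, pull_force_lbf), sorted by decreasing speed.
    Returns True if, between consecutive points, the pull force increases
    by at least 1 lbf per 6 kt of speed lost."""
    ok = True
    for (v1, f1), (v2, f2) in zip(points, points[1:]):
        dv = v1 - v2              # knots of speed lost
        df = f2 - f1              # additional pull force required
        if df < LB_PER_KT * dv:
            ok = False
    return ok

# Made-up sample curves:
compliant     = [(250, 0.0), (240, 2.0), (230, 4.0), (220, 6.5)]
non_compliant = [(250, 0.0), (240, 2.0), (230, 2.5), (220, 2.6)]
print(gradient_ok(compliant), gradient_ok(non_compliant))   # True False
```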
It has two one on each side for redundancy but the MCAS algorithm only uses one of them at least in the old version in the new version it will probably use both if it ever gets recertificated and then if that angle of attack sensor senses a value that is too high then it introduces nose down trim and it may switch between flights between the left and the right sensor but at any given time for any given flight it only ever uses one. So what could possibly go wrong and here we can see what went wrong it's the same graph as before and I may direct your attention to this red line that says angle of attack indicated left and the green line which says angle of attack indicated right so that is the data that the computer got from the angle of attack sensors both are recorded in the data recorder but only one is evaluated by the MCAS and you can see here's the scale on the right you can see that one is indicating relatively normally around zero a bit above zero which is to be expected during takeoff and climb and the red value is about 20 degrees higher and of course that is above the threshold at which the MCAS activates so it activates right and apparently in the old version of the software there were no sanity checks no cross checks with other air data values like airspeed and altitude or other things and it would be relatively easy to do not quite trivial you have to get it right in these kinds of things which influence flight controls but nothing too fancy but apparently that was also not done so the MCAS became active so how could it happen and it's still to me a bit of a mystery how it could actually get so far that it could be certified with this kind of system and the severity of each failure the possible consequences have to be evaluated and the certification criteria specify five severities catastrophic hazardous major minor and no safety effect and that doesn't have to be analyzed any further but for catastrophic failures you have to do a very very complex risk assessment and see what you can do and what needs to be done to bring it in line to make it either mitigate the consequences or make it so extremely improbable that it is not to happen so here are the probabilities with which the certification criteria deal and it's different orders of magnitude there are usually two orders of magnitude between them it's from a probability of one times ten to the minus five per hour to one times ten to the minus nine per operating hour and this is the risk metrics many of you are probably familiar with those and it basically says if something is major then it may not happen with a probability of probable and if it's catastrophic the only probability that is allowed for that is extremely improbable which is less than once in a billion flight hours right and to put that into perspective the fleets with the most flight hours to date I think are in the low hundreds of millions of flight hours combined so we're still even for the 737 or the A320 we're still quite far away from a billion flight hours so you might have expected perhaps one of these events because statistical distribution being what it is the one event might happen of course but certainly not two in less than two years and quite obviously the severity of these failures was catastrophic I think there's no discussion about that and here's the relevant part actually about flight controls and the certification criteria which was clearly violated it says the airplane must be shown to be capable of continued safe flight for any single failure 
without further qualification any single system that can break must not make the plane unflyable or any combination of failures not shown to be extremely improbable and extremely improbable is these ten to the minus nine per hour and this hazard assessment must be performed for all systems of course and severity must be assigned to all these and the unintended amcus activation was classified as major and let's briefly look at that what's major a reduction in capability maybe some injuries major damage so nothing you can just drug off but certainly not an accident with hundreds of dead so and therefore there are some regulations would say which kind of kinds of specific analysis you have to do for the various categories on for major no big failure modes and effects analysis FMA was required and these are all findings from the Indonesian investigation board and they're all in the report that is publicly downloadable in the final version of the slides I'll probably put some of the sources and links in there so you can read it for yourselves it's quite eye-opening so only a very small failure and failure analysis was made comparatively small it probably took a few man hours but not as extensive as it should have been for the event had it been correctly classified as catastrophic and some of these things that could happen were not at all considered such as large stabilizer deflection so continued trim movement in the same direction or a repeated activation of the MCAS system because apparently the only design of the MCAS system that the FAA saw was limited to a 0.6 degree deflection at high speeds and to one single activation only and that was changed and it is still unclear how that could happen it was changed to multiple activations even at high speed and each activation could move the stabilizer as much as almost 2.5 degrees and there was no limit to how often it could activate and what was also not considered was the effect of the flight characteristics caused by large movements of the stabilizer or movement of the stabilizer to the limit of the MCAS authority the MCAS doesn't have authority to move the stabilizer all the way to the mechanical stop but only a bit short of that much more than the manual electric trim is capable of trimming the airplane on the aircraft you can always trim back with the manual electric trim switches on the yoke but you cannot trim it nose down as far as MCAS can so that's quite interesting that wasn't that that was not considered what was also not considered at least it wasn't in the report apparently that that the Indonesian agency had seen was that flight crew workload increases dramatically if you have to pull on the yoke continuously with about let's say 4 equivalent to 40 kilograms or 50 kilograms continuously otherwise if you let go you're going to go into a very steep nose dive and at the short at the low altitude that they were they would not have been able to recover the aircraft and in fact they weren't what was also not considered was an AOA sensor failure in the way that we have seen it in these two accidents although apparently they those had different causes the effect for the MCAS was the same that one of the sensors showed a value that was about 22.5 degrees too high and that was not considered in the analysis of the MCAS system so I hope that is readable that is a simplified state machine of the MCAS system and what we can see is that it can indeed activate repeatedly but only if the pilot uses the manual electric trim in between it will go 
into a dormant state if the pilot trims manually with the hand wheel or if the pilot doesn't use the trim at all it will go dormant after a single activation and stay that way until electric trim is used so that's the basic upshot of this of this state machine so when the pilot thinks he's doing something to counter the MCAS and he's actually making it worse but this isn't documented in any pilot documentation anywhere it will probably be in the next way if it's still working like that but so far it wasn't so Boeing was under a lot of pressure to try to sell a new more fuel efficient version of their 737 and so I can't say for sure how it was internally between the FAA and Boeing but it's not unreasonable to assume that they were under a lot of pressure from management to accelerate certification and possibly take shortcuts I can't make any accusations here but it looks that not all is well in the certification department between Boeing and the Federal Aviation Authority so originally the idea of course is the manufacturer builds the aircraft analyzes everything documents everything and the FAA checks all the documentation and maybe even looks at original data and maybe even looks at the physical pieces that are being made for the prototype and approves or rejects the documentation there is already a potential conflict that is not there in many in most other countries because they have separate agencies but the FAA has a dual mandate it is supposed to promote aviation to make it more efficient but also to ensure aviation safety and there may be conflicts of interests I think so here's what the certification has been up until not quite sure 10 15 years ago so the FAA the actual government agency the Admin Aviation Authority appoints a designated engineering representative the DER is employed and paid by Boeing but is accountable only to the FAA and the DER checks and documents everything that is being done there's usually more than one but for simplicity's sake let's say and the DER then reports the findings and all the documentation all the low-level engineering and analysis documentation that has been done to the FAA and the FAA signs off on that or asks questions and visits the company and looks at things that makes audits and everything like that and so that usually has been working more or less and has certainly improved the overall safety of airliners that have been built in in the last decades and this is the new version and so he's the person is now not called DER but is called AR the authorized representative is still employed and paid by Boeing that hasn't changed but is appointed by Boeing management and reports to Boeing management and the Boeing management compiles a report and sends that to the FAA and the FAA then signs off on the report they hopefully at least read it but they don't have all the low-level engineering details readily available and only rarely speak to the actual engineers so anyone seeing a problem here? Well you have to say that most aircraft that are being built have been built in the last years aren't really terrible right? The 787 is a new aircraft the 777 has been one of the safest aircraft around at least looking at the flight hours that it has accumulated so it's not all bad but there's potential for real really bad screw ups I guess. 
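The simplified state machine described a little earlier, where MCAS commands one nose-down run, goes dormant, and is re-armed specifically by the pilot's use of the electric trim switches, can be written down in a few lines. This is only a toy reading of the behaviour as presented in the talk, with an invented activation threshold and no timing, not Boeing's actual implementation.

```python
class McasSketch:
    """Toy model of the repeated-activation behaviour described in the talk."""

    def __init__(self, activation_threshold_deg=15.0):   # threshold is invented
        self.threshold = activation_threshold_deg
        self.dormant = False       # set after one nose-down run
        self.activations = 0

    def step(self, aoa_deg, pilot_electric_trim_used, pilot_manual_wheel_used):
        # Electric trim input from the pilot re-arms the system...
        if pilot_electric_trim_used:
            self.dormant = False
        # ...while trimming with the hand wheel (or not trimming at all)
        # leaves it dormant after a single activation.
        if pilot_manual_wheel_used:
            self.dormant = True
        if not self.dormant and aoa_deg > self.threshold:
            self.activations += 1
            self.dormant = True    # one nose-down run, then wait
            return "nose-down trim run"
        return "no action"

mcas = McasSketch()
# Faulty sensor stuck high; the pilot answers every run with electric trim,
# which is exactly what keeps re-arming the system.
for _ in range(4):
    print(mcas.step(aoa_deg=22.0, pilot_electric_trim_used=True,
                    pilot_manual_wheel_used=False))
print("activations:", mcas.activations)
```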
There's another factor maybe that I briefly mentioned is that the Boeing 737 even in its latest version is not computer controlled it's not fly by wire although it has some computers as we have seen that can move some control surfaces but mostly it's really it really looks like that I think that's an actual photo from a 737 has some corrosion on it so it's probably not a max and older version but it's basically the same which is also why the grandfathering certification still works so it's all cables and pulleys and even if both hydraulic systems fail so yes it is hydraulically assisted the flight controls but if both hydraulic systems fail with the combined forces of both pilots you can still fly it and you can still land it that usually works except when it doesn't and the cases where it doesn't work are when the aircraft is going very fast and has a very high stabilizer deflection and this is from a video some of you may have seen that it's from Mentor Pilot and he has actually tested that in a full flight simulator which represents realistic forces on all flight controls including the trim wheel. You can be in the center console under the thrust levers there are these two shiny black wheels and there are the trim wheels you can move them manually in all phases of flight to trim the aircraft if electric trim is not available. The normal trim system would not do this okay it would require manual trim to get it away from this that's fine oh fine trim it backward as you can. Now he's trying to trim it nose up again after he has manually trimmed it nose down because the normal electric trim system cannot trim it so far nose down they have to do it manually and now he's trying to trim it back nose up from a position which is known from the flight data recorder that it was in in the accident flights and is trying to trim it manually because some people said oh turn off the electric trim the electric trim system and trim it manually that'll always work and they're trying to do that and it has representative forces to the real aircraft. Oh my god. Okay uh what? Do you want to pass the red? And you can see that the pilot on the left the captain can't even help him in theory both could turn the crank at the same time they have a handle on both sides because he has to hold the control column with all his force so you can't let go he must hold it with both arms otherwise it would go into the nose dive immediately and this is the physical situation with which the pilots were confronted in the accident flights and he now says press the red button in the simulator so end the simulation because it's clear that they're going to crash. So there's another thing that came that came up after the accident and 737 pilots said oh it's just a runaway trim runaway stabilizer trim there's a procedure for that and just do the procedure and you'll be fine. Well runaway stabilizer trim is one of the emergency procedures that is trained at infinite um right that's something that every 737 pilot is aware of because there are some conditions under which the trim motor always gets electric current and doesn't stop running that just happens occasionally not very often but occasionally and every pilot is primed to recognize the symptoms saying oh this is runaway runaway stabilizer and you turn off the electric motors for the stabilizer trim and trim manually and that'll work. 
But if you look at what are the actual symptoms of runaway stabilizer it says uncommanded stabilizer trim movement occurs continuously and MCAS movement isn't continuously MCAS trim movement is more like the speed trim system which occurs intermittently and then stops and then trims again for a bit and then stops again so most pilots wouldn't recognize this as a runaway trim because the symptoms are very different the circumstances are different so I guess some pilots might have recognized that there's something going on with the trim that is not right and will have turned it off but some didn't even though they know what they all know about runaway stabilizer. And yeah that's the second file that I have. So that's the sound the stick shaker makes on a Boeing 737 and now imagine flying with that sound all the while shaking the control column violently flying with that going on for an hour and that's what the crew on the previous flight did. They flew the entire flight of about an hour with the stick shaker going. I mean that's quite interesting because the stick shaker says your wing is about to stall. But on the other hand they knew they were flying level they were flying fast enough everything was fine the aircraft wasn't about to stall because it was going fast and right. So from an aerodynamic perspective of course they could fly the airplane because they knew it was nowhere near a stall but still I think in most countries and most airlines they would have just turned around and landed again and saying the aircraft is broken please fix it something is wrong. Yeah so the stick shaker is activated by the angle of attack rain on each side and but the sticks are mechanically coupled so both of them will shake with activation from either side. So is it going to fly again? It's still somewhat of an open question but I suspect that it will because it's hard to imagine that letting these 460 airplanes or something like that that have been built sometimes sitting around on employee parking lots like here just letting them be scrapped or whatever I don't know almost 5000 have been ordered as I said neither airlines nor Boeing will be happy but it's not quite clear it's not yet being certified again so it's still un-airworthy. So there's another little thing certification issues with new Boeing aircraft reminded me of this have you ever seen that? So battery exhaust which aircraft has a battery exhaust I mean what do you do with that? Does anybody know? Yeah of course some know yeah. Boeing 787 Dreamliner less than two years after introduction or after entering the service actually had two major battery fires. They have two big lithium ion batteries lithium cobalt I think not sure the one that burns the brightest really because they wanted the energy density really and that wasn't available in other packages if they had used nickel cadmium batteries instead they would have been like 40 kilograms heavier for two batteries that's almost a passenger. 
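On the runaway-stabilizer point above: the checklist's trigger is continuous uncommanded movement, while the MCAS inputs came in bursts with pauses in between, which is why the pattern did not obviously match. A tiny classifier over intervals of uncommanded trim movement makes that distinction concrete; the thresholds and sample data are arbitrary illustration values.

```python
def classify_uncommanded_trim(bursts, long_run_s=20.0):
    """bursts: list of (start_s, end_s) intervals of uncommanded trim movement.

    Returns 'continuous runaway' if any single movement runs on for a long
    time, 'intermittent' if the movement keeps stopping and restarting,
    and 'isolated event' otherwise. Thresholds are illustrative only."""
    durations = [end - start for start, end in bursts]
    if any(d >= long_run_s for d in durations):
        return "continuous runaway"
    if len(bursts) >= 3:
        return "intermittent (MCAS-like)"
    return "isolated event"

classic_runaway = [(100.0, 140.0)]                      # one 40 s run
mcas_pattern    = [(100, 109), (120, 129), (141, 150)]  # ~10 s bursts, pauses
print(classify_uncommanded_trim(classic_runaway))   # continuous runaway
print(classify_uncommanded_trim(mcas_pattern))      # intermittent (MCAS-like)
```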
So yeah they were on board fires and if you ask pilots what's your worst fear of something happening in flight they'll say flight control failure and fire so you don't want to have a fire in the air absolutely not and one of the fires was actually in flight with passengers on board one was on the ground shortly after disembarking and the lithium ion batteries because they are unusual and novel features as it's called have special certification conditions because they are not covered by the original certification criteria and it says here safe cell temperatures and pressures must be maintained during any foreseeable condition and during any failure of the charging system not shown to be extremely remote and extremely remote is actually two orders of magnitude more frequent than extremely improbable extremely remote is only less than once every 10 million flight hours but I think the combined flight hours for the 787 at that time were not quite sure maybe a few hundred thousand at most so and also happened two times that was not really not really fun and then it says no explosive attacks toxic gases emitted as the result of any failure may accumulate in hazardous quantities within the airplane I think they've neatly solved the third point by putting the battery in a stainless steel box really thick walls maybe I don't know eight millimeters or something like that and piping them to this hole in the bottom of the aircraft so the gases cannot accumulate in the aircraft obviously so yes and with that I'm at the end of my talk and there's no I think quite some time for questions thank you extremely punctual I have to say thank you for this interesting talk we do have the opportunity for quite some questions and healthy discussion please come to the microphones that we have distributed through the hall and while you queue up behind them do we have a question from the internet already do you signal angel is your microphone working no yes yes do you think extensive software tests could have solved the situation software tests in this case perhaps yes although software tests are really a problematic thing because to test software to these extreme reliability is required you really have to test them for a very very very very long time indeed so to achieve some confidence say of 99% that a failure will not occur in say 10 million hours you'll have to test it for 45 million hours really and you have to test it with the exact conditions that will occur in flight and apparently nobody thought of an angle of attack failure angle of attack sensor failure so maybe testing wouldn't have done a lot in this case thank you microphone number four yes thank you for the talk I have a question concerning grounding so what is your view that the FAA waited so long until they finally ground the aircraft week after I think the Chinese started with grounding yes it's a good point and I think it's an absolute disgrace that they waited so long even after the first crash they made an internal study and it was reported in the news some some weeks ago and estimated that during the lifetime of the 737 max probably around 15 aircraft would crash so I say every two to three years one of them would crash and they still didn't ground it and waited until four days after the second accident yes it's a shame really thank you microphone number seven please thank you for your talk I have a question regarding the design decision to only use one a way sensor so I've read that Boeing used the AMCA system before on a military aircraft and 
that used both sensors so why was the decision made to downgrade yeah that's a good question I'm not aware of that military system if that was really exactly the same but if that's the case yes that makes it even stranger that they chose to use only one in this case yes thank you okay microphone number two please um yeah thank you for your talk um so how do you actually test these requirements in practice so how you determine in practice if something is likely to fail every 10 to the minus nine as opposed to every 10 to the minus eight no that's that's obviously practically completely impossible why you can't as I said if you want to have a reasonable confidence that it's really the error rate is really so low you'd have to test it for four and a half billion hours in operation which is just impossible what in state is done there are some industry standards for aviation that is DO 178 currently in revision C and that says if you have software that if it fails may have consequences of this severity then you have to use these very strict very formal methods for developing the software like doing very strict and formal requirements analysis specification in a formal language preferably and if possible and some some companies actually do that formally prove your source code correct and in some languages that can be done but it's it's very it's it's a lot of effort and that's how this should be done and this software obviously should have been developed to the highest level according to DO 178 which is level A and quite obviously it wasn't thank you signal angel please the next question from the internet um your talk focused mostly on NCAS but someone noted that the plane was actually designed for engines below the wings and already and the ng model so the one before already had problems with the wing mounts and engine mounts do you think there will be mechanical problems with the max two I'm not sure there were really mechanical problems there were aerodynamic problems and apparently well I'm sure they have tested the ng to the same standards to the same certification standards because obviously there were aerodynamic changes even with the ng and the ng apparently still fulfilled the formal criteria of the certification there are some acceptable means of compliance and quite specific descriptions how you test these stick forces versus air speed and as far as I know the ng just fulfilled them and the max just didn't so for the max something was was required although even the classic which basically had the same engines as the ng even the classic had some problems there and that's where the speed trim system was introduced and yeah so it has a similar system and actually the m-class is just another little algorithm in the computer that also does the speed trim system please stay seated and buckle it up until we are reached our parking position uh no um we are still in the q and a phase please stay seated and please be quiet so we can enjoy all of this talk and uh if you have to have to leave then be super quiet right now it's way too loud in here uh please the next question from microphone number one so considering lessons learned from this accident has the fAA already changed the certification process or are they about to change it or what about other agencies worldwide the fAA is probably going to move very slow and I'm not aware of any specific changes yet but I haven't looked into into too much detail in that other certification agencies works work somewhat different and at least the arsa in 
europe and the chinese authorities have already indicated that in this case they are not going to follow the fAA certification but going to do their own and until now it was usually the case that if the fAA certified the airplane everybody else in the world just took that certification and said what the fAA did is probably fine and vice versa when the arsa certified a Boeing airplane then the fAA would also certified and that is probably changing now thank you microphone number three so uh hi uh thank you for this talk two questions please were you part of the official of the investigation or this your own analysis of the uh facts and the other one i heard something about the software being outsourced to india can you comment on that please the first one now this is my own private analysis i have been doing um accident analysis for a living for a while but not for any official agency but always for for private customers um and uh about outsourcing to india i'm not quite sure about that i've read something like that um what i've read is that it was um produced by honeywell i think maybe wrong about that but i think it was honeywell and um who the actual programmers were sitting if it's done properly according to the methodologies prescribed by d0178 and fulfilling all those requirements then where the programmers sit is actually not that important and um i don't want to write indian programmers and i think um if it's done according to specification and analyzed with static code analyzes and everything else um vis-a-vis the specification then that would also be fine i guess but the problem is not so much really in the implementation but in the design of the system in the architecture thank you microphone number five please um hello um i may got your presentation wrong but for me the real root cause of the problem is the competition and uh hide that line from the management uh so the question for you is is there any um suggestions from you that process could be i don't know maybe changed in order to um in order to um avoid the uh the box in the um in the software and have the mission critical systems saved yeah so we don't normally just talk about the cause or the root cause but there are always several causes basically you can say depending on where you stop with the graph where is it uh where you stop with the graph all the leaves all the leaves and the graph are root causes and but i've stopped relatively early and not not done not gone into any more detail on that but yeah the the competition between abbas and boeing obviously was a big factor in this and um i don't suppose you you suggest that we abolish competition in the market but what needs to be changed i think is the way certification is done and that requires the faa reasserting its authority much more and that will probably require a lot more personnel with the good engineering background and um maybe that would require the faa paying better wages so i don't know because currently probably all the good engineers will go to boeing instead of the faa but the faa daily needs engineering expertise and lots of it thank you the next question we hear from microphone number four hi thank you for the talk um i've heard that there is or i've heard i've read that there's a version of the 737 max 8 that did allow for a third a oa sensor to be present that served as a backup for either sensors but that this was a paid option and i have not found confirmation of this do you know anything about this no i'm not aware of that as a as a as a paid 
option um there was something about an optional feature that was called a safety feature but i can't exactly remember what that was maybe it was an angle of attack indicator in the cockpit that is available as an option i think for the 737 for for most models because the sensor is there anyway um as for a third a oa sensor um i'd be surprised if that was an option because that is a major change and requires a major change to all the system layout then you'd need an additional a data inertial reference unit which is a big computer box in the aircraft of which there are only two and that would have taken a long long time in addition to develop so i'm skeptical about that third angle of attack sensor at least i've not heard of it thank you signal and do we have more from the internet please one quick one um if we need a quick one would you ever fly with a 737 max again if it was ever cleared again i was expecting that question and actually i don't have an answer yet for that and that maybe would depend on on how i see the f a and the a are doing the certification um i've seen some people saying that this 737 max should never be re-certified i think that it will be and um i look at it in some detail seeing how the f a develops and how the ariza is handling it and then maybe yes great okay in that case we would take one more very short question from microphone number five do you know why the important a oa sensor failed to give the correct values there are some theories about that but i but i haven't investigated that in any more detail now there were some stories that in the case of the indonesian the line air that it was actually mounted or reassembled incorrectly um that would explain why there was a constant offset uh it may also have been somebody calculated that it was actually exactly if you look at the raw data that is being delivered on the bus there was exactly one flipped bit which is also a possibility but i i don't really know but there were some implications in the report maybe have to read that section again from the indonesian authorities about um substandard maintenance as it's euphemistically called okay we have two more minutes so i will take another question from microphone number one hey i would have expected that modern aircraft would have some a plug physical plug hermetic one that will disconnect any automated system isn't something that exists in our plans today now and especially modern aircraft can't just disconnect the automatics because if you look at modern flyby wire aircraft there is no connection between the flight controls and the control surfaces there's only a computer and the flight controls that the pilots handle are only inputs to the computer and there's no direct connection that is true for every airbus since the a320 for every Boeing since the 777 so the 777 and the 787 are totally 100 flyby wire well i think 95 percent because there's one control service that is directly connected one spoiler on each side but basically there's there's no way and so you have to make sure that the flight control software is developed to the highest possible standards because you can't turn it off because that's everything that's well let me put it this way on the flyby wire aircraft only the computer can control the flight cell the flight control surfaces yeah so yeah so just hope that it's good think about that when you next enter a plane and also please give a big round of applause for our speaker bernsiecker thank you you
|
Everybody knows about the Boeing 737 MAX crashes and the type's continued grounding. I will try to give some technical background information on the causes of the crash, technical, sociological and organisational, covering pilot proficiency, botched maintenance, system design and risk assessment, as well as a deeply flawed certification processes. On the surface of it, the accidents to two aircraft of the same type (Boeing 737 MAX), which eventually led to the suspension of airworthiness of the type, was caused by faulty data from one of the angle-of-attack sensors. This in turn led to automatic nose-down trim movements, which could not be countered effectively by the flight crew. Eventually, in both cases, the aircraft became uncontrollable and entered a steep accelerated dive into terrain, killing all people on board on impact. In the course of the investigation, a new type of flight assistance system known as the Maneuvering Characteristics Augmentation System (MCAS) came to light. It was intended to bring the flight characteristics of the latest (and fourth) generation of Boeing's best-selling 737 airliner, the "MAX", in line with certification criteria. The issue that the system was designed to address was relatively mild. A little software routine was added to an existing computer to add nose-down trim in situations of higher angles of attack, to counteract the nose-up aerodynamic moment of the new, much larger, and forward-mounted engine nacelles. Apparently the risk assessment for this system was not commensurate with its possible effects on aircraft behaviour and subsequently a very odd (to a safety engineer's eyes) system design was chosen, using a single non-redundant sensor input to initiate movement of the horizontal stabiliser, the largest and most powerful flight control surface. At extreme deflections, the effects of this flight control surface cannot be overcome by the primary flight controls (elevators) or the manual actuation of the trim system. In consequence, the aircraft enters an accelerated nose-down dive, which further increases the control forces required to overcome its effects. Finally I will take a look at certification processes where a large part of the work and evaluation is not performed by an independent authority (FAA, EASA, ...) but by the manufacturer, and in many cases is then simply signed off by the certification authority. In a deviation from common practice in the past, EASA has announced that it may not follow the FAA (re-) certification, but will require additional analyses and evidence. China, which was the first country to ground the "MAX", will also not simply adopt the FAA paperwork.
|
10.5446/53251 (DOI)
|
Hi, I'm Eric Wostel, and this is Advent of Code Behind the Scenes. Who let this guy on stage anyway? Well, I used to make large-scale web applications for ISPs, and I used to work on auction infrastructure. Now I do architecture for a trading card game marketplace. I also run some programming challenges like Advent of Code, which is probably why you're here. I make tools for games like Eve, League of Legends, Minecraft, World of Warcraft, some other things. I make fun of programming languages like PHP and JavaScript, and make lots of other random things that you probably haven't heard of, but we're not here for any of that. We're here for Advent of Code. What's Advent of Code? Let's back up a little. Suppose you are here. Ah, sorry, that's small. Let me zoom in. Great. Suppose you are here, and all you have is a pen, a few napkins, a few weeks until Christmas — it's Halloween time — a random memory of Advent calendars, and a passion for programming puzzles and helping people learn to become better programmers. The answer? Advent of Code. A combination of Advent calendars and programming puzzles. So what are Advent calendars? So depending on your childhood, where you grew up, you may have encountered these things where there's little doors in this container, and you open up a door every day, starting usually early December, beginning of Advent, sometimes December 1st, it depends. And inside of each of one of these doors is a piece of candy or a toy or something like that, and you open them up and you're counting down the days until Christmas, and so every day you get some small thing that's fun. This is the one that I had growing up. Every day you take a little ornament out of the pouch, then put it on the tree and count down the days until Christmas. And so I made an Advent calendar that looks like this, or this, or this. But where are the puzzles? Well, if you click on one of these numbers, you'll discover a puzzle inside, and every day contains a puzzle. And if you scroll down to the bottom, you can get an input file for some description of the puzzle, it talks about elves and doing some stuff, and you get an input, and it has something that you're supposed to do with this file, and eventually you'll get an answer, and you put it in this box. And when you do that, it goes on to part two, which has a twist or something, we'll get into that in a bit. And part two goes into some variant of the puzzle, or some harder version, or goes on to some more interesting idea or some other concept, and eventually you fill in that answer and you get two stars for the day. And as you finish all of these puzzles, the calendar fills up, and you get some kind of cool picture. So suppose it's 2015, and you just built a bunch of programming puzzles, where should we host this thing? Well, let's see, let's do some capacity planning. Programming puzzles probably won't be that popular. I've made some other systems, programs, applications, toys, websites in the past, you know, my friends used them a little bit, not a huge deal, probably won't be that popular, they're kind of a visitary. Okay. I have a few friends that might like it though, and they have a few friends that might like it. Maybe 50 people? Ah, but we're good systems engineers. We're good at capacity planning. Let's give us a wide margin to make sure that we can accommodate everybody. 70. That'll be enough. A small personal web server should cover it. Okay. 
So now it's November 30th, 2015, it's almost December 1st, it's time to launch the site, it's time to announce it. So you post on Twitter, and you tell everybody that you made this website about some kind of an advent calendar, it's coming out soon, and 27 people retweet it. Great, that's well within our margin of how many users we're expecting, should be no problem. You look at the signup graph, leading up until midnight, December 1st, 2015, and look at that, about 81 people signed up, we estimated about 70 people, that's still within tolerances. We can absolutely handle that, there's no problem at all. But then, the puzzle unlocked. And the graph did something that the technical term for this is, oh no! 12 hours later, the graph looks like this. Let me zoom out a little bit for you. There, that's better. So that's 12 hours into the first day. That's the estimate we had, 70 people, that's 4,000. Okay, 70 people, we hoped for 4,000 people, we actually got, that's an error of 5,600%. The small server is very sad. Okay, you're bad at traffic estimation. Now what? Two steps. The first thing you should do, if you have a website that suddenly gets a ton of users and you weren't expecting it, turn off the Minecraft server. That freed up a significant amount of resources and gave us the breathing room to implement the real solution, which was, don't create a new process to handle every request. Now what, why would I be doing that? Okay, so there's a very simple way to host very simple applications called CGI, stands for Common Gateway Interface. You create a process, the web server can talk to this process, the process can manage the stuff for the request. It gets some inputs, it gets via environment variables and standard in, it gets the request, via standard out and standard error, it processes the request and gives a response and can put some things in the logs. These end up being the headers, the request body, the HTTP response, the log messages. I can also talk to databases, files, whatever, once in the meantime. The problem is that it does this for every single request you get. If you have 70 users and you're making a new process for every request they're sending, that's no problem at all and it makes prototyping and developing really, really fast and simple. If you have 4,000 users, do not use CGI. An emergency switch to fast CGI where it keeps the same process alive that serve as many requests, manage to save the event. The server would have absolutely fallen over. So let's continue, 24 hours since unlock now. The graph sort of looks like this. That's our estimate of 70 users, that's 9,000. So hold on a second, where did all these people come from? We just launched this thing to a couple of our friends. We posted it to Twitter where we had no followers. Who are all these people and why are they here? Well, can you see the error in my reasoning? It turns out these two lines I've highlighted are recursive because if they have a few friends that might like it, those people might have a few friends that like it and then those people might have a few friends that like it. And this can end up being a serious problem if you suddenly find that word of mouth has caused your applications, whatever it is, to spread around the world and all of a sudden you have people in every country talking about this thing that you made that you were expecting only a couple of your friends to do and suddenly everybody on Twitter is posting about it. This was unexpected. 
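To make the emergency CGI-to-FastCGI switch described above concrete, here is a minimal sketch in Perl (the language the site itself is written in). This is not Advent of Code's actual code; it only contrasts the two models. A plain CGI script is started as a brand-new process for every single request:

    #!/usr/bin/perl
    # CGI model: the web server starts a fresh perl process per request.
    use strict;
    use warnings;

    print "Content-Type: text/plain\r\n\r\n";
    print "Hello, visitor from $ENV{REMOTE_ADDR}\n";

And the persistent FastCGI counterpart, using the CPAN FCGI module, keeps one process alive and serves many requests in a loop:

    #!/usr/bin/perl
    # FastCGI model: one long-lived process accepts many requests.
    use strict;
    use warnings;
    use FCGI;    # CPAN module speaking the FastCGI protocol

    my $request = FCGI::Request();
    my $served  = 0;

    while ( $request->Accept() >= 0 ) {    # blocks until the next request arrives
        $served++;
        print "Content-Type: text/plain\r\n\r\n";
        print "Hello again; this process has already served $served requests.\n";
    }

Under load, the difference is simply that the setup cost (interpreter start, module loading) is paid once instead of on every request.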
48 hours since unlock, graph looks like this. There are 70 users, that's 15,000. Well outside of our tolerances. And by the end of December, we'll fast forward a little bit, 52,000 people ended up doing the puzzles on the site, which is mind-boggling. It's 2015. Shortly after that, we switched to AWS, which solved all of our scaling problems and has been completely excellent ever since. Today we've had 730,000 people, 360,000 people have at least one star, so they've solved at least one puzzle, which is mind-boggling. We're still growing, we're trending toward a million, which will be an incredible moment, I don't even know. There were also recently, with all of this scaling, some interesting problems in 2020. We had a pretty good grasp of the number of users that we would get based on the amount of traffic we were seeing in November, and the amount of traffic that we were getting and the number of users that were accessing the site. We can extrapolate roughly from how many users are hitting the site in November versus how much we're expecting to see in December. Based on a bunch of estimates for database CPU utilization, web server CPU utilization, traffic throughput, things like that, we figured it would be totally fine. Turns out, we did not account for the amount of memory that individual worker processes take up, and we had no sane limit on the number of worker processes, and so all of a sudden the number of users that 2020 entailed due to presumably people being stuck at home and wanting to work on something interesting, work on programming puzzles or something, all of a sudden, the servers fell over because there were so many workers created that they saturated all of the memory and all of the machines, which caused all of the virtual machines to just completely fall over, and a hard, forced stop from AWS wouldn't handle the problem. We handled that within a few minutes, but it screwed up the day one unlock. There was also another issue that was kind of happening intermittently to a couple of users, but it was still a really interesting thing to track down. Turns out that elastic scaling things like most AWS services, like AWS elastic load balancers, which are excellent, this is not at all a knock on AWS, they've been very, very, very good. Turns out elastic things need time to scale, right? Even just seconds. But ELBs in AWS elastic load balancers will scale up to the level of traffic that they're seeing within seconds or minutes. They're very, very fast. The problem is that AWS traffic goes like this. One second before midnight, there are no users hitting the site. Midnight, all of the users are hitting the site. Midnight and one second, no users are hitting the site again. There is no time to scale. You just get all the traffic and if you miss it, it's gone. So we had to work with AWS who were excellent and jumped in and figured out our problem and managed to handle things immediately to basically scale up our load balancers ahead of time and keep them scaled during the event so that we can handle the traffic without having to wait for the load balancers to figure it out. In a normal application, you get a couple users and the next minute you get a couple more and it ramps up over the course of the day and then it goes back down at night and it's a really good model for just like elastically following your traffic. Advent of code does not do that, it has a single spike and that's it at midnight. 
I mean during the day it does, but the midnight spike is the one that was falling over because none of the load balancers knew what was coming. So we pre-scaled the load balancers and now we're figuring out other solutions and it shouldn't be a problem next year. Hopefully don't quote me on that. But that was a really interesting one. There's posts in the subreddit. If you look for postmortem stuff in the subreddit recently, we talk about all of the different things we did in the research of how we got there. So why do people do Advent of code? Why is this thing? So the goal is to give people a bunch of practice problems to work on, a bunch of interesting little nuggets of mini projects to work on. The original intent was if you're like somebody who's learning to program and you read the documentation for like Python string reversal and you're like, okay, this is how you reverse a string in Python or whatever it is, right? You'll never retain that because there's nothing you're applying it to. There's nothing you're tying it to. It's just like a feature that you now know the language has somewhere maybe. But if the problem says like, oh, there are some elves in Santa needs help fixing this database and it can only be uncorrupted if you reverse the strings between this point and this point and then it'll be fixed, it seems like people retain that better when they're learning a language and when they're researching things in like language features or documentation, when they're doing it with a purpose, they tend to absorb it better. So the original goal was just like practice problems or fun things. But people also use it for interview preparation. Admin of code puzzles tend to be very similar to some of the kinds of interview questions that people get like whiteboard problems and that sort of thing, which is very good. It's become very popular for like company training, like engineering organizations will pick a couple of puzzles throughout the year or some group of people will do like the what one month every night they'll do admin of code puzzles and then during the day we'll talk about them or something and just like employee enrichment, those kinds of things. There is a ridiculous group of users who every night at midnight do it as a speed contest, which is completely, completely nuts. The people that do the right of unlock who can solve it first contest are people that basically do competitive programming year round nonstop, cut every corner, take every shortcut, none of their code is clean, they're not thinking about architecture, they just want the answer and they get it very quickly. And it's incredible to watch them work, but it is a different thing than what I'm used to in software engineering. Or just challenging friends with harder versions of puzzles. This is a really popular thing to do in the subreddit where people will see a puzzle and they'll say, oh, this would have been way harder if the input were larger, if the input had some property or whatever, and they'll create these versions of puzzles that have these properties that make them really interesting and post them and all of a sudden everybody has this like part three version of the puzzle that was just way, way harder that I didn't want to include on the side or that I didn't even realize existed that all of a sudden everybody gets to play with and try out. So just like stretching your knowledge and stretching your understanding. 
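As a purely invented illustration of the string-reversal micro-task mentioned above (none of this comes from a real puzzle), the point is only how small such an exercise is, and that it still gives a language feature something to stick to:

    #!/usr/bin/perl
    # Hypothetical mini-puzzle: the "database" is fixed once every line between
    # the START and END markers has been reversed.
    use strict;
    use warnings;

    my $inside = 0;
    while ( my $line = <DATA> ) {
        chomp $line;
        if    ( $line eq 'START' ) { $inside = 1; next }
        elsif ( $line eq 'END' )   { $inside = 0; next }
        my $out = $inside ? scalar reverse $line : $line;
        print "$out\n";
    }

    __DATA__
    keep-me
    START
    sevle
    atnaS
    END
    keep-me-too

Running it prints keep-me, elves, Santa and keep-me-too: a few lines, one new idiom (reverse in scalar context), and a tiny story to hang it on.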
Also learning new languages, practicing existing languages, stuff like that, lots of reasons. So who are all these people doing have a bunch of code? Well, all around the world. This is output from Google Analytics is like map view thing. It's basically just like if there are people in the country like with internet and doing software engineering, they show up on the map. Here's the map of the United States. Here's when it unlocks in those places, by the way. New York where I am right now it unlocks at midnight. California locks at 2100, 9pm. These are not bad times for people typically. Tonight can be hard for some people 2100 can get in the way sometimes, but Europe has it a little bit harder. In Europe, things are unlocking 5am, 6am, which for some people is a big problem and some people just incorporated into their morning routine. Here's somebody doing it in the shower on their way to get ready for work. Some people say waking up at 5am every day is not easy. Some people say nothing else can wake me up as reliably as I've been of code did this month. I hear both, it's fine. People of all languages and all backgrounds do it. This is actually a pretty old picture. These numbers are much larger now. Any language you can think of, people are doing Advent of Code and it's just because it's a good way to learn new languages or people will pick a language that's either an esoteric language or one that they've designed or one that they're working on at work, like some embedded language they're working on at work, and use it as like stress test almost, like a suite of requirements and feature demands to see if the language can even tolerate those kinds of things. Lots of languages. People have solved the puzzles directly in hardware. That's an FPGA. It's like a programmable chip where you can tell the chip to be whatever chip you want and then implement a program in hardware and then run the program to get the result. Very, very cool. People from companies all around the world. Here's a picture of Facebook London's offices where they were posting their solutions on the glass wall on the other side of the stairs going up. Farewell someplace so people could see them as they went up. That was a lot of fun. Or in universities, people will post their solutions on like this is the student union of a, I forget what university this is, somewhere in Poland I think maybe, where as people do their solutions, they print them up here and then people can walk past and see all the cool code that they've written and solutions that they've done. Non-programmers, people will just print out parts of the puzzle or do it on graph paper or work on it by hand or something like that. You don't necessarily need to be a programmer at all as long as you have problem solving skills you can approach these problems any way you like. Excel. Excel is a popular one. Difficult, but certainly doable. A few different people have done them in Excel in different approaches which is a ton of fun to see. Literally just Gantt chart software. One of the puzzles was a topological sort. If you just toss this into a Gantt chart it'll produce the answer for you which is really funny. Minecraft, lots of things like that. So let's talk about puzzle design a little bit. What does it take to make a good puzzle? Not just, not advantage of code puzzles necessarily. We'll get to those in a second too, but just puzzles in general. The first trick is avoiding ambiguity. 
It is really easy to express something in a way that seems like it makes sense in your brain as the author of the puzzle but doesn't actually make sense at all. Where the pronouns in a sentence can reference several different words and they're all equally valid interpretations and it's all very confusing as to what it could be. This applies to documentation and stuff too. You need to be very careful to make sure that there's a single obvious interpretation or a single correct interpretation for any given sentence. Also for puzzles especially avoiding expectations of outside information, a lot of the people doing these puzzles have never taken a CS course. Or they might not have done, this is especially true for whiteboard problems, they might not have the same domain knowledge as you. So it's really dangerous to ask an interviewing whiteboard question that assumes that the person knows a bunch of things about your industry. You can't use nouns from your industry that are obvious to you and your coworkers but might not be obvious to the person that's applying for the job, something like that. Avoid requiring the users to make assumptions. So every time there's a place where you expect them to make assumptions or there's a deliberate point of ambiguity or a deliberate point where two different things can be true, it basically requires them to try every combination of all of those to see which one ended up being right which is typically not the intent when you're doing one of these. It's especially good to repeat and highlight important details. I try to highlight key words or key phrases that way just if somebody is glossing over the puzzle, hopefully they at least see those. And I've discovered that for every sentence there is a user that skipped only that sentence, the dog barked or they just like zoned out for a second or whatever. And all of a sudden if there's some critical piece of information that appears in just one sentence, they will have missed it and they'll need to either go back and find it or post asking for help or something like that. The best way to accommodate most of these things to handle these sorts of situations is to make sure that your brain isn't the only brain that processed the text of the puzzle. In my case, I just have a bunch of beta testers that go through the puzzles and if there's something that doesn't make sense to them or they feel it could be read multiple ways, we'll go back through and revise it a bunch of times until they think that it reads in a way that's consistent for them as well just to try to catch things that for me as the puzzle author are obvious but for them wouldn't necessarily be. So for an advent of code puzzle in particular, I have a bunch of extra constraints. There's always exactly one correct answer for any given input. There's a whole bunch of different inputs. Each one of them has exactly one answer. So there's never a case where you need to give the number of things but it could either be this or this. There's always exactly one situation. Any languages is valid which means that all I'm looking for is the answer, some number or string or value that the puzzle describes, not give me your code and I'll boot up a container and run it and spit the thing out or something like that. Any language you want, you just give me what the solution is. There's lots and lots of different inputs. Generating those is its own puzzle which I'll talk about in a moment. Every puzzle has two parts. 
Sometimes it's of a checkpoint puzzle format in which the checkpoint half, the first half just makes sure that you parse the puzzle correctly or that you understand some fundamental property but doesn't actually have the really interesting part in it until you get to part two where the puzzle puzzle is. Sometimes at the twist format where the first part is the puzzle that I originally intended to show off, the interesting part and then part two is some twist on that that either shows some other interesting thing or really, really cranks up the algorithm or asks you to run it a billion times or requires some optimization or scaling or something like that. But there's a lot of interesting things that you can do with two-part puzzles. They also are really useful because they help to simulate real world engineering environments. There is no environment in which, well typically people will find exceptions, but in general there are no environments where you write the code once and you are done and nobody ever touches it. Like that there's no maintenance and there's no user that has new things they want one day or something like that. That's very unusual. So giving beginners, giving learners the opportunity to say, I've written this code now what and the answer is we'll now change it to do this other thing, exposes them to things that are very realistic, that gives them skills that are very useful when they get into the real world. It's very important to have a lot of variety. There are a lot of times where people are very divided on whether or not they like a puzzle. There are a lot of puzzles that some people really love and some people really hate. And rather than trying to make a puzzle that everybody loves, which is probably impossible, just make a lot of different puzzles so that one day if you don't like the puzzle hopefully the next day you do and then everybody can get something out of it and try something new. Difficulty calibration is also really important. It sometimes helps to have three or four puzzles that sort of talk about the same concept, ask the same sort of thing, but do a simple version of it early in the month and then five or six days later do a slightly harder version or a different way to look at it and then a couple days after that do an even harder one and then finally get into the real concept that you wanted to apply originally without starting with that having everybody hit a brick wall. By the time they get to the harder puzzle they don't even realize that they've familiarized themselves with it already. It's just all of a sudden they see this concept and they're like, oh yeah, I know how to do whatever it is. It's important to consider weekdays versus weekends. People have more time Friday night and Saturday night than they do Tuesday night when they have work in the morning. Variety is really important for difficulty. Everybody's different. Everybody has a different skill set. Everybody's been exposed to different concepts to different degrees. So if you have a whole bunch of very mathy puzzles they'll all be hard to people that are not great at mathy puzzles, but if you have a lot of variety then you can help to modulate the difficulty a little bit by making it so that if somebody found a mathy puzzle really hard hopefully they're better at some other concept and the next day will be easier or vice versa. Progression throughout the month. Start easier, work your way to harder, typically, roughly, but lots of other factors come into play but something like that. 
Random off difficulty puzzles to control pacing. I like to have one or two hard puzzles earlier on if I can fit them in and it's reasonable to do. I like to have a couple easy puzzles late in the month so that people aren't just burned out by hard puzzles night after night after night just to shake things up and to give people something interesting. Also interpreted versus compiled languages is really interesting because one of my goals with puzzles typically is to make it so that using any particular language or paradigm doesn't give you a runtime advantage. So if you're using Python or you're using C the intent hopefully at least for the optimization puzzles is that you can't just switch from Python to C and suddenly the runtime becomes reasonable enough to finish in an amount of time you'd be willing to sit there. The intent is that unless you get to the intended solution or the intended category of algorithms the Python solution in the C program will always be slow so if the non-intended category of solutions is brute force for example picking some input size where the C program even still takes weeks or years or whatever it is but once you get into the big O log N fast solutions the Python solution is all of a sudden fast enough to do it in a couple seconds and the C program is also so regardless of whether it's interpreted or compiled make sure that people can still have access to the same difficulty levels at the same time based on run times and having to wait for things. So what does it take to actually make an advent of code puzzles? There's all of these inputs every user sees a different input like all of the users get distinct inputs like there's a finite number but there's a lot and they're pre-generated. So you take some inspiration and usually the inspiration comes from some interesting nugget of things I've found when I'm working on some problem or some work or some whatever. Search that thing see if it's actually interesting see if it's a solved problem see if it's an open problem see if it's something that has some other neat twist to it or something like that and then take all the things that I've learned and pull the most interesting bits out and just design a puzzle around it then build a program that spits out valid inputs for that puzzle that you've designed. Sometimes this process involves the input generator being very very smart and always spitting out really good inputs and then the part one and part two solvers which are just solutions just say oh okay I've solved that here's the answer which it then records. Sometimes the input generator is really really dumb and it just says I don't know I just generated some random numbers here you go lots of random numbers and the part one and part two solvers say okay based on these random numbers I'm going to solve the puzzle but I'm going to check a whole bunch of assertions and assumptions to make sure that this input in particular happened to have whatever properties make this puzzle interesting and sometimes it's a combination of both of those so depending on what kind of puzzle it is I have to spend different amounts of time on the input generator or on the solvers to make sure that the assertions are good that the attributes of the puzzle are good that every single input is fair and that they're all even that it's not like one input is significantly harder or that requires some trick that the other ones didn't or something like that. 
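Here is a sketch of the shape of that pipeline, with a deliberately dumb generator and a solver that doubles as the gatekeeper. None of this is the real Advent of Code tooling and every name is made up; the toy task (find two entries that sum to 2020 and multiply them) is only a stand-in for whatever property a given puzzle needs its inputs to have:

    #!/usr/bin/perl
    # Illustration of "dumb generator, strict solver": inputs are random noise,
    # and the solver dies unless the input happens to be fair and interesting.
    use strict;
    use warnings;

    sub generate_input {               # no intelligence at all
        return [ map { int rand 2000 } 1 .. 200 ];
    }

    sub solve_part_one {               # solver plus assertions about the input
        my ($numbers) = @_;
        my %seen;
        for my $n (@$numbers) {
            return $n * ( 2020 - $n ) if $seen{ 2020 - $n };
            $seen{$n} = 1;
        }
        die "unfair input: no pair sums to 2020\n";
    }

    my @published;
    while ( @published < 5 ) {         # in reality: many thousands of inputs
        my $input  = generate_input();
        my $answer = eval { solve_part_one($input) };
        next unless defined $answer;   # reroll anything that fails an assertion
        push @published, { input => $input, answer => $answer };
    }
    printf "kept %d inputs; first answer is %d\n",
        scalar @published, $published[0]{answer};

The interesting work is almost entirely in the die lines: each assertion encodes one property that makes the puzzle fair, and inputs that fail any of them are simply thrown away.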
Then use the generator and solvers to generate many inputs and get all their solutions and then finally write some pros that describes the puzzle in a cutesy way that involves elves or whatever the story is for the month on top of all of the puzzle that you've written so very often I won't even know what pros I'm going to be writing for the puzzle until after I've generated all of the inputs and all of the solutions I'll create a puzzle and I'll say okay what like what situation could this interesting thing be in oh maybe it's like one this year was like there was a kid sitting next to you that had a little handheld game console okay maybe there's something there's some bug with the console that I can work into this and that's the puzzle or something like that so figure out how that would fit in write the pros and then that's your puzzle and you might think that the process for this is to go step for step for step for step for step and do all the things in order and then you're done but it actually looks like this you write the thing and then you figure out what's not really going to work and you have to go back and rework it and sometimes that didn't work at all and you have to go back to the research phase and figure out why it didn't and then jump around and it's very you know you still get to the same answer but you there's a lot of steps so let's do some example puzzles here's one that asks how many square inches of fabric are within two or more claims so this is a list of rectangles in a grid basically it gives you the top left coordinate and then the width and the height and it says how many square inches of these things are in some kind of overlapping area of the rectangles and it ends up rendering an image like this where all of the lighter colored parts are overlapping and then the second part is okay what is the ID number of the only claim the only rectangle that doesn't overlap with any other rectangle in the input and I'll give you a second to look for it did you spot it it's right there nope nobody got it okay that's fine we'll move on here's another one in so one of the things that I've thought a lot about and worked on a lot of about is just like log parsing and so I wanted to include like a log parsing type puzzle just because I find that stuff to be interesting and I hope other people do too also it's probably something that people will encounter in engineering and so it's a useful thing to have been exposed to so this one is just a list of timestamps and guards that begin their shift their shift and then fall asleep or wake up while they're on their shift so it asks find the guard that has the most minutes asleep and then it asks of all of the guards which guard is most frequently asleep on the same minute and so people build all sorts of visualizations and diagrams of all of the things and data analytics and like load these things into real tools that do like real analysis and stuff which is awesome and other people just you know create funny gifts of guards that dance around on a clock and then every once in a while fall asleep for a bit and then wake up again here's another puzzle that gives a list of walls and then it says water is going to start pouring into this 2d environment from the top here how does it flow where does it fill up according to some rules and that one ends up filling up in a an animation like this where somebody rendered it so they actually shows the water like flowing into all of these containers as it goes through and this asks like how many 
water ends up being retained in these containers whenever things done here's one that claims it's not really but it pretends to be a regular expression that matches all of the paths in a maze which is a really funny time and the solution actually involves not doing that at all but treating it as a graph basically and so you get an input like the top and you end up generating a thing like the bottom and somebody built this animation of it actually going through and running the whole maze and finding all of the paths and this I love the stuff like this this is great I love all the animations there's so much fun here's one that asks what dots or given these dots what other positions are the closest to any given dot based on Manhattan distances taxi cab geometry here's one that is a three state cellular automata this one was a lot of fun because I knew I wanted some kind of an interesting three state cellular automata so everything can be white green or brown it was a I think clearing trees and logging camps or something like that and so the trees grow the green things the trees grow over time into the clearings and then the brown ones logging camps like growing behind them and I ended up just by fiddling with it for a while coming up with a set of rules that in almost all of the inputs always ended up in these really cool spider spiral patterns and so I basically wrote an input solver that so the generator just says here's some random noise and then the solver says make sure it ends up in a spiral pattern so that way no matter what anybody that took their input and actually visualize it would get this really cool effect when they were done here's another one that says here's a bunch of dots in 3D space a bunch of points in 3D space and they all have some position and some velocity where do they end up do they collide with each other if so where and it ends up looking like this so it's going to move and it's going to zoom in a little bit and now you can see that they all like smack right into each other in the middle here and somebody used I think this is GNU plot to figure out where they all collide and then for the rest of them that don't collide where they all go here's the a graph of the first hundred people to finish each puzzle I forget what set this is them this might only be a year or two but I don't know which one but you can see kind of the the typical amount of time that it takes to solve for a hundred people the fastest hundred people which by the way are often a bunch of competitive programmers so it's not really good estimate of difficulty but it is a good estimate of like how involved it is maybe typically 15 30 45 the mid the median there is what 30 ish minutes and then all of a sudden every once in a while puzzles take you know an hour two hours three hours just because they're more involved and people went to bed or people had to go to work or whatever and so people really a few people are doing it so it takes the remaining people longer by an average or something like that but typically I aim for puzzles that are smaller than that hopefully so let's talk about some growing pains I know let's talk about some learning experiences how do you prioritize tasks this is something that I had to figure out because my normal method of prioritizing tasks as follows one thing happens right whatever it is something happens to handle the thing three go to one great as long as you have one thing at a time everything is top priority no problem everything is fine how do you prioritize tasks 
if you have 730,000 users from all around the world of all different backgrounds well go something like this one thing happens to handle the thing great so far three while you were handling the things several other things happened but when you try to handle those things even more things happen and then you never sleep which is how I was for the first couple years of this it is not the best strategy so to resolve this actually stop to prioritize things actually say like okay is the site outage more important or less important than this email that just came in don't just handle the things in the order that you saw them get help from other people I have a bunch of beta testers I have people watching the site I have people helping me run the community I have an entire subreddit of people that helps itself we have things documented on the subreddit wiki we have I have automated a bunch of tasks on the back end and I have provided some self service tools for some sorts of things basically just streamlining things and like stopping to think and stopping to figure out what needs to be done let's talk about accessibility accessibility is really important for every website it is something that I take very seriously we have people even on the just the beta test or moderator team that have different various disabilities I have a disability so accessibility is very very important careful testing is a good way to get around that running it through the automatic checkers is a good way to get around that and taking accessibility feedback seriously is very very important if somebody emails you and they say like hey I'm having trouble reading this text like it's not that that person is is like wrong or bad or something it's that your website is broken and you need to fix it from my perspective accessibility issues are as bad as outages some some fraction of your user base can't use your application that's an outage so if you have part of your application that people are having trouble using because whatever reason right that's an outage and it should be handled with the same priority as any other outage would be taught let's talk about denial of service attacks I do not know why somebody would try to denial of service attack a programming puzzle website but we've had a couple in the past do not know why somebody would elect to use their botnet or whatever to do this but they try to fortunately aws provides tools like just banning whole ranges of IP addresses which largely solves the problem if it's dropping the traffic at the load balancer at like the virtual private cloud edge and it's not getting into all of our resources that we're like spending money on and stuff that typically works also aws is pretty good about helping people get around denial of service attacks but yeah you you get big enough and you'll see it eventually people just start throwing traffic at you because it's funny don't know so here's some frequently asked questions why do the puzzles unlock at midnight UTC minus five or why don't you make the puzzles unlock at a different time every day you have 25 puzzles you could make them shift by an hour every day the reason for this is that it's when I can be awake to watch the servers and monitor the event I can consistently be awake at midnight UTC minus five every single day and still have like a family and a job and sleep if it's moving to a different time every day I would die and so it probably most of the users it's untenable if I moved it to some other time I would be watching it you 
know when I would have been it would have been asleep or when I would have been spending time with my family or what I would have been spending time at my job I like every once in a while stuff happens like I talked about before we've we had some problems during the 2020 unlock we've had problems in the past too and so if I'm not around watching the servers and they fall over like that that like that's not great so they are around when I'm able to help like watch things and run the event and fix stuff when it breaks how long does it make take to make 25 puzzles I typically start in April and they need to be done by December so that's what eight months or something like that in in that span of time I need to plan the story design the calendar build all the puzzles test everything add features adjust the hardware as necessary scale things the puzzles themselves take right now about three or four months of all of my free time every night spread over five or six months which is still most of my evenings just you know researching things writing puzzles testing things all of the stuff involved in that that we just talked about it takes a while can I send you puzzle ideas please do not send me any puzzle ideas if your message looks like it's a puzzle idea I will ignore it and delete it or I will have somebody else scan it looking to see if it actually is a puzzle idea the goal here is I don't want to accidentally steal your idea by reading it or skimming it or something and then using it later on without realizing it not crediting you I don't need I don't want to worry about attribution I don't want to worry about copyright infringement I don't know where your puzzle idea came from if you got it from a game or some other copyrighted source like I can't confirm that so I don't take puzzle ideas from anybody I just take I only generate them from myself and the things that I find in my research which at least keeps me safe from like legal problems and things like that please do not send me puzzle ideas I have hundreds of ideas in a giant list why is the site rejecting my answer popular reasons are you're using the wrong input different users get different inputs so if you're using your friends input you're going to get the wrong answer probably or you have the wrong solution even if you're super super confident that you have the right solution if it's not like three minutes past midnight a ton of people have already solved the puzzle puzzle that you're working on it's extremely likely that you just have the wrong answer somewhere you misunderstood something or you have a bug or something like that another very popular reason is you didn't copy paste the whole answer that's surprisingly common people will copy like all but the last character of the thing in their terminal and paste it in and it'll just say it's wrong and they don't figure it out or something so make sure that you're careful about selecting everything typically those reasons though what else does the site lock after getting a wrong answer or why do I have to wait an hour to submit my next guess so this comes up occasionally the site locks out for a minute after getting a wrong answer and the reason for that is largely to prevent people from writing scripts that just spam answers into the site until they get it right if you get wrong answers enough times in a row it will start to lock you out for longer and longer periods which is like that to prevent people from writing scripts that spam at this site with answers very slowly if you 
see somebody claiming that they have to wait an hour to submit their next guess it means that they submitted a wrong answer a lot of times the site doesn't scale up to like hour length waits for a lot of I forget exactly how many it is but that's not something that any normal user ever ever encounters but yeah it's something that that comes up occasionally that people ask about how do some people solve the puzzle so quickly or are the people on the leaderboard cheating no they're not cheating they're just very fast as we discussed in a couple slides back the people that are solving the puzzles right at midnight are competitive programmers that cut every corner and take every shortcut and use every tool at their disposal they have a bunch of algorithms already memorized they have a bunch of tools already written they eat drink sleep breathe puzzles for breakfast lunch dinner this is this is how they think and what they do so they're not like opening up a editor right at midnight and like deciding how they're going to approach it and like skimming through the text and reading the they don't do any of that the approach typically is scroll to the bottom of the page click on the gender the download input button look at the input because the input is the only thing that really matters based on the input does it seem like it's asking an obvious thing then go back to the puzzle and look at the last line which is usually a problem statement if that's enough to guess at probably what the puzzle is asking go start implementing that and while you're implementing it skim through the rest of the pros to get any clarification you need this is very very fast they're doing a lot of guessing they are familiar with the kinds of things that programming puzzles can ask and so they're working in a limited a limited state space of like the kinds of things that the puzzle might be talking about and they just do it all the time there are a lot of videos and recordings of very high ranking leaderboard people solving the puzzles right at midnight on like YouTube and Twitch and places like that if you're interested in watching them and how they do it it does not look like regular software engineering at a job it's a completely different thing entirely but no they are definitely not cheating they're just very very fast let's do some stories so here's one that I really like this is somebody that just posted on Reddit they said as the non participating mother of a 14 year old who got up at 4.50 a.m. 
I'm both massively impressed that people do this and somewhat baffled that nothing else has this effect I've been reading here on the subreddit where we talk about the puzzles so I can make sense of the conversation later in the day so this is a mother who's 14 year old has been getting up early early in the morning to do programming puzzles and the mother is like trying to make sense of what their kid is doing it's just I love that I've gotten a couple stories like this is so funny to me or people that just say advent of code help me in getting my current job or helped me with an interview I did or help me you know whatever gave me the confidence that I needed to apply for this position or something like that there's a lot of people like that which is excellent I'm really glad that people are able to apply it in that way or people saying things like my son and daughter are becoming more interested in coding because of your site like getting to people that didn't necessarily know about programming or know how to get into it but seeing these small small approachable puzzles and not like a big application but just like build a thing that just you know goes through this list and find some things and having that be the starting point for saying oh I can make computers do things what else can I make computers do well let's try some more of these puzzles and just like getting more and more into it I love that there's also situations where people use advent of code as a midterm exam in university and as a final exam which I which is hilarious in conclusion engineers in your organization should solve puzzles for fun actually scratch that everyone should solve puzzles for fun thank you very much.
|
Advent of Code - built entirely with Perl! - is an Advent calendar of small programming puzzles for a variety of skill sets and skill levels that can be solved in any programming language you like. People use them as a speed contest, interview prep, company training, university coursework, practice problems, or to challenge each other. In this talk, the creator of Advent of Code will give a behind-the-scenes look at what it takes to run a month-long programming event for over 500,000 people.
|
10.5446/53252 (DOI)
|
Hi, my name is Curtis Poe and I want to talk to you today about bringing modern object-oriented programming into the Perl core. First of all, I want to thank all of you for joining us at FOSDEM 2021. It's been a fascinating year that we've had to go through, with the pandemic and everything. Attending a virtual conference, with me presenting remotely so that I can't actually interact with my audience directly, is an interesting challenge, but I hope you'll bear with me as I try to deal with how things are going. You'll notice that when I talk about bringing OO to the Perl "Cor", I've spelled this C-O-R. That's because of something that I'll explain in just a little bit. I want you to know that's not actually a typo. As for myself, I'm Curtis Poe, but I'm known online as Ovid to many folks. I'm with AllAroundTheWorld.fr. We're a consulting firm and we specialize in building complex software for people and building the bespoke teams that fit what the company actually needs, as opposed to "we have three people on hand who might or might not be able to do what you want and we'll just toss them at you." You can follow me on Twitter or you can email me at allaroundtheworld.fr. We also build a game that is mostly text, so for people who like reading rather than just beautiful graphics, or if you're blind or have limited mobility, you'll discover that it's very easy to play and follow along in a way that few other games are, because we think that's important. The morality of that is important: being accessible to everyone. But enough of that. I want to talk about Cor. When I first came up with the idea of bringing modern object-oriented programming to the Perl core, this wasn't initially me. Stephen Little had been doing a lot of work on this before that. He's the author of Moose, for example, but he wanted to do something even better and have truly modern object-oriented programming available. But he'd been working on it for a long time by himself and I think he kind of got burnt out, to be honest. So I picked up the baton and I was trying to figure out what's the smallest thing that I could put together that the Perl 5 Porters, P5P, might accept. Then Sawyer, the pumpking for Perl 5 Porters, came to me and said, don't think about that. Don't worry about the implementation. Build excellence. And then later, when I found out about the Perl 7 project, I realized I was thinking too small. I was thinking within the confines of what Perl currently does and I wasn't thinking about what it could do. So Cor now is my attempt to keep something that is like Perl, but a little bit more. Terminology. Be aware that when I talk about slots, I'm talking about basically the places where the data for an object is stored. Currently in Perl, we tend to use something called the blessed hash reference, so the key-value pairs would be the slots where we store the data. That's what I'm talking about there. Attributes is a heavily overloaded term in many programming languages, including Perl. In this case, attributes are simply slot modifiers such as reader or writer. They give some additional behavior to the slot, which we'll understand more later. The term Cor itself is basically short for Corinna. The poet Ovid wrote a lot of love letters, love poems, sorry, to a woman named Corinna. So I just shortened that to Cor. But then that hits the problem that we talk about bringing Cor to the core and people are like, I don't understand. So if you want to, just call Cor Corinna; don't worry about the name, because the name is going away.
It just gave me a handle that I can refer to the project. Bye. Ovid was Italian, so let's talk about something Italian. Pizza. Pizza is wonderful. I live close to the Italian border. Pizza in Italy is amazing. And you want to open up a pizza joint in a town, but they already have a pizza joint. How are you going to beat them? You might think about offering a better pizza. You might think about offering a cheaper pizza, but it turns out the cheaper pizza is a disaster because people associate cheap with poor quality. Economists have known this for a long time. Business experts have known this for a long time. When I sold cars, one of the things they hammered into our heads over and over again is the customers are happy, so the cars are those who have paid the most money because they're the ones who have been sold the value. And it was true. My happiest customers paid the most for their cars. So you don't want to look at cheap. You don't want to look at price for the pizza you're offering. You're looking at value because value is what attracts people. And when we talk about pearl, when we talk about the between price and value, what's the price of pearl? It's got a shallow learning curve for the basics. It's got a steep learning curve when you want to get advanced. So that's something to consider. But no one actually cares. Rust as a steep learning curve. People love rust. C++ as a steep learning curve. I don't think anyone loves C++, but they can get a job with that very easily. There's lots of languages where you have a very steep price for learning language. What people appreciate, why? Because they get great value. And what's the value of pearl? It's not available jobs anymore. Back when I learned pearl, I could have quit my job every six months and gotten to pay your eyes. Well, I kind of did because there were jobs everywhere. In the past 15, 20 years, those jobs have dropped tremendously. There's still a lot of them, but not nearly as many as there were. Data science and AI are kind of taking over the software world. Pearl doesn't have much to offer there, which is kind of an embarrassment. So that's not the value we're looking for. The CPAN testers, Unicode support, regular expressions, those are great, but those are marginal value for many people. We're not worried about that. Everybody hits OO either because they want to develop OO software or they have to maintain OO software. OO is an almost universal value for a given programming language. So if you have powerful OO, you offer something which is of tremendous value. So let's not forget about that. So as a new developer, how do I write object-oriented code in pearl? Well, that's simple. They ask this question and you tell them, use the CPAN client. What? Okay, use the CPAN client. What does that mean? Now I have to learn how to download this extension. That's actually not bad. We're getting more and more used to downloading extensions, even for something that probably should be core to the language, but okay, we'll put that slide. Then they got to pick an OO system. What do you mean I've got to pick an OO system? Just pick one of them. And no one has any idea what that they're going to pick if they're new to the language. They can't because they won't have the background. And then finally they pick one and you say, no, not that one. Okay, fine. And then the build fails after 15 minutes because they picked a particular one and they get this obscure error message. Oh, just force the install. Don't worry about it. 
That's an embarrassment. That is horrible embarrassment for the language. People shouldn't have to go through that. If they want to install a different object-oriented system to replace the core one, that's absolutely fine. But they're not going to be expert enough in the language in order to be able to make that decision to understand what that means, but not when they're new to the language. Instead, they want to learn how to do OO and Perl. And now they have to learn how to pick which OO system they want to install. Then they have to learn how to install a CPAN client and how to use that to install that OO system and what happens if they get test failures and there's too many OO systems to pick from. And no, this is bad. And I am tired as a consultant going into clients and, oh, you've written another in-house OO system because what you find in the market is not suitable for you because we don't have something in the core. Or they built something long before Moose and now they have settled on that over the years and I can understand why, but it's frustrating. And bless is not the value that you were looking for. There's nothing wrong with bless. If you want to use bless, that's fine. Not for a brand new developer because you've got to wire everything together, which means you're going to be creating your own unique set of buttons. We don't need that. OO. So Core OO is assembly language for object-oriented programming. We don't need that. Moose and the family of languages changed everything. It was great. It gave people an appreciation for better object-oriented programming. But it's not in the core, so it raises a question of what should be. Good enough is not good enough. Most object-oriented systems are incomplete. They're sugar forgetters and setters and they often offer you a default constructor. We need to stop treating objects like structs, but that's what we do. It's like, oh, look, it's just a data structure with methods you can call on it. That's not what an object is. An object is an expert about the system that it works on, which might be a data structure with methods you can attach to it, but generally shouldn't be. But I'm not going to go into object-oriented design in this talk because we're going to get distracted. Instead, instead of thinking about something that's just good enough, that don't kind of work, we need to be better than. That's very important. We need to be better than what we had. We need to be better than other languages. We need to reach for that gold medal rather than settling for the bare basis or settling for this mess. I am so tired of this. This happens to me all the time. I go to clients and I'm trying to debug performance problems and I'm running my NYT proc and boom, everything coming out of Moose, it slows everything down when it doesn't have to be because it's well, Moose is the third party. It's not core to the language. Good enough is not good enough. We need to stop that. Core wants to add modern OO to the Pearl Core. Making OO easy. But we have the opportunity to build something beautiful, wonderful, more powerful than most languages, but still feel like Pearl if we do our thing. So I've got a grammar for it. I've got a here's another grammar for just the methods. The grammar is mostly done. The initial semantics are mostly done. It's complicated. There's a lot of little corner cases that we have to deal with. We've got to figure out exactly what we're going to be putting in the MVP to pitch that to P5P to see if they'll accept it. 
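Before moving on, it is worth making the "assembly language for OO" point concrete. The class below is ordinary core Perl with bless, not anything from this proposal, and the names are invented; it shows the plumbing (constructor, argument checking, accessors) that every newcomer currently has to wire up by hand:

    package Customer;
    # Plain "bless" OO: all of the plumbing is hand-written.
    use strict;
    use warnings;
    use Carp 'croak';

    sub new {
        my ( $class, %args ) = @_;
        croak "name is required" unless defined $args{name};
        my $self = {
            name  => $args{name},
            title => $args{title},    # optional
        };
        return bless $self, $class;   # bless the hash reference into the class
    }

    # Hand-rolled read-only accessors.
    sub name  { $_[0]->{name} }
    sub title { $_[0]->{title} }

    sub full_name {
        my $self = shift;
        return defined $self->title
            ? $self->title . ' ' . $self->name
            : $self->name;
    }

    1;

Every team writes this boilerplate slightly differently, which is where the in-house object systems and their unique sets of bugs come from.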
But I want to talk about the design considerations, so you know where I'm coming from, because I'm the driver behind this. First and foremost: easy things should be easy and hard things should be possible. That's Perl in a nutshell. That has been Perl for decades, so keep that in mind. I want small systems to be easy to build — it should be trivial to prototype stuff — but large systems should have greater safety. I work with many companies who have half a million to a million-plus lines of Perl code, and in large systems, if you don't offer some sort of safety about the kinds of data that you're working with, they're difficult to work with, they're frustrating to work with, and I have to work with these clients to manage that. So I worked really hard to offer them something built in to handle it. We tremendously need to decouple data, accessors and constructors. You'll see a little bit more about that as we go along. We need to encourage smaller interfaces, but there we're talking a little bit more about fundamental object-oriented design, which I also won't be going into in this talk. And we need to cater to new developers. You might ask what I mean by new developers — do I mean new to Perl or new to programming? I mean both. We want it to be simple. I mean, look at this example from Moo/se: has name, is => 'rwp'. What's rwp? That's read-write protected. Okay, fine — and what is read-write protected? Compare that with has name spelled out with an explicit reader. Once you understand it, it's actually pretty easy to follow; it's not complicated, it's not difficult. We want this to be accessible to people who come along. So I've avoided things like :rwp, because I want it to be easy to read. I want it to be accessible to newbies. Paul Graham created Arc to cater to experts, and Arc is dead. Go out to the boards for Arc development and learning Arc — no one's posting on them anymore. No one cares, because he made a language that was not accessible to the mass of programmers. So, of course, this is very opinionated — fair warning, this is mostly my opinions — but it is very Perlish. It still feels like Perl, even though we had to extend the language a little bit for this. It's very practical, because it's very easy to write with Corinna, and you'll see more examples of that later. If you want to contribute to this, you can go to irc.perl.org, or github.com — sorry, the video is covering up that URL. There's a lot of work going on; a lot of people are being contacted and are discussing this. But let's get closer into this. I want you to know that all the Corinna syntax you're about to see is speculative. It's mostly nailed down — what you're seeing is probably close to the final version — but there are some little bits we're still arguing about here and there, because we're not sure. The data types, though, are for entertainment purposes only. Do not pay attention to them. I put them in so you can see what a potential future looks like. There are problems, because data types aren't just for Corinna. They're also for regular Perl — not object-oriented code at all, just being able to type a variable — and for how data types would be used in signatures, which Dave Mitchell has been working on. Types are going to be used in so many areas that we need to unify how they work across all those different areas. I pulled types out of the original Corinna specification because it wasn't appropriate for them to be there. But let's talk about the evolution of what we've got right now.
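As an aside, here is a minimal sketch of the kind of declaration being criticized above. The transcript says Moose, but 'rwp' is the Moo spelling of this shortcut, so this sketch uses Moo; the class and attribute names are my own illustration, not the slide from the talk.

    package Customer;
    use Moo;

    # 'rwp' = "read-write protected": generates a public reader ->name
    # and a private writer ->_set_name. Terse, but opaque to newcomers.
    has name => (is => 'rwp');

    # Roughly the same thing spelled out long-hand, which is easier to read:
    # has name => (is => 'ro', writer => '_set_name');

    1;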
Specifically, I don't want to talk about bless or references. Most of the old-timers are familiar with that, and newer developers are less familiar with it, so we'll go ahead and start with this. Moose was first released back in 2006, and you can see right here that we had to import strict and warnings. isa => 'Int'? Well, that's code which is being invoked right there. But that's mostly what we understand as Moose today; it's very similar. Here's a Point object in the original version of Moose, and that's going to be important, because I'm going to be using it for later examples so you can see the power of what we're talking about here. Here's Moxie. Stevan Little was the creator of Moose, and he realized there were a lot of limitations in Moose, so he was trying to create something better and simpler that could go into the core of the language. This is part of what he created with Moxie. You can see that we're declaring the slots here; we have default values for those slots; we're declaring the accessors for the slots — I won't go into details about the syntax — and here's how you clear those values. This was ugly, because it made the assumption of a blessed hash reference, which I think was unfortunate in this case. Corinna does this. The reason I think this is better is because it's so much simpler. It's easy. We don't have to worry about how it's implemented under the hood, and we're not introducing a lot of complex syntax. We're just declaring our slot variables; we say that we can pass them to the constructor; we have readers for both of them; and we can clear them. It's that simple. There's nothing complex about that. So here's Moxie on the left, and here's Corinna on the right. Exact same thing, just different, cleaner syntax. So Moose was great, but it had some really bad affordances, and a lot of that wasn't Stevan Little's fault. It's limitations in the Perl programming language itself, which he had to work with. So Moose winds up being clumsy for many of the things that we need. Raku, however, has some great ideas. I've stolen a lot from Raku, but I needed this to fit with how Perl works, because they are different languages. For those of you who are not familiar, Raku used to be called Perl 6, and a lot of people thought it was going to be the successor to Perl, but it's not. That's what I'm referring to when I talk about stealing some ideas from Raku. So when we talk about Moose, when we declare a slot, we use the has function. And has handles slot declaration, accessors, types, coercions and delegations; there are predicates, blah, blah, blah, blah, blah. If you want to talk about a hideously overloaded function, this is it. This is exactly the problem we're talking about: it's trying to do too many things, and it gets confusing at times. What does has do in Corinna? It's a slot declaration. That's it. Nothing else. Very simple, very predictable. So let's look at what this actually means. We'll take a simple object, a customer object. It has slots: name, birth date, an optional title — Doctor, whatever. And it's got custom methods like full name, or predicate methods like old enough to vote, old enough to drive, et cetera. It looks like this in Moose, and you can see here the slots — pretty straightforward: title, name, birth date — and here are the various methods that we have. The difference is that slots are just dumb pieces of data: you call them, you get your data back. The methods actually do stuff. To keep that simple, note that the slots are all immutable.
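To make the comparison concrete, here is a hedged Moose rendition of that customer object — my own reconstruction, not the speaker's slide. The slot names follow the talk (title, name, birth_date); the age arithmetic in the predicate is a deliberately simplified illustration.

    package Customer;
    use Moose;
    use DateTime;

    # Slots: dumb data, all read-only.
    has title      => (is => 'ro', isa => 'Str',      predicate => 'has_title');
    has name       => (is => 'ro', isa => 'Str',      required  => 1);
    has birth_date => (is => 'ro', isa => 'DateTime', required  => 1);

    # Methods: these actually do something with the data.
    sub full_name {
        my $self = shift;
        return $self->has_title
            ? $self->title . ' ' . $self->name
            : $self->name;
    }

    sub old_enough_to_vote {
        my $self = shift;
        # Simplified age check for illustration only.
        my $age = DateTime->now->subtract_datetime( $self->birth_date );
        return $age->in_units('years') >= 18;
    }

    __PACKAGE__->meta->make_immutable;
    1;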
So 'ro' means read-only. What does that actually mean, though? Why is that important? Well, think about this. We have our predicate method, old enough to vote. If old enough to vote, blah, blah, blah... and then we set the birth date to yesterday. Except somebody born yesterday is obviously not old enough to vote. Which means I can have a guard verifying that it's okay to follow this path of code, and then set my object to an invalid state, and I can no longer guarantee the correctness of my program. So I throw an exception if someone tries to modify that. That's one of the reasons we like read-only objects. But because the DateTime object in that slot is not read-only, I can grab my birth date and then set the year to last year, and now I've violated my old-enough-to-vote constraint. So it's still a problem. In fact, this is not a theoretical problem; it's a real-world problem. Ricardo Signes wrote a great blog article about how this was hurting them on their project, and as a result of that we got things like DateTime::Moonpig and DateTimeX::Immutable. With some of my clients, when we're using the DBIx::Class ORM, I make sure that when we pull dates out of the database and they get inflated into DateTime objects, they're inflated into immutable DateTime objects, so they don't have to worry about that — because immutability is important. If I'm passing an object to another function, the object is a reference to data, and if that other function modifies the data, I don't necessarily know. Now I've got two different sections of code with two different sets of data that I think are the same, and I can't guarantee correctness. Immutability is what you want. So for Corinna, immutability should be the default. But that means default — it's not a guarantee. Sometimes you want mutability, and it's trivial to make a slot mutable just by adding the writer attribute to it. So that's fine. We're not dogmatic, but we do push you in the right direction. But now we've got a business rule: never refer to customers by their name alone. Never do that. Refer to them by title plus name if they have a title, because Dr. Smith might not want to be referred to as Smith, or Brian, or whatever. So what we do in our code is we've got this full name method that you can see up there. The full name method takes the title if we have one, or an empty string otherwise, and concatenates the title with the name. We refer to them by their full name. But, oh — someone's going to call the name method. That might violate our new business rule, and we don't want that to happen. So what do we do? We actually want the name method to be private. We don't want people to be able to call that method. But we don't want to pass underscore-name to new, either. So what do we do? We store the slot as _name and give it an init_arg of 'name'. It doesn't have a public method, but the init_arg says that when you pass it to the constructor, you can use name instead of _name. And if you call the customer's name method, you get a method-not-found exception. Except someone's going to call _name sooner or later. Yes, they will. I see this in code all the time; it's very frustrating. You don't actually want to expose that data. So what do we have? We can say name is 'bare'. Bare means we don't have a getter or a setter for this particular attribute. There's nothing you can do with it aside from passing it to the constructor. If we need to get the value out of it, though, how do we do that?
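Here is roughly what that Moose workaround looks like — again a sketch of the pattern being described, not the actual slide. The slot is stored as _name, accepts 'name' in the constructor, exposes no accessor at all, and the only way back to the value is through the MOP, which answers the question above.

    package Customer;
    use Moose;

    # Stored privately, passed publicly, and no getter or setter generated.
    has _name => (
        is       => 'bare',
        isa      => 'Str',
        init_arg => 'name',   # Customer->new(name => 'Smith') still works
        required => 1,
    );

    sub full_name {
        my $self = shift;
        # The MOP escape hatch for reading a 'bare' attribute:
        my $name = $self->meta->get_attribute('_name')->get_value($self);
        return $name;    # (title handling omitted for brevity)
    }

    __PACKAGE__->meta->make_immutable;
    1;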
Because the full name method needs to know what the name was set to. my $name = $self->meta->get_attribute('_name')->get_value($self)? No one does that. No one does that. They don't bother; they just make name public or something like that, and we hope that people don't abuse it. So init_arg and bare are clever solutions for this problem, but they're clumsy solutions. We want to avoid them. And the has function in Moose and Moo conflates so many different responsibilities. We want to eliminate that, and that's part of what Corinna is trying to do. So by default, when you're building an object system, don't expose any of your data. Correct: don't expose any of your data. Just don't do that. If you have a case where you have to expose it, do it on a case-by-case basis, rather than just splatting everything out there. What's painful to do in Moose is easy in Corinna. In Corinna, we believe that private data should be easy, and we don't encourage misusing data. That's a very important design principle behind this. So here's how we do this in Corinna. Notice our class Customer: we have a name, title, birth date. None of these has a reader. You can't read those from outside the class, because there's no need to do that. But we do have a full name method, and we just return the title plus the name if there's a title, otherwise just the name. We have predicate methods, because we don't need to expose the data. Please keep that in mind; it's important. Attributes. I hate the term, because it's so terribly overloaded. Attributes is often the name that people use for the slot data, but sometimes that data can be very expensive to calculate. What if it only needs to be calculated once? For example, the 3,000th Fibonacci number only needs to be calculated once, and then that's fine. But we have other things — take the example of volume. We have a box object with height, width and depth. Volume here is height times width times depth. That's not actually expensive to calculate, but for the sake of argument we're going to say that it is, just to show you how Moose would handle something like this. If you write volume as a plain method, it'll be recalculated every time. But if it's a lazy Moose attribute, it's calculated once and only once. So how does that actually work? Well, we do this: has volume. First, we make it read-only, because you don't want someone to be able to set the volume to a different value. We assert that it's a number. init_arg is undef, because you don't want someone to pass the volume into this box object — that would be silly. Lazy, because we don't want it to be calculated if we never call the volume method — that would be expensive and a waste of CPU. And then we have a builder, _build_volume, which is the height times the width times the depth — which you can't see because my video is cutting it off. That's a lot of junk. People don't do that. They just declare a volume method and recalculate it every time, because you don't want to jump through all those hoops and try to remember all that stuff — the more stuff you have to remember, the more likely you are to have bugs. Why would you want to do that? Corinna. How do you do this in Corinna? That's it. A reader so you can read the volume; and because it doesn't have a :new attribute, you can't pass it to the constructor.
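For contrast, here is the Moose incantation being described for the lazy, build-once volume — a sketch under the same assumptions (a Box class with numeric height, width and depth). The speculative Corinna-style equivalent is shown only as a comment, since that syntax was not final.

    package Box;
    use Moose;

    has [qw(height width depth)] => (is => 'ro', isa => 'Num', required => 1);

    has volume => (
        is       => 'ro',             # nobody gets to set the volume
        isa      => 'Num',
        init_arg => undef,            # nobody can pass it to the constructor, either
        lazy     => 1,                # only computed if someone asks for it
        builder  => '_build_volume',
    );

    sub _build_volume {
        my $self = shift;
        return $self->height * $self->width * $self->depth;
    }

    __PACKAGE__->meta->make_immutable;

    # Speculative Corinna-style equivalent as described in the talk
    # (illustrative only; not final syntax):
    #
    #   has $volume :reader :builder;
    #   method _build_volume () { $height * $width * $depth }

    1;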
We have a builder which multiplies the height times the width times the depth, once and only once. That's it. It is that simple to create this in Corinna. So here's Moose, here's Corinna. Which do you want to write? Yeah, I thought so. That's the great thing, I suppose, about having these prerecorded videos: you can't argue with me. But let's talk about object construction. That's another thing that's sometimes kind of complicated. Moose has all sorts of little corner cases, and we're going to deal with this right now. There's also a lot of stuff we're going to skip, because object construction can be complicated sometimes. So here we have height, width and depth in our box object. We have :new, saying we have to pass those to the constructor. Great. Here we have a name with a default of undef, which means we can pass it to the constructor, but because it has a default, it's not required. Here we have a volume which has a builder and a reader: the reader means you can read the volume, the builder means we have a way of building it, and because it doesn't have :new, we cannot pass it to the constructor. That's simple. All of our construction rules about what we can and cannot pass depend only on the existence or non-existence of the :new attribute and whether or not it has a default value. So here, this is Moose: we have height, width and depth. Height, width and depth. Okay, then we say box volume — fine, 21. But what if we want to do this? What if we want to say Box->new(7)? What would that mean? That would be a cube where every side is 7 units, which means the volume is 343 cubic units. How would we set up the constructor to allow a single value like that? Well, in Moose you've got BUILDARGS: around BUILDARGS => sub, with the original method and the class, then if 1 == @args, my $num = $args[0], and then I've got height, width, depth and $class->$orig. That's strange. Okay, but once you learn the incantation, it works. Okay, fine. And then you can call Box->new with a single argument. Now, we know that in Moose you can pass a list of key-value pairs, or you can pass a single hash reference, which is going to cause you all sorts of problems, because you were expecting a single argument and here you've got one — but this argument's not an integer. I'll skip through most of that. Basically, what it means is that in BUILDARGS we have: if 1 == @args and not ref $args[0]. And that's actually not enough, but I'm not going to go into detail, because it starts to get complicated, ugly and embarrassing. For my clients I just tend to do this: if I have to create a BUILDARGS, my %args = @_ == 1 ? %{ $_[0] } : @_ — it's just this voodoo line noise that is an embarrassment, that I never, ever want to have to write again, but I write it all the time, and it mostly works.
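Here is a sketch of that BUILDARGS incantation, reconstructed from the description above; the attribute layout is assumed from the earlier Box example, not taken from the slide.

    package Box;
    use Moose;

    has [qw(height width depth)] => (is => 'ro', isa => 'Num', required => 1);

    # Allow both Box->new(height => 1, width => 2, depth => 3)
    # and the single-argument cube form, Box->new(7).
    around BUILDARGS => sub {
        my ( $orig, $class, @args ) = @_;

        if ( @args == 1 && !ref $args[0] ) {
            my $num = $args[0];
            return $class->$orig( height => $num, width => $num, depth => $num );
        }

        # Otherwise fall through to Moose's usual handling
        # (a list of key/value pairs or a single hash reference).
        return $class->$orig(@args);
    };

    __PACKAGE__->meta->make_immutable;
    1;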
Here's how they do it in Java. They have constructor overloading, where you can have different constructors: this one takes three arguments, this one takes one. It's that simple. You call Box new with three arguments or with one argument, and it all just magically works. Part of that is because new is a keyword in Java, not just a method. So with Moose you've got this embarrassing pile of code, versus something that's actually kind of clean and easier to read in Java. Why is Java easier to read than Perl? That should not be the case. Why is Java less verbose than Perl? That should not be the case. The downside of the Java version, though, is this: which of those three arguments is height, width or depth? You don't know. In Corinna, we simply have a constructor method. It takes the arguments; if there's only a single argument, we map it to height, width and depth, and we return the args. There's nothing fancy, there's nothing complicated. We don't pass in a hash reference or an optional list of key-value pairs or whatever. It's simple, it's easy. So here's Moose, with that weird BUILDARGS voodoo, and here's Corinna — much simpler, much easier to follow. Or you can just provide an alternate constructor if you want to skip all of that: one that takes a single argument and then calls new. That makes life much, much easier for you. So Corinna makes object construction easy, something which other systems often do not. But let's look at classic OO. We're going to start doing some comparisons here. Here's classic OO for a customer object, and here's what it looks like in Moose. You can see Moose is a lot simpler: Moose is declarative, as opposed to the procedural code that we have in core Perl. Then we have Corinna versus Moose — here's how simple and easy Corinna is compared to Moose, because Corinna extends the syntax of the language just a little bit. And then look at core Perl versus Corinna. There's no question which I would rather write, because with core Perl it's very easy to get it wrong — very easy to get it wrong — and with Corinna it's much harder to get it wrong, and it's much easier to read this than that. So, the object systems we have for Perl currently: many of them are just public methods for data, and they expose way too much. Moose and Moo practically require public methods — it's hard to avoid them — and everybody wants public methods. We make them easy in Corinna: you just slap on a reader or a writer attribute. But we don't default to that. Attributes for data should just be easy, and we make them easy in Corinna. But let's look at a bit of real-world code, something a little more sophisticated than a dummy customer object. In this case, it's a least-recently-used cache, Cache::LRU, where basically anything which hasn't been used recently gets pushed out of the cache, and we have a certain maximum number of objects. It's kind of hard to read the core Perl OO code for that. We can rewrite it in Moose and it becomes a little bit easier. We can rewrite it in Corinna and it becomes dead simple. We have our cache, which is a Hash::Ordered object; we have created, which is the time it was created; and the max size. We can instantly see all the data that we need for this. And then get is handled by — delegated to — the cache, while set is our own method, which handles evicting something from the cache when needed. It's very simple and easy. It makes it easier to see the code that we're working with, it makes it easier to read the code that we're working with, and it's much easier to write correct code, because we've simplified object construction tremendously. Here we see we have non-lazy defaults: created has a reader and is set to the current time when we instantiate this particular object. We have the cache, which is a Hash::Ordered; we just read and write from that cache variable directly — there's no reason to go through a method call for that. We have the max size; it has :new, which means that even though it has a default of 20, we can set it to a different value if we want to, and it has a reader so we can read what the max size is if we've got a cache object. And there we just access it directly. It's very simple. It's very easy.
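As a point of comparison, here is a rough Moose approximation of that cache — my own reconstruction, not the slide. It assumes Hash::Ordered for the ordered storage and uses a simplified evict-the-oldest-insertion policy; the slot name _cache and the exact eviction logic are illustrative assumptions.

    package Cache::LRU;
    use Moose;
    use Hash::Ordered;

    has _cache => (
        is       => 'ro',
        isa      => 'Hash::Ordered',
        init_arg => undef,
        default  => sub { Hash::Ordered->new },
        handles  => { get => 'get' },    # reads are delegated directly
    );

    has created  => (is => 'ro', isa => 'Int', init_arg => undef, default => sub { time });
    has max_size => (is => 'ro', isa => 'Int', default => 20);

    sub set {
        my ( $self, $key, $value ) = @_;
        my $cache = $self->_cache;

        $cache->delete($key) if $cache->exists($key);

        # Evict the oldest entry once we're full (simplified: insertion order).
        my @keys = $cache->keys;
        $cache->shift if @keys >= $self->max_size;

        $cache->set( $key, $value );
        return $value;
    }

    __PACKAGE__->meta->make_immutable;
    1;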
And here's how I wrote it in Raku, for those who are curious — you can see this is conceptually very, very similar to Raku. I borrowed a lot of ideas from Raku, but I wanted it to be easier. Except mine doesn't actually work, because the hash I reached for was sorted rather than ordered — a subtle difference which caused a problem there. But let's think about the has function, which is the key thing most people are going to focus on with Corinna. Like my, it declares a variable, which is bound to the lexical scope it's declared in, and we have attributes we can apply to it. We've got a :new attribute saying you can pass it to the constructor. Here the :new attribute has a default, so you can pass it to the constructor, but now it's optional. We've got a builder to build the value if it's not present. We've got readers, to make it easy to read that data if you need to. A writer, if you want to make it mutable. It's all there; it's all available for you. There are quite a number of attributes, in fact, to give you a lot of control over how you're going to manage this behavior, and I think that's extremely important. There's one illegal combination currently, which is a builder together with an = default, because the = default is essentially a synonym for a builder — it's just for when you have a simple scalar value you want to assign, as opposed to something you have to compute. Also, you can't duplicate any attribute: you don't want :new listed twice, or :builder listed twice, because that wouldn't make any sense. The goal behind this is to make attributes as composable as possible, and to make it difficult for you as a developer to write something that's invalid. I think that's extremely important. If it were not for the = default, every single attribute attached to a slot would be composable, so you wouldn't have to worry about: does this work with this? Does this work with that? What does that mean? It's all simple and easy. Another guiding principle is that Corinna should support types. I couldn't get that into the first version. As much as I wanted to, it simply wasn't going to work, and that's unfortunate — I'm sorry — but you need to be aware that we couldn't do that yet. There's a lot of debate about how types would actually be structured, how we would declare them in the language — do you declare the type first? Do you declare it afterwards? What's the exact syntax? So we had to punt on that, because there are so many other areas of the language this impacts. So, Paul Evans wrote Object::Pad. Object::Pad is great. It's been a great test bed for a lot of the things that we're doing, and I really appreciate the work he's put into it. Even though he hasn't done much optimization yet and object construction is a little bit slower, the runtime using the objects is actually faster than with core Perl OO or with Moose, which I think is absolutely fantastic. I'm just delighted by that. This will probably first be released under a feature guard — say, use feature 'class' or something like that — so that we don't have to worry about it causing problems for older code; we're familiar with how feature guards work. The steering committee might veto this. That will possibly depend upon how P5P feels about the initial implementation work, and we're trying to take small steps at first.
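Since Object::Pad is the CPAN test bed mentioned here, a small example of what it looks like may help. Note that the keyword has shifted over time (newer releases use field where older ones used has), so treat the exact spelling as version-dependent; the Point class itself is just an illustration.

    use v5.26;
    use Object::Pad;

    class Point {
        # :param  -> may be passed to the constructor
        # :reader -> generates a read accessor
        field $x :param :reader = 0;
        field $y :param :reader = 0;

        method to_string { "($x, $y)" }
    }

    my $p = Point->new( x => 3, y => 4 );
    say $p->to_string;    # (3, 4)
    say $p->x;            # 3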
We're still trying to get to a minimum viable product, but we want this minimum viable product to be far superior to what you can get in most object-oriented programming systems today. That's very important to us. So you can go out to CPAN right now, check out Object::Pad, get a feel for how a lot of that works, and start playing around with it. Give feedback to Paul — he's really going to appreciate it, because it will help us better understand what is actually being done here. But there are some objections to Corinna, and I need to address those so people are aware of them up front. Some people say: bless is good enough for me. Okay, fine. We're not proposing removing bless from the core. If you want to use bless, do it. That's simple. Why not put Moose in the core? Moose pulls in a huge, huge, huge ton of dependencies, far more than the Perl 5 Porters want to maintain, because it adds a huge extra burden and a lot of work, and they don't want to do that. So maybe just Moo, because Moo is pretty simple. Moo has a lot of the wrong affordances, like Moose, but it's faster, it's simpler. But Moo has meta. And the meta method — if you call make_immutable, that's a no-op, but if you call anything else on meta, Moo says, oh, they actually want Moose, and it will inflate a metaclass; it will inflate your object into a proper Moose class. Which means that if we include Moo, we have two choices: either, A, we include Moose and all its dependencies, which P5P doesn't want, or, B, we say that in the new version of Moo, meta doesn't work the same way — therefore it's no longer backwards compatible, and we potentially break the code of everyone already using Moo. Maybe; we're not sure. But I think it's more the bad affordances, like with has, which are the biggest issue we have to worry about — plus the fact that we can't properly distinguish between plain subs and methods. There are a lot of other subtle things in there. Basically, we have the opportunity to build something better. Let us take it, because good enough is not good enough. Some people say: just make it a module. Fine — then we're back to the same problem we have now. Pick one. Which one of those do you want? XKCD really sums it up: there are 14 competing standards; well, we're going to create a new one which encompasses all of these cases; and now we have 15 competing standards. We know, we know. And we'll talk about that. The Perl 7 discussion pushed a lot of this, and I know it's controversial for many people. I'm not going to get into all the background, but I'm going to show you what I had originally thought. In Perl 7, we were going to release Corinna under a feature guard, and in Perl 8 we would remove the feature guard and it would become a permanent part of the language. But why stop there? It turns out there's so much more we can do once we start thinking about the future possibilities of the programming language. For example, if we had a module Some::Module block, then within that lexical scope we could introduce backwards-incompatible syntax changes that we never could before. We could make strict and warnings required there, or just automatic. We could have type signatures. We could have multimethods — type signatures are what make multimethods possible. We could have typed variables inside the language. We don't know what the syntax is going to look like yet. But before we think about that, we need to think about how to get Corinna into the core.
In order to do that, we need to figure out the minimum feature set that gives us something better than what's currently available out there. And because that minimum feature set actually turns out to be so easy to learn, we're going to offer a standard OO which is far more terse, but also far more expressive, than most other OO languages out there. This is a magnificent opportunity for Perl to excel, one that I don't think we should pass on. As I mentioned, we're probably going to start with use feature 'class', or something like it, for the feature guard, and there have been a lot of interesting ideas about it — we need to be careful to get it right. Corinna v2: there's a fair amount of discussion about what's going to be involved there. I don't want to get into that now, because some of those features could land in Corinna v2 and some of them might just become part of the core Perl language itself. We don't know yet. So be aware that there's some discussion there, but we're not there yet. We're keeping it clean, we're keeping it simple. We're actually going to have real methods, and not just subroutines that happen to take an invocant, so you can distinguish them. If you import a sum function into your class, and you also have a sum method, and you call the sum method, Perl will know to call the method and not the function. That's nice, to me. Finally, we're decoupling data, accessors and object construction — we're separating all of that out to make it easier to manage. But we don't yet know exactly what the MVP is going to look like, or how extensible we can make it. So we've got some work to do, but we're getting so much closer, and we just need to convince P5P and the steering committee that this is the way to go. I think we're off to a good start. So thank you very much for taking the time to attend my talk. Again, ovid.allaroundtheworld.fr — please email me if you're interested in hearing more, or join us online in the various online forums that we have. I'm EZified. Thank you very much. I appreciate your time. Bye bye.
|
I plan to bring modern OO to the Perl core. Modern enough that it leapfrogs the capabilities of the OO systems of many other dynamic languages. I've been stealing ideas from Stevan Little, Damian Conway, and anyone else foolish enough to leave their ideas lying around. I have no pride. Sawyer's expressed interest and it's likely it will go into the Perl core, though with the upcoming Perl governance changes, the timeline is unclear. I'm not going to beat around the bush: writing object-oriented code in Perl is a shambolic mess. Some people want to use bless and hand-roll everything, others insist upon using Moo/se, while still others reach for Class::Std, Spiffy, Class::Tiny, some in-house monstrosity their company uses, and so on. You have to relearn it again and again and again. It's time to put this embarrassment aside.
|